Dataset fields: id (string, 2–8 characters), url (string, 31–117 characters), title (string, 1–71 characters), text (string, 153–118k characters), topic (string, 4 classes), section (string, 4–49 characters), sublist (string, 9 classes).
2747182
https://en.wikipedia.org/wiki/Thermodynamic%20state
Thermodynamic state
In thermodynamics, a thermodynamic state of a system is its condition at a specific time; that is, fully identified by values of a suitable set of parameters known as state variables, state parameters or thermodynamic variables. Once such a set of values of thermodynamic variables has been specified for a system, the values of all thermodynamic properties of the system are uniquely determined. Usually, by default, a thermodynamic state is taken to be one of thermodynamic equilibrium. This means that the state is not merely the condition of the system at a specific time, but that the condition is the same, unchanging, over an indefinitely long duration of time. Properties that Define a Thermodynamic State Temperature (T) represents the average kinetic energy of the particles in a system. It's a measure of how hot or cold a system is. Pressure (P) is the force exerted by the particles of a system on a unit area of the container walls. Volume (V) refers to the space occupied by the system. Composition defines the amount of each component present for systems with more than one component (e.g., mixtures). Thermodynamic Path When a system undergoes a change from one state to another, it is said to traverse a path. The path can be described by how the properties change, like isothermal (constant temperature) or isobaric (constant pressure) paths. Thermodynamics sets up an idealized conceptual structure that can be summarized by a formal scheme of definitions and postulates. Thermodynamic states are amongst the fundamental or primitive objects or notions of the scheme, for which their existence is primary and definitive, rather than being derived or constructed from other concepts. A thermodynamic system is not simply a physical system. Rather, in general, infinitely many different alternative physical systems comprise a given thermodynamic system, because in general a physical system has vastly many more microscopic characteristics than are mentioned in a thermodynamic description. A thermodynamic system is a macroscopic object, the microscopic details of which are not explicitly considered in its thermodynamic description. The number of state variables required to specify the thermodynamic state depends on the system, and is not always known in advance of experiment; it is usually found from experimental evidence. The number is always two or more; usually it is not more than some dozen. Though the number of state variables is fixed by experiment, there remains choice of which of them to use for a particular convenient description; a given thermodynamic system may be alternatively identified by several different choices of the set of state variables. The choice is usually made on the basis of the walls and surroundings that are relevant for the thermodynamic processes that are to be considered for the system. For example, if it is intended to consider heat transfer for the system, then a wall of the system should be permeable to heat, and that wall should connect the system to a body, in the surroundings, that has a definite time-invariant temperature. For equilibrium thermodynamics, in a thermodynamic state of a system, its contents are in internal thermodynamic equilibrium, with zero flows of all quantities, both internal and between system and surroundings. For Planck, the primary characteristic of a thermodynamic state of a system that consists of a single phase, in the absence of an externally imposed force field, is spatial homogeneity. 
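As a concrete illustration of the path idea described above, the following minimal sketch (assuming an ideal gas as the equation of state and hypothetical numerical values, neither taken from the article) traces an isothermal path, along which volume and pressure change at fixed temperature, and an isobaric path, along which volume and temperature change at fixed pressure.

```python
# Minimal sketch of two thermodynamic paths, assuming an ideal gas
# (PV = nRT) as the equation of state; the amounts and step values are
# illustrative, not taken from the article.
R = 8.314  # universal gas constant, J/(mol*K)

def pressure(n, V, T):
    """Pressure as a function of the chosen state variables (n, V, T)."""
    return n * R * T / V

n, T0, V0 = 1.0, 300.0, 0.024  # 1 mol of gas near room temperature, ~24 L

# Isothermal path: temperature held constant, volume varies, pressure follows.
isothermal = [(V, T0, pressure(n, V, T0)) for V in (0.024, 0.020, 0.016, 0.012)]

# Isobaric path: pressure held at its initial value, temperature varies.
P0 = pressure(n, V0, T0)
isobaric = [(n * R * T / P0, T, P0) for T in (300.0, 320.0, 340.0, 360.0)]

for V, T, P in isothermal:
    print(f"isothermal: V={V:.3f} m^3  T={T:.0f} K  P={P/1000:.1f} kPa")
for V, T, P in isobaric:
    print(f"isobaric:   V={V:.4f} m^3  T={T:.0f} K  P={P/1000:.1f} kPa")
```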
For non-equilibrium thermodynamics, a suitable set of identifying state variables includes some macroscopic variables, for example a non-zero spatial gradient of temperature, that indicate departure from thermodynamic equilibrium. Such non-equilibrium identifying state variables indicate that some non-zero flow may be occurring within the system or between system and surroundings. State variables and state functions A thermodynamic system can be identified or described in various ways. Most directly, it can be identified by a suitable set of state variables. Less directly, it can be described by a suitable set of quantities that includes state variables and state functions. The primary or original identification of the thermodynamic state of a body of matter is by directly measurable ordinary physical quantities. For some simple purposes, for a given body of given chemical constitution, a sufficient set of such quantities is 'volume and pressure'. Besides the directly measurable ordinary physical variables that originally identify a thermodynamic state of a system, the system is characterized by further quantities called state functions, which are also called state variables, thermodynamic variables, state quantities, or functions of state. They are uniquely determined by the thermodynamic state as it has been identified by the original state variables. There are many such state functions. Examples are internal energy, enthalpy, Helmholtz free energy, Gibbs free energy, thermodynamic temperature, and entropy. For a given body, of a given chemical constitution, when its thermodynamic state has been fully defined by its pressure and volume, then its temperature is uniquely determined. Thermodynamic temperature is a specifically thermodynamic concept, while the original directly measurable state variables are defined by ordinary physical measurements, without reference to thermodynamic concepts; for this reason, it is helpful to regard thermodynamic temperature as a state function. A passage from a given initial thermodynamic state to a given final thermodynamic state of a thermodynamic system is known as a thermodynamic process; usually this involves a transfer of matter or energy between system and surroundings. In any thermodynamic process, whatever may be the intermediate conditions during the passage, the total respective change in the value of each thermodynamic state variable depends only on the initial and final states. For an idealized continuous or quasi-static process, this means that infinitesimal incremental changes in such variables are exact differentials. Together, the incremental changes throughout the process, and the initial and final states, fully determine the idealized process. In the most commonly cited simple example, an ideal gas, the thermodynamic variables would be any three variables out of the following four: amount of substance, pressure, temperature, and volume. Thus, the thermodynamic state would range over a three-dimensional state space. The remaining variable, as well as other quantities such as the internal energy and the entropy, would be expressed as state functions of these three variables. The state functions satisfy certain universal constraints, expressed in the laws of thermodynamics, and they depend on the peculiarities of the materials that compose the concrete system. Various thermodynamic diagrams have been developed to model the transitions between thermodynamic states.
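The ideal-gas example above lends itself to a short illustration. The sketch below is a minimal, hedged example (it assumes a monatomic ideal gas and arbitrary numerical values, neither of which comes from the article): amount of substance, volume and temperature are taken as the chosen state variables, and pressure and internal energy are computed from them as state functions, whose changes depend only on the end states.

```python
# Minimal sketch of state variables vs. state functions for an ideal gas.
# Assumptions not taken from the article: the gas is monatomic, so its
# internal energy is U = (3/2) n R T; amount of substance, volume and
# temperature are the three chosen state variables.
R = 8.314  # J/(mol*K)

def state_functions(n, V, T):
    """Return state functions determined by the state variables (n, V, T)."""
    P = n * R * T / V      # the remaining variable of the four
    U = 1.5 * n * R * T    # internal energy (monatomic ideal gas)
    return {"P": P, "U": U}

initial = state_functions(n=1.0, V=0.024, T=300.0)
final = state_functions(n=1.0, V=0.012, T=360.0)

# A state function's change depends only on the end states, not on the
# path taken between them.
delta_U = final["U"] - initial["U"]
print(f"U_initial={initial['U']:.0f} J  U_final={final['U']:.0f} J  dU={delta_U:.0f} J")
```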
Equilibrium state Physical systems found in nature are practically always dynamic and complex, but in many cases, macroscopic physical systems are amenable to description based on proximity to ideal conditions. One such ideal condition is that of a stable equilibrium state. Such a state is a primitive object of classical or equilibrium thermodynamics, in which it is called a thermodynamic state. Based on many observations, thermodynamics postulates that all systems that are isolated from the external environment will evolve so as to approach unique stable equilibrium states. There are a number of different types of equilibrium, corresponding to different physical variables, and a system reaches thermodynamic equilibrium when the conditions of all the relevant types of equilibrium are simultaneously satisfied. A few different types of equilibrium are listed below. Thermal equilibrium: When the temperature throughout a system is uniform, the system is in thermal equilibrium. Mechanical equilibrium: If at every point within a given system there is no change in pressure with time, and there is no movement of material, the system is in mechanical equilibrium. Phase equilibrium: This occurs when the mass for each individual phase reaches a value that does not change with time. Chemical equilibrium: In chemical equilibrium, the chemical composition of a system has settled and does not change with time.
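The equilibrium conditions listed above can be phrased as simple checks on measured quantities. The sketch below is an illustrative example only, with hypothetical sample values and tolerances; it tests whether temperature is uniform (thermal equilibrium), pressure is uniform and unchanging (mechanical equilibrium), and composition is unchanging (chemical equilibrium).

```python
# Illustrative sketch only: checking the equilibrium conditions above on
# hypothetical sampled measurements (values and tolerances are assumptions).
def is_uniform(values, tol):
    return max(values) - min(values) <= tol

temperatures = [298.15, 298.15, 298.16]          # K, at several points
pressures_now = [101.3, 101.3, 101.3]            # kPa, at one instant
pressures_later = [101.3, 101.3, 101.3]          # kPa, a moment later
composition_now = {"N2": 0.79, "O2": 0.21}       # mole fractions
composition_later = {"N2": 0.79, "O2": 0.21}

thermal = is_uniform(temperatures, tol=0.05)
mechanical = is_uniform(pressures_now, tol=0.01) and pressures_now == pressures_later
chemical = composition_now == composition_later

print(f"thermal={thermal} mechanical={mechanical} chemical={chemical}")
print("thermodynamic equilibrium" if all((thermal, mechanical, chemical))
      else "not in equilibrium")
```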
Physical sciences
Thermodynamics
Physics
2750283
https://en.wikipedia.org/wiki/Anser%20%28bird%29
Anser (bird)
Anser is a waterfowl genus that includes the grey geese and the white geese. It belongs to the true goose and swan subfamily of Anserinae under the family of Anatidae. The genus has a Holarctic distribution, with at least one species breeding in any open, wet habitats in the subarctic and cool temperate regions of the Northern Hemisphere in summer. Some also breed farther south, reaching into warm temperate regions. They mostly migrate south in winter, typically to regions in the temperate zone between the January 0 °C (32 °F) and 5 °C (41 °F) isotherms. The genus contains 11 living species. Description The species of this genus span nearly the whole range of true goose shapes and sizes. The largest are the bean, greylag and swan geese at up to around in weight (with domestic forms far exceeding this), and the smallest are the lesser white-fronted and Ross's geese, which range from about . All have legs and feet that are pink or orange, and bills that are pink, orange, or black. All have white under- and upper-tail coverts, and several have some extent of white on their heads. The neck, body and wings are grey or white, with black or blackish primary—and also often secondary—remiges (pinions). The three species of "white geese" (emperor, snow and Ross's geese) were formerly treated as a separate genus Chen, but are now generally included in Anser. The closely related "black" geese in the genus Branta differ in having black legs, and generally darker body plumage. Systematics, taxonomy and evolution The genus Anser was introduced by the French zoologist Mathurin Jacques Brisson in 1760. The name comes from the Latin word anser, meaning "goose"; Linnaeus had used it in 1758 as the specific epithet of the greylag goose (Anas anser), and that epithet was repeated to become the generic name, with the greylag goose as the type species. Phylogeny The evolutionary relationships between Anser geese have been difficult to resolve because of their rapid radiation during the Pleistocene and frequent hybridization. In 2016 Ottenburghs and colleagues published a study that established the phylogenetic relationships between the species by comparing exonic DNA sequences. Species The genus contains 11 species. The following white geese were formerly separated as the genus Chen; most ornithological works now include Chen within Anser: Snow goose, Anser caerulescens; Ross's goose, Anser rossii; Emperor goose, Anser canagicus – sometimes separated in Philacte. Some authorities also treat some subspecies as distinct species (notably the tundra bean goose) or as likely future species splits (notably the Greenland white-fronted goose). Fossil record Numerous fossil species have been allocated to this genus. As the true geese are near-impossible to assign osteologically to genus, this must be viewed with caution. It can be assumed with limited certainty that European fossils from known inland sites belong to Anser. As species related to the Canada goose have been described from the Late Miocene onwards in North America too, sometimes from the same localities as the presumed grey geese, this casts serious doubt on the correct generic assignment of the supposed North American fossil geese. Heterochen = Anser pratensis seems to differ profoundly from other species of Anser and might be placed into a different genus; alternatively, it might have been a unique example of a grey goose adapted for perching in trees.
†Anser atavus (Middle/Late Miocene of Bavaria, Germany) – sometimes in Cygnus †Anser arenosus Bickart 1990 (Big Sandy Late Miocene of Wickieup, USA) †Anser arizonae Bickart 1990 (Big Sandy Late Miocene of Wickieup, USA) †Anser cygniformis (Late Miocene of Steinheim, Germany) †Anser oeningensis (Meyer 1865) Milne-Edwards 1867b [Anas oeningensis Meyer 1865] (Late Miocene of Oehningen, Switzerland) †Anser thraceiensis Burchak-Abramovich & Nikolov 1984 (Late Miocene/Early Pliocene of Trojanovo, Bulgaria) †Anser pratensis (Short 1970) [Heterochen pratensis Short 1970] (Valentine Early Pliocene of Brown County, USA) †Anser pressus (Brodkorb 1964) [Chen pressa Brodkorb 1964] (Dwarf Snow goose) (Glenns Ferry Late Pliocene of Hagerman, USA) †Anser thompsoni Martin & Mengel 1980 (Pliocene of Nebraska) †Anser azerbaidzhanicus (Early? Pleistocene of Binagady, Azerbaijan) †Anser devjatkini Kuročkin 1971 †Anser eldaricus Burchak-Abramovich & Gadzyev 1978 †Anser tchikoicus Kuročkin 1985 †Anser djuktaiensis Zelenkov & Kurochkin 2014 (Late Pleistocene of Yakutia, Russia) The Maltese swan Cygnus equitum was occasionally placed into Anser, and Anser condoni is a synonym of Cygnus paloregonus. A goose fossil from the Early-Middle Pleistocene of El Salvador is highly similar to Anser. Given its age it is likely to belong to an extant genus, and biogeography indicates Branta as other likely candidate. ?Anser scaldii Beneden 1872 nomen nudum (Late Miocene of Antwerp, Belgium) may be a shelduck. Relationship with humans and conservation status Two species in the genus are of major commercial importance, having been domesticated as poultry: European domesticated geese are derived from the greylag goose, and Chinese and some African domesticated geese are derived from the swan goose. Most species are hunted to a greater or lesser extent; in some areas, some populations are threatened by over-hunting and habitat loss. Although most species are not considered threatened by the IUCN, the lesser white-fronted goose and swan goose are listed as Vulnerable and the emperor goose is near-threatened. Other species have benefited from reductions in hunting since the late 19th and early 20th centuries, with most species in western Europe and North America showing marked increases in response to protection. In some cases, this has led to conflicts with farming, when large flocks of geese graze crops in the winter.
Biology and health sciences
Anseriformes
Animals
2751818
https://en.wikipedia.org/wiki/Thylacosmilus
Thylacosmilus
Thylacosmilus is an extinct genus of saber-toothed metatherian mammals that inhabited South America from the Late Miocene to Pliocene epochs. Though Thylacosmilus looks similar to the "saber-toothed cats", it was not a felid, like the well-known North American Smilodon, but a sparassodont, a group closely related to marsupials, and only superficially resembled other saber-toothed mammals due to convergent evolution. A 2005 study found that the bite forces of Thylacosmilus and Smilodon were low, which indicates the killing-techniques of saber-toothed animals differed from those of extant species. Remains of Thylacosmilus have been found primarily in Catamarca, Entre Ríos, and La Pampa Provinces in northern Argentina. Taxonomy In 1926, the Marshall Field Paleontological Expeditions collected mammal fossils from the Ituzaingó Formation of Corral Quemado, in Catamarca Province, northern Argentina. Three specimens were recognized as representing a new type of marsupial, related to the borhyaenids, and were reported to the Paleontological Society of America in 1928, though without being named. In 1933, the American paleontologist Elmer S. Riggs named and preliminarily described the new genus Thylacosmilus based on these specimens, while noting that a full description was being prepared and would be published at a later date. He named two new species in the genus, T. atrox and T. lentis. The generic name Thylacosmilus means "pouch knife", while the specific name atrox means "cruel". Riggs found the genus distinct enough to warrant a new subfamily within Borhyaenidae, Thylacosmilinae, and stated it was "one of the most unique flesh-eating mammals of all times". The holotype specimen of T. atrox, FMNH P 14531, was collected by Riggs and an assistant. It consists of a skull with the teeth of the right side entirely preserved as well as the left canine found separate in the matrix, fragments of the mandibles, and a partial skeleton consisting of a humerus, a broken radius and broken femora, and foot bones. Missing and scattered parts of the skull and mandible were reconstructed and fitted together. Specimen P 14344 was designated as the paratype of T. atrox, and consists of the skull, the mandible, seven cervical, two dorsal, two lumbar, and two sacral vertebrae, a femur, a tibia, a fibula, and various foot bones. It was one fourth smaller than the holotype, and may have been a young adult. It was collected by the American paleontologist Robert C. Thorne. The holotype of T. lentis, specimen P 14474, is a partial skull with the teeth of the right side preserved, and is about the same size as the T. atrox paratype. It was collected a few miles away from the site of the T. atrox holotype discovery, by the German biologist Rudolf Stahlecker. These specimens were housed at the Field Museum of Natural History in Chicago, while the T. lentis type later became part of the Museum of La Plata collection. In 1934, Riggs fully described the animal, after the fossils had been prepared and compared with other mammals from the same formation and better known borhyaenids from the Santa Cruz Formation. More fragmentary Thylacosmilus specimens have since been discovered. Riggs and the American paleontologist Bryan Patterson reported in 1939 that a canine (MLP 31-XI-12-4) tentatively assigned to Achlysictis or Stylocynus by the Argentinian paleontologist Lucas Kraglievich in 1934 belonged to Thylacosmilus. A partial right ramus and front half of a skull (MLP 65_VI 1-29-41.) was collected in 1965. 
In a 1972 thesis, the Argentinian paleontologist Jorge Zetti suggested that T. atrox and T. lentis represented a single species, and the American paleontologist Larry G. Marshall agreed in 1976, stating the features distinguishing the two were of dubious taxonomic value, and probably due to differences in age and sex. He also found it hard to explain how two sympatric species (related species that lived in the same area at the same time) would be virtually identical in their specializations. Marshall also suggested Hyaenodonops could be congeneric, though it was impossible to determine from the available specimens. Evolution Though Thylacosmilus is one of several predatory mammal genera typically called "saber-toothed cats", it was not a placental felid, but a sparassodont, a group closely related to marsupials, and only superficially resembled other saber-toothed mammals due to convergent evolution. The term "saber-tooth" refers to an ecomorph consisting of various groups of extinct predatory synapsids (mammals and close relatives), which convergently evolved extremely long maxillary canines, as well as adaptations to the skull and skeleton related to their use. This includes members of Gorgonopsia, Thylacosmilidae, Machaeroidinae, Nimravidae, Barbourofelidae, and Machairodontinae. The cladogram below shows the position of Thylacosmilus within Sparassodonta, according to Suarez and colleagues, 2015. Description Body mass for sparassodonts is difficult to estimate, since these animals have relatively large heads in proportion to their bodies, leading to overestimations, particularly when compared with skulls of modern members of Carnivora, which have different locomotor and functional adaptations, or with those of the recent predatory marsupials, which do not exceed of body mass. Recent methods, like Ercoli and Prevosti's (2011) linear regressions on postcranial elements that directly support the body's weight (such as tibiae, humeri and ulnae), comparing Thylacosmilus to both extinct and modern carnivorans and metatherians, suggest that it weighed between , with one estimate suggesting up to , about the same size as a modern jaguar. The differences in weight estimations may be due to the individual size variation of the specimens studied in each analysis, as well as the different samples and methods used. In any case, the weight estimations are consistent for terrestrial species that are generalists or have some degree of cursoriality. A weight in this range would make Thylacosmilus one of the largest known carnivorous metatherians. Skull Thylacosmilus had large, saber-like canines. The roots of these canines grew throughout the animal's life, growing in an arc up the maxilla and above the orbits. Thylacosmilus teeth are in many respects even more specialized than the teeth of other sabertoothed predators. In these animals the predatory function of the "sabres" gave rise to a specialization of the general dentition, in which some teeth were reduced or lost. In Thylacosmilus the canines are relatively longer and more slender, relatively triangular in cross-section, in contrast with the oval shape of carnivorans' saber-like canines. The function of these large canines was once thought to have eliminated the need for functional incisors, while carnivorans like Smilodon and Barbourofelis still have a full set of incisors. 
However, evidence in the form of wear facets on the internal sides of the lower canines of Thylacosmilus indicates that the animal did indeed have incisors, though these remain unknown due to poor fossilization and the fact that no specimen thus far has been preserved with its premaxilla intact. In Thylacosmilus there is also evidence of the reduction of postcanine teeth, which developed only a tearing cusp, as a continuation of the general trend observed in other sparassodonts, which lost many of the grinding surfaces in the premolars and molars. The canines were hypsodont and more firmly anchored in the skull, with more than half of the tooth contained within the alveoli, which were extended over the braincase. They were protected by the large symphyseal flange and they were powered by the highly developed musculature of the neck, which allowed forceful downward and backward movements of the head. The canines had only a thin layer of enamel, just 0.25 mm in its maximum depth at the lateral facets, this depth being consistent down the length of the teeth. The teeth had open roots and grew constantly, which eroded the abrasion marks that are present on the surface of the enamel of other sabertooths, such as Smilodon. The sharp serrations of the canines were maintained by wear against the lower canines, a process known as thegosis. The convex upper portion of the maxilla is ornamented with extensive furrows and pits. This texturing has been correlated with an extensive network of blood vessels, which may suggest that the upper maxilla was covered by some form of soft tissue, which has tentatively been hypothesized to be a "horn covering" (keratinous structure). Postcranial skeleton Although the postcranial remains of Thylacosmilus are incomplete, the elements recovered so far allow the examination of characteristics that this animal acquired in convergence with the sabertooth felids. Its cervical vertebrae were very strong and to some extent resembled the vertebrae of Machairodontinae; the cervical vertebrae also have well-developed neural apophyses, along with ventral apophyses on some cervicals, a feature characteristic of other borhyaenoids. The lumbar vertebrae are short and more rigid than in Prothylacynus. The bones of the limbs, like the humerus and femur, are very robust, since they probably had to deal with larger forces than those in modern felids. In particular, the features of the humerus suggest a great development of the pectoral and deltoid muscles, required not only to capture its prey but also to absorb the energy of the impact of colliding with it. The features of the hindlimb, with a robust femur equipped with a greater trochanter in the lower part, a short tibia and plantigrade feet, show that this animal was not a runner and probably stalked its prey. The hindlimbs also allowed a certain mobility of the hip, and possibly the ability to stand on its hindlimbs alone, like Prothylacynus and Borhyaena. Unlike those of felids, barbourofelids and nimravids, the claws of Thylacosmilus were not retractable. Palaeobiology Diet and feeding Recent comparative biomechanical analyses have estimated the bite force of T. atrox, starting from maximum gape, at , much weaker than that of a leopard, suggesting its jaw muscles had an insignificant role in the dispatch of prey. 
Its skull was similar to that of Smilodon in that it was much better adapted to withstand loads applied by the neck musculature, which, along with evidence for powerful and flexible forelimb musculature and other skeletal adaptations for stability, supports the hypothesis that its killing method consisted of immobilization of its prey followed by precisely directed, deep bites into the soft tissue driven by powerful neck muscles. It has been suggested that its specialized predatory lifestyle could be linked to more extensive parental care than in modern marsupial predators, because the killing technique could only be used by adult individuals with fully developed dental anatomy and grasping abilities; young individuals may have required some time to learn the necessary skills, although there is no clear evidence of this in Thylacosmilus fossils, and this kind of cooperative behavior is unknown in modern marsupials. In 1988 Juan C. Quiroga published a study on the cerebral cortex of two proterotherids and Thylacosmilus. The study examines endocranial casts of two Thylacosmilus specimens: MLP 35-X-41-1 (from the Montehermosan age in Catamarca Province), which represents a natural cast of the left half of the cranial cavity lacking the anterior part of the olfactory bulbs and the brain hemispheres; and MMP 1443 (from the Chapadmalalan age in Buenos Aires Province), which is a complete, artificial cast that shows some ventral displacement but preserves the anterior right part of the brain hemisphere and olfactory bulb. Quiroga's analysis showed that the somatic area of Thylacosmilus represented 27% of the entire cortex, with the visual area representing 18% and the auditory area 7%. The paleocortex was more than 8%. The sulci of the cortex are relatively complex and similar in pattern and number to those of modern diprotodont marsupials. Compared with Macropus and Trichosurus, Thylacosmilus had less development of the maxillary area with respect to the mandibular area, and the rhinal fissure is taller than in Macropus and Thylacinus. This disproportion between the maxillary and mandibular areas, which are roughly similar in marsupials, seems to be a consequence of the extreme development of the neck and mandibular musculature, used in the functioning of the osteodentary anatomy of this animal. However, the area dedicated to the oral-mandible region comprises 42% of the somatic area. The comparison between the endocranial casts of Thylacosmilus and a proterotherid specimen (possibly a species coevolving with Thylacosmilus and a potential prey item) indicates that Thylacosmilus had only half of the encephalization and a quarter of the cortical area; however, it had a larger somatic area, similar visual areas and a smaller auditory area, which suggests different sensorimotor qualities in the two animals. The analysis published by Christine Argot in 2002 on the evolution of predatory borhyaenoids suggests that Thylacosmilus was a specialized form with limited stereoscopic vision and small eyes, with an overlap of 50–60°, very low compared with modern predators; but the large, ossified auditory bulla and the muscular body indicate that it could have been an ambush predator in open and relatively dry environments, where sound absorption is lower than in more humid areas, and acute hearing could compensate for the limited vision. Argot suggested that Thylacosmilus may have been a nocturnal hunter, like modern lions. Studies published in 2023 by Gaillard et al. 
suggest that despite the unique placement and divergences of the eyes, Thylacosmilus was still granted some stereoscopic visual capability as a result of the frontation and verticality of its eye orbits, with this adaptation being a trade-off as a result of the unique morphology of its teeth, which never stopped growing. This study also suggests that Thylacosmilus was largely unimpeded in predatory capability by the reduction in binocular vision created by its hypertrophied canines. A 2005 study by Wroe et al. analysed bite forces using regressions on body mass and the "dry skull" model, in which the jaw is modeled as a lever based on the relations between skull dimensions and jaw muscles, applied to several extinct and extant placental and metatherian predatory mammals. Thylacosmilus atrox had the lowest value in that analysis, only barely surpassed by Smilodon fatalis. The authors concluded that both taxa, with low bite forces and peculiar cranial and postcranial anatomies, had a killing technique for dispatching large-bodied prey without a true analogue among modern taxa. An analysis by Goswami et al. in 2010 tested whether the metatherian mode of reproduction has constrained their cranial morphological evolution. Using landmarks in the skulls of several eutherian and metatherian meat-eating lineages, they compared the ecomorphological convergences in these groups. Metatherian lineages, including specialised forms such as Thylacoleo and Thylacosmilus, showed values in morphospace more similar to caniniforms than to felids, because even the shortening of the skull and the reduction of the postcanine teeth are not as drastic as in felids, despite these forms often being compared to feliform eutherians. The study shows that, in any case, metatherians can be as diverse in cranial morphology as their eutherian counterparts, even producing very extreme forms such as Thylacosmilus itself, and that the metatherian mode of development does not play any significant role in cranial evolution. A 2020 study found several functional disparities between Thylacosmilus's cranial anatomy and that of saber-toothed eutherians that cannot be explained by its metatherian status, such as the lack of a jaw symphysis, subtriangular canines instead of blade-like ones, lack of incisors (that would render feline-like feeding behaviours impossible), weak jaw musculature and unaligned teeth with no evidence of shearing activity, as well as a post-cranial skeleton more akin to that of a bear than a cursorial predator like a cat. This study very tentatively suggests that Thylacosmilus might have been an intestine specialist that slashed open and sucked up the carcass' entrails. A 2021 statistical analysis conversely concluded that Thylacosmilus killed in the same manner as other sabre-tooths, because the premaxillary area, the carnassial region, and the nape of Smilodon, Homotherium, Barbourofelis, and Thylacosmilus are all similarly developed, which they presumed was to, respectively, withstand high bite forces, maximise gape, and strengthen neck-driven head pulling. Thylacosmilus scored closest to Barbourofelis. An isotope ratio study, using stable isotopes of carbon and oxygen from the tooth enamel of several mammals from the Pampean region from the Late Miocene to Late Pleistocene, was published by Domingo et al. in 2020 and indicates that the favoured prey of Thylacosmilus were grazers, mainly notoungulates from open areas. 
This diet seems to coincide with the expansion of vast grasslands of C4 plants in southern South America and with increasing aridity and lower temperatures, in the interval between 11 million and 3 million years ago known as Edad de las Planicies Australes ("Age of the Southern Plains", in Spanish). Motion Various studies have been published on the musculature and motion of Thylacosmilus. The analyses by William Turnbull, published in 1976 and 1978, included a reconstruction of the masticatory muscles of Thylacosmilus, modelling them in plasticine over a cast of the skull and following the muscle scars on the surface of the fossil, then making a rubber model of the musculature and calculating the weight percentage of these muscles compared with recent mammals. He concluded that the jaw-closing muscles of this animal were unusual neither in size nor in form compared with modern carnivorous mammals, and were not as reduced as in the machairodont felids. Turnbull concluded that in Thylacosmilus these masticatory muscles were not involved at all in the use of the sabertooth canines, which depended on the large neck muscles and the flexion of the head to kill prey, in a sense combining the stabbing and slashing techniques of "dirk-toothed" and "scimitar-toothed" sabertooths. The comparative studies of Argot (2004) indicate that the basicranium had rugose crests that served as attachments for the neck flexor muscles, which are associated with increased bite strength. The deltopectoral crest is large, spanning 60% of the length of the humerus, which is correlated with musculature for manipulating heavy prey. This animal lacked an entepicondylar foramen in the humerus, which is correlated with reduced abduction movement of that bone in cursorial ungulates and carnivores (Borhyaena also shows this condition), although this contrasts with its probably powerful adductor muscles. Although the lumbar vertebrae are not completely known, the last two are known, and their vertical neural processes suggest that there was no anticlinal vertebra; the muscles of the back (m. longissimus dorsi) probably acted to stabilize the column and contribute to body propulsion, as occurs in Smilodon, in contrast with the more flexible backs of the closest relatives of these sabertooth taxa. Distribution and habitat Based on studies of its habitat, Thylacosmilus is believed to have hunted in savanna-like or sparsely forested areas, avoiding the more open plains where it would have faced competition with the more successful and aggressive phorusrhacids and the giant vulture-like teratornithid Argentavis. Fossils of Thylacosmilus have been found in the Huayquerian (Late Miocene) Ituzaingó, Epecuén, and Cerro Azul Formations and the Montehermosan (Early Pliocene) Brochero and Monte Hermoso Formations in Argentina. Extinction Although older references have often stated that Thylacosmilus became extinct due to competition with the "more competitive" saber-toothed cat Smilodon during the Great American Interchange, newer studies have shown this is not the case. Thylacosmilus died out during the Pliocene (3.6 to 2.58 Ma), whereas saber-toothed cats are not known from South America until the Middle Pleistocene (781,000 to 126,000 years ago). As a result, the last appearance of Thylacosmilus is separated from the first appearance of Smilodon by over one and a half million years.
Biology and health sciences
Marsupials
Animals
1978507
https://en.wikipedia.org/wiki/Carpenter%20ant
Carpenter ant
Carpenter ants (Camponotus spp.) are large ants (workers ) indigenous to many forested parts of the world. They build nests inside wood, consisting of galleries chewed out with their mandibles or jaws, preferably in dead, damp wood. However, unlike termites, they do not consume wood, but instead discard a material that resembles sawdust outside their nest. Sometimes, carpenter ants hollow out sections of trees. They also commonly infest wooden buildings and structures, causing a widespread problem: they are a major cause of structural damage. Nevertheless, their ability to excavate wood helps in forest decomposition. The genus includes over 1,000 species. They also farm aphids. In their farming, the ants protect the aphids from predators (usually other insects) while they excrete a sugary fluid called honeydew, which the ants get by stroking the aphids with their antennae. Description Carpenter ants are generally large ants: workers are 4–7 mm long in small species and 7–13 mm in large species, queens are 9–20 mm long and males are 5–13 mm long. The bases of the antennae are separated from the clypeal border by a distance of at least the antennal scape's maximum diameter. The mesosoma in profile usually forms a continuous curve from the pronotum through to the propodeum. Habitat Carpenter ant species reside both outdoors and indoors in moist, decaying, or hollow wood, most commonly in forest environments. They cut "galleries" into the wood grain to provide passageways to allow for movement between different sections of the nest. Certain parts of a house, such as around and under windows, roof eaves, decks and porches, are more likely to be infested by carpenter ants because these areas are most vulnerable to moisture. Carpenter ants have been known to construct extensive underground tunneling systems. These systems often end at some food source – often aphid colonies, where the ants extract and feed on honeydew. These tunneling systems also often exist in trees. The colonies typically include a central "parent" colony surrounded and supplemented by smaller satellite colonies. Food Carpenter ants are considered both predators and scavengers. These ants are foragers that typically eat parts of other dead insects or substances derived from other insects. Common foods for them include insect parts, "honeydew" produced by aphids, and extrafloral nectar from plants. They are also known for eating other sugary liquids such as honey, syrup, or juices. Carpenter ants can increase the survivability of aphids when they tend them. Most species of carpenter ants forage at night. When foraging, they usually collect and consume dead insects. Some species less commonly collect live insects. When they discover a dead insect, workers surround it and extract its body fluids to be carried back to the nest. The remaining chitin-based shell is left behind. Occasionally, the ants bring the chitinous head of the insect back to the nest, where they also extract its inner tissue. The ants can forage individually or in small or large groups, though they often opt to do so individually. Different colonies in close proximity may have overlapping foraging regions, although they typically do not assist each other in foraging. Their main food sources normally include proteins and carbohydrates. Instances of carpenter ants bleeding Chinese elm trees for the sap have been observed in northern Arizona. These instances may be rare, as the colonies vastly exceeded the typical size of carpenter ant colonies elsewhere. 
When workers find food sources, they communicate this information to the rest of the nest. They use biochemical pheromones to mark the shortest path that can be taken from the nest to the source. When a sizable number of workers follows this trail, the strength of the cue increases and a foraging trail is established. This ends when the food source is depleted. The workers will then feed the queen and the larvae by consuming the food they have found, and regurgitating the food at the nest. Foraging trails can be either under or above ground. Although carpenter ants do not tend to be extremely aggressive, they have developed mechanisms to maximize what they take from a food source when that same food source is also visited by competing organisms. This is accomplished in different ways. Sometimes they colonize an area near a relatively static food supply. More often, they develop a systemic way to visit the food source, with alternating trips by different individual ants or groups. This allows them to decrease the gains of intruders because the intruders tend to visit in a scattered, random, and unorganized manner. The ants, however, visit the sources systematically so that they reduce the average crop remaining. They tend to visit more resource-dense food areas in an attempt to minimize resource availability for others. That is, the more systematic the foraging behavior of the ants, the more random that of its competitors. Contrary to popular belief, carpenter ants do not actually eat wood, as they are unable to digest cellulose. They only create tunnels and nests within it. Some carpenter ant species can obtain nitrogen by feeding on urine or urine-stained sand. This may be beneficial in nitrogen-limited environments. Symbionts All ants in this genus, and some related genera, possess an obligate bacterial endosymbiont called Blochmannia. This bacterium has a small genome, and retains genes to biosynthesize essential amino acids and other nutrients. This suggests the bacterium plays a role in ant nutrition. Many Camponotus species are also infected with Wolbachia, another endosymbiont that is widespread across insect groups. Wolbachia is associated with the nurse cells in the queen's ovaries in the species Camponotus textor, which results in the worker larva being infected. Behavior and ecology Nesting Carpenter ants work to build the nests that house eggs in environments with usually high humidity due to their sensitivity to environmental humidity. These nests are called primary nests. Satellite nests are constructed once the primary nest is established and has begun to mature. Residents of satellite nests include older larvae, pupae, and some winged individuals, such as male ants (drones), or future queen ants. Only eggs, the newly hatched larvae, workers, and the queen reside in the primary nests. As satellite nests do not have environmentally sensitive eggs, the ants can construct them in rather diverse locations that can actually be relatively dry. Some species, like Camponotus vagus, build the nest in a dry place, usually in wood. Nuptial flight When conditions are warm and humid, winged males and females participate in a nuptial flight. They emerge from their satellite nests and females mate with a number of males while in flight. The males die after mating. These newly fertilized queens discard their wings and search for new areas to establish primary nests. The queens build new nests and deposit around 20 eggs, nurturing them as they grow until worker ants emerge. 
The worker ants eventually assist her in caring for the brood as she lays more eggs. After a few years, reproductive winged ants are born, allowing new colonies to be founded. Again, satellite nests will be established and the process will repeat itself. Relatedness Relatedness is the probability that a gene in one individual is an identical copy, by descent, of a gene in another individual. It is essentially a measure of how closely related two individuals are with respect to a gene. It is quantified by the coefficient of relatedness, which is a number between zero and one. The larger the value, the more closely two individuals are related. Carpenter ants are social hymenopteran insects, and in their haplodiploid genetic system relatedness is asymmetric: females are more closely related to their sisters than they are to their own offspring. Between full sisters, the coefficient of relatedness is r = 0.75 (a consequence of haplodiploidy). Between parent and offspring, the coefficient of relatedness is r = 0.5, because in meiosis a given gene has a 50% chance of being passed on to the offspring (see the worked sketch after this article's text). Genetic diversity Eusocial insects tend to present low genetic diversity within colonies, which can increase with the co-occurrence of multiple queens (polygyny) or with multiple mating by a single queen (polyandry). Distinct reproductive strategies may generate similar patterns of genetic diversity in ants. Kin recognition According to Hamilton's rule for relatedness, for relative-specific interactions such as kin altruism to occur, a high level of relatedness is necessary between two individuals. Carpenter ants, like many social insect species, have mechanisms by which individuals determine whether others are nestmates or not. These mechanisms are useful because they explain the presence or absence of altruistic behavior between individuals. They also act as evolutionary strategies to help prevent incest and promote kin selection. Social carpenter ants recognize their kin in many ways. These methods of recognition are largely chemical in nature, and include environmental odors, pheromones, "transferable labels", and labels from the queen that are distributed to and among nest members. Odors are useful because they have a chemical basis for emission and recognition, and many ants can detect such changes in their environment through their antennae. The process of recognition for carpenter ants requires two events. First, a cue must be present on a "donor animal". These cues are called "labels". Next, the receiving animal must be able to recognize and process the cue. In order for an individual carpenter ant to be recognized as a nestmate, it must, as an adult, go through specific interactions with older members of the nest. This process is also necessary in order for the ant to recognize and distinguish other individuals. If these interactions do not occur at the beginning of adult life, the ant will be unable to be distinguished as a nestmate and unable to distinguish nestmates. Kin altruism Recognition allows for the presence of kin-specific interactions, such as kin altruism. Altruistic individuals increase other individuals' fitness at the expense of their own. Carpenter ants perform altruistic actions toward their nestmates so that their shared genes are propagated more readily or more often. In many social insect species like these ants, many worker animals are sterile and do not have the ability to reproduce. 
As a result, they forgo reproduction to donate energy and help the fertile individuals reproduce. Pheromones As in most other social insect species, individual interaction is heavily influenced by the queen. The queen can influence individuals with odors called pheromones, which can have different effects. Some pheromones have been known to calm workers, while others have been known to excite them. Pheromonal cues from ovipositing queens have a stronger effect on worker ants than those of virgin queens. Social immunity In many social insect species, social behavior can increase the disease resistance of animals. This phenomenon, called social immunity, exists in carpenter ants. It is mediated through the feeding of other individuals by regurgitation. The regurgitate can have antimicrobial activity, which would be spread amongst members of the colony. Some proteases with antimicrobial activity have been found to exist in regurgitated material. Communal sharing of immune response capability is likely to play a large role in colonial maintenance during highly pathogenic periods. Polygyny Polygyny often is associated with many social insect species, and usually is characterized by limited mating flights, small queen size, and other characteristics. However, carpenter ants have "extensive" mating flights and relatively large queens, distinguishing them from polygynous species. Carpenter ants are described as oligogynous because they have a number of fertile queens which are intolerant of each other and must therefore spread to different areas of the nest. Some aggressive interactions have been known to take place between queens, but not necessarily through workers. Queens become aggressive mainly to other queens if they trespass on a marked territory. Queens in a given colony can work together in brood care and the workers tend to experience higher rates of survival in colonies with multiple queens. Some researchers still subscribe to the notion that carpenter ant colonies are only monogynous. Exploding ants In at least nine Southeast Asian species of the cylindricus complex, including Colobopsis saundersi, workers feature greatly enlarged mandibular glands that run the entire length of the ant's body. They can release their contents suicidally by performing autothysis, thereby rupturing the ant's body and spraying a toxic substance from the head, which gives these species the common name "exploding ants". The enlarged mandibular gland, which is many times the size of that of a normal ant, produces a glue. The glue bursts out and entangles and immobilizes all nearby victims. However, all exploding ant species have been moved to Colobopsis based on recent taxonomy. The termite species Globitermes sulphureus has a similar defensive system. Subgenera Camponotus currently has 43 subgenera. 
Camponotus Mayr, 1861 Dendromyrmex Emery, 1895 Forelophilus Kutter, 1931 Hypercolobopsis Emery, 1920 Karavaievia Emery, 1925 Manniella Wheeler, W.M., 1921 Mayria Forel, 1878 Myrmacrhaphe Santschi, 1926 Myrmamblys Forel, 1912 Myrmaphaenus Emery, 1920 Myrmentoma Forel, 1912 Myrmepomis Forel, 1912 Myrmespera Santschi, 1926 Myrmeurynota Forel, 1912 Myrmisolepis Santschi, 1921 Myrmobrachys Forel, 1912 Myrmocladoecus Wheeler, W.M., 1921 Myrmodirhachis Emery, 1925 Myrmomalis Forel, 1914 Myrmonesites Emery, 1920 Myrmopalpella Stärcke, 1934 Myrmopelta Santschi, 1921 Myrmophyma Forel, 1912 Myrmopiromis Wheeler, W.M., 1921 Myrmoplatypus Santschi, 1921 Myrmoplatys Forel, 1916 Myrmopsamma Forel, 1914 Myrmosaga Forel, 1912 Myrmosaulus Wheeler, W.M., 1921 Myrmosericus Forel, 1912 Myrmosphincta Forel, 1912 Myrmostenus Emery, 1920 Myrmotarsus Forel, 1912 Myrmothrix Forel, 1912 Myrmotrema Forel, 1912 Myrmoxygenys Emery, 1925 Orthonotomyrmex Ashmead, 1906 Paramyrmamblys Santschi, 1926 Phasmomyrmex Stitz, 1910 Pseudocolobopsis Emery, 1920 Rhinomyrmex Forel, 1886 Tanaemyrmex Ashmead, 1905 Thlipsepinotus Santschi, 1928 Selected species Relationship with humans As pests Carpenter ants can damage wood used in the construction of buildings. They can leave behind a sawdust-like material called frass that provides clues to their nesting location. Carpenter ant galleries are smooth and very different from termite-damaged areas, which have mud packed into the hollowed-out areas. Carpenter ants can be identified by the general presence of one upward protruding node, looking like a spike, at the "waist" attachment between the thorax and abdomen (petiole). Control involves application of insecticides in various forms including dusts and liquids. The dusts are injected directly into galleries and voids where the carpenter ants are living. The liquids are applied in areas where foraging ants are likely to pick the material up and spread the poison to the colony upon returning. As food Carpenter ants and their larvae are eaten in various parts of the world. In Australia, the Honeypot ant (Camponotus inflatus) is regularly eaten raw by Indigenous Australians. It is a particular favourite source of sugar for Australian Aborigines living in arid regions, partially digging up their nests instead of digging them up entirely, in order to preserve this food source. The honey also has antimicrobial properties which the aboriginal population use to their advantage to cure colds. In North America, lumbermen during the early years in Maine would eat carpenter ants to prevent scurvy, and in John Muir's publication, First Summer in the Sierra, Muir notes that the Northern Paiute people of California ate the tickling, acid gasters of the large jet-black carpenter ants. In Africa, carpenter ants are among a vast number of species that are consumed by the San people.
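Returning to the relatedness coefficients discussed in the Relatedness section above, the following minimal sketch works through the standard haplodiploidy averaging argument (a textbook calculation, not one taken from this article): full sisters share r = 0.75 because the haploid father transmits an identical genome copy to every daughter, while a parent and its offspring share r = 0.5.

```python
# Worked sketch of the relatedness coefficients from the "Relatedness"
# section above, under haplodiploidy (haploid males, diploid females).
# This is the standard textbook averaging argument, not a calculation
# taken from the article.

def full_sister_relatedness():
    # The haploid father gives every daughter an identical genome copy,
    # so sisters always share their paternal halves.
    paternal_share = 1.0
    # Each daughter receives one of the mother's two gene copies at random,
    # so sisters share a given maternal gene half of the time.
    maternal_share = 0.5
    # Each parent contributes half of a daughter's genome.
    return 0.5 * paternal_share + 0.5 * maternal_share

def parent_offspring_relatedness():
    # A diploid mother passes each of her genes on with probability 1/2.
    return 0.5

print(f"full sisters:      r = {full_sister_relatedness():.2f}")      # 0.75
print(f"parent-offspring:  r = {parent_offspring_relatedness():.2f}")  # 0.50
```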
Biology and health sciences
Hymenoptera
null
1979078
https://en.wikipedia.org/wiki/Color%20model
Color model
In color science, a color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), taking account of visual perception, the resulting set of colors is called a "color space". This article describes ways in which human color vision can be modeled, and discusses some of the models in common use. Tristimulus color space A tristimulus color space can be pictured as a region in three-dimensional Euclidean space if one identifies the x, y, and z axes with the stimuli for the long-wavelength (L), medium-wavelength (M), and short-wavelength (S) light receptors. The origin, (S,M,L) = (0,0,0), corresponds to black. White has no definite position in this diagram; rather it is defined according to the color temperature or white balance as desired or as available from ambient lighting. The human color space is a horse-shoe-shaped cone (compare the CIE chromaticity diagram), extending from the origin to, in principle, infinity. In practice, the human color receptors will be saturated or even be damaged at extremely high light intensities, but such behavior is not part of the CIE color space and neither is the changing color perception at low light levels (see: Kruithof curve). The most saturated colors are located at the outer rim of the region, with brighter colors farther removed from the origin. As far as the responses of the receptors in the eye are concerned, there is no such thing as "brown" or "gray" light. These color names refer to orange and white light respectively, with an intensity that is lower than the light from surrounding areas. One can observe this by watching the screen of an overhead projector during a meeting: one sees black lettering on a white background, even though the "black" lettering is in fact no darker than the white screen was before the projector was turned on. The "black" areas have not actually become darker; they merely appear "black" relative to the higher-intensity "white" projected onto the screen around them.
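The tristimulus description above can be made concrete with a small sketch. The tuples and the scaling below are illustrative assumptions rather than calibrated LMS or CIE values; the point is only that a color is a 3-tuple of stimulus values, black sits at the origin, and "gray" is simply the chosen white point at a lower intensity than its surroundings.

```python
# Illustrative sketch of the tristimulus idea: a color as a 3-tuple of
# stimulus values. The tuples and scaling are assumptions for illustration,
# not calibrated LMS or CIE data.
from typing import Tuple

Color = Tuple[float, float, float]  # (S, M, L) stimulus values

BLACK: Color = (0.0, 0.0, 0.0)  # the origin of the tristimulus space
WHITE: Color = (1.0, 1.0, 1.0)  # whatever the white balance designates as white

def scale(color: Color, factor: float) -> Color:
    """Scale intensity; the result lies on the same ray through the origin,
    so the chromaticity is unchanged."""
    s, m, l = color
    return (factor * s, factor * m, factor * l)

# "Gray" is not a distinct stimulus: it is the white point at a lower
# intensity than the light from the surrounding area.
gray = scale(WHITE, 0.3)
print(f"white={WHITE}  'gray'={gray}")
```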
Physical sciences
Basics
Physics
1979466
https://en.wikipedia.org/wiki/Taxus%20baccata
Taxus baccata
(Infobox: Taxus baccata L.; IUCN conservation status Least Concern; image caption: shoot with immature cones and a mature cone bearing an aril; range map: native range in green, naturalised in ochre.) Taxus baccata is a species of evergreen tree in the family Taxaceae, native to Western Europe, Central Europe and Southern Europe, as well as Northwest Africa, northern Iran, and Southwest Asia. It is the tree originally known as yew, though with other related trees becoming known, it may be referred to as common yew, European yew, or in North America English yew. It is a woodland tree in its native range, and is also grown as an ornamental tree, hedge or topiary. The plant is poisonous, with toxins that can be absorbed through inhalation, ingestion, and transpiration through the skin. Consuming any part of the tree, excluding the aril, can be deadly and the consumption of even a small amount of the foliage can result in death. Taxonomy and naming The word yew is from Old English īw, ēow, ultimately from Proto-Indo-European *h₁eyHw-. It possibly entered the Germanic languages through a Celtic language (compare the rune name Eihwaz); see Gaulish *ivos, Breton ivin, Irish ēo, Welsh yw (ywen) and French if. In German it is known as Eibe. Baccata is Latin for 'bearing berries'. The word yew as it was originally used seems to refer to the colour brown. The yew (μίλος) was known to Theophrastus, who noted its preference for mountain coolness and shade, its evergreen character and its slow growth. Most Romance languages, with the notable exception of French (if), kept a version of the Latin word taxus (Italian tasso, Corsican tassu, Occitan teis, Catalan teix, Gasconic tech, Spanish tejo, Asturian texu, Portuguese teixo, Galician teixo and Romanian tisă) from the same root as toxic. In Slavic languages, the same root is preserved: Polish cis, Ukrainian, Slovakian and Russian tis (тис), Serbian-Croatian-Bosnian-Montenegrin tisa/тиса, tis, Slovenian tisa. Albanian borrowed it as tis. In Iran, the tree is known as sorkhdār (literally "the red tree"). Common yew was one of the many species first described by Linnaeus. It is one of around 30 conifer species in seven genera in the family Taxaceae, which is placed in the order Pinales. Description Yews are small to medium-sized evergreen trees, growing (exceptionally up to ) tall, with a trunk up to (exceptionally ) in diameter. The bark is thin, scaly brown, and comes off in small flakes aligned with the stem. The leaves are flat, dark green, long, broad, and arranged spirally on the stem, but with the leaf bases twisted to align the leaves in two flat rows on either side of the stem, except on erect leading shoots where the spiral arrangement is more obvious. The leaves are poisonous. The seed cones are modified, each cone containing a single seed, which is long, and partly surrounded by a fleshy scale which develops into a soft, bright red berry-like structure called an aril. The aril is long and wide and open at the end. 
The arils mature 6 to 9 months after pollination, and with the seed contained, are eaten by thrushes, waxwings and other birds, which disperse the hard seeds undamaged in their droppings. Maturation of the arils is spread over 2 to 3 months, increasing the chances of successful seed dispersal. The seeds themselves are poisonous and bitter, but are opened and eaten by some bird species, including hawfinches, greenfinches, and great tits. The aril is not poisonous; it is gelatinous and very sweet tasting. The male cones are globose, in diameter, and shed their pollen in early spring. Yews are mostly dioecious, but occasional individuals can be variably monoecious, or change sex with time. Distribution and habitat T. baccata is native to all countries of Europe (except Iceland), the Caucasus, and beyond from Turkey eastwards to northern Iran. Its range extends south to Morocco and Algeria in North Africa. A few populations are also present in the archipelagos of the Azores and Madeira. The limit of its northern Scandinavian distribution is its sensitivity to frost, with global warming predicted to allow its spread inland. It has been introduced elsewhere, including the United States. Aside from its natural habitat, it is also common to find English Yew in gardens because it is very tolerant to pruning, and in cemeteries, as it symbolizes death. T. baccatas richest central European populations are in Swiss yew-beech woodlands, on cool, steep marl slopes up to in elevation in the Jura Mountains and Alpine foothills. In England it grows best in steep slopes of the chalk downs, forming extensive stands invading the grassland outside the beech woods. In more continental climates of Europe it fares better in mixed forests, of both coniferous and mixed broadleaf-conifer composition. Under its evergreen shade, no other plants can grow. T. baccata prefers steep rocky calcareous slopes. It rarely develops beyond saplings on acid soil when under a forest canopy, but is tolerant of soil pH when planted by humans, such as their traditional placement in churchyards and cemeteries, where some of the largest and oldest trees in northwestern Europe are found. It grows well in well-drained soils, tolerating nearly any soil type, typically humus and base-rich soils, but also on rendzina and sand soils given adequate moisture. They can survive temporary flooding and moderate droughts. Roots can penetrate extremely compressed soils, such as on rocky terrain and vertical cliff faces. T. baccata normally appears individually or in small groups within the understory, but also forms stands throughout its range, such as in sheltered calcareous sites. T. baccata is extremely shade-tolerant, with the widest temperature range for photosynthesis among European trees, able to photosynthesize in winter after deciduous trees have shed their leaves. It can grow under partial canopies of beech and other deciduous broad-leafed trees, though it only grows into large trees without such shade. In centuries past T. baccata was exterminated from many woodlands as a poisonous hazard to the cattle and horses that often grazed in the woods. Rabbits and deer however have a level of immunity to the poisonous alkaloids, and the seeds are dispersed by birds, with thrushes greatly enjoying the fruit. It also regenerates readily from stumps and roots, even when ancient and hollow. Longevity Taxus baccata can reach 400 to 600 years of age. Some specimens live longer but the age of yews is often overestimated. 
Ten yews in Britain are believed to predate the 10th century. The potential age of yews is impossible to determine accurately and is subject to much dispute. There is rarely any wood as old as the entire tree, while the boughs themselves often become hollow with age, making ring counts impossible. Evidence based on growth rates and archaeological work of surrounding structures suggests the oldest yews, such as the Fortingall Yew in Perthshire, Scotland, may be in the range of 2,000 years, placing them among the oldest plants in Europe. One characteristic contributing to yews' longevity is that, unlike most other trees, they are able to split under the weight of advanced growth without succumbing to disease in the fracture. Another is their ability to give rise to new epicormic and basal shoots from cut surfaces and low on their trunks, even in old age. Significant trees The Fortingall Yew in Perthshire, Scotland, has one of the largest recorded trunk girths in Britain, reportedly 16-17m in the 18th century, and experts estimate it to be 5,000 years old. The Llangernyw Yew in Clwyd, Wales, can be found at another early saint site and is about 4000–5000 years old according to an investigation led by David Bellamy, who also carbon-dated a yew tree in Tisbury at around 4000 years old. A certificate and memorial board by the tree confirm the tree's age estimate. Other well known yews include the Ankerwycke Yew, the Balderschwang Yew, the Caesarsboom, the Florence Court Yew, and the Borrowdale Fraternal Four, of which poet William Wordsworth wrote. The Kingley Vale National Nature Reserve in West Sussex has one of Europe's largest yew woodlands. The oldest specimen in Spain is located in Bermiego, Asturias. It is known as in the Asturian language. It stands tall with a trunk diameter of and a crown diameter of . It was declared a Natural Monument on April 27, 1995, by the Asturian Government and is protected by the Plan of Natural Resources. A unique forest formed by Taxus baccata and European box (Buxus sempervirens) lies within the city of Sochi, in the Western Caucasus. The oldest Irish Yew (Taxus baccata 'Fastigiata'), the Florence Court Yew, still stands in the grounds of the Florence Court estate in County Fermanagh, Northern Ireland. The Irish Yew has become ubiquitous in cemeteries across the world, and it is believed that all known examples are from cuttings from this tree. Toxicity The entire plant is poisonous, with the exception of the aril (the red flesh of the “berry” covering the seed). Yews contain numerous toxic compounds, including "at least ten alkaloids, nitriles (cyanogenic glycoside esters), ephedrine", and their essential oil, but the most important toxins are taxine alkaloids, cardiotoxic chemical compounds which act via calcium and sodium channel antagonism. If any leaves or seeds of the plant are ingested, urgent medical attention is recommended as well as observation for at least 6 hours after the point of ingestion. The European yew is one of the most toxic species in the genus, along with the Japanese yew, T. cuspidata. Yew poisonings are relatively common in both domestic and wild animals which consume the plant accidentally, resulting in "countless fatalities in livestock". Taxines are also absorbed efficiently via the skin. Taxus species should thus be handled with care and preferably with gloves. "The lethal dose for an adult is reported to be 50 g of yew needles. 
Patients who ingest a lethal dose frequently die due to cardiogenic shock, in spite of resuscitation efforts." There are currently no known antidotes for yew poisoning, but drugs such as atropine have been used to treat the symptoms. Taxine remains in the plant all year, with maximal concentrations appearing during the winter. Dried yew plant material retains its toxicity for several months and even increases its toxicity as the water is removed. Fallen leaves should therefore also be considered toxic. Poisoning usually occurs when leaves of yew trees are eaten, but in at least one case a victim inhaled sawdust from a yew tree. Allergenic potential Male yews are extremely allergenic, blooming and releasing abundant amounts of pollen in the spring, with an OPALS allergy scale rating of 10 out of 10. Completely female yews have an OPALS rating of 1, the lowest possible, trapping pollen while producing none. The pollen, like most species', easily passes through window screens. While yew pollen does not contain sufficient taxine alkaloids to cause poisoning, its allergenic potential has been implicated in adverse reactions to paclitaxel treatment. Place names Words and Morphemes for "yew tree" have resulted in a number of place names. These include the Proto-Celtic ; Old Irish ; Irish , and ; and the Scottish Gaelic , Newry Newry, Northern Ireland is an anglicization of , an oblique form of , which means "the grove of yew trees". The modern Irish name for Newry is (pronounced [ənʲ ˈtʲuːɾˠ]), which means "the yew tree". is a shortening of , "yew tree at the head of the strand", which was formerly the most common Irish name for Newry. This relates to an apocryphal story that Saint Patrick planted a yew tree there in the 5th century. The Irish name Cathair an Iúir (City of Newry) appears on some bilingual signs around the city. The area of Ydre in the South Swedish highlands is interpreted to mean "place of yews". Two localities in particular, Idhult and Idebo, appear to be further associated with yews. York York () is derived from the Brittonic name (Latinised variously as , , or ), a combination of "yew-tree" and a suffix of appurtenance "belonging to-, place of-" (compare Welsh ) meaning "place of the yew trees" ( in Welsh, Old Irish "grove of yew trees, place with one or more yew trees", in Irish Gaelic and in Scottish Gaelic); the city itself is called (Irish) and (Scottish Gaelic), from the Latin ); or alternatively, "the settlement of (a man named) " (a Celtic personal name is mentioned in different documents as , , and and, when combined with the Celtic possessive suffix , could be used to denote his property). The 12th‑century chronicler Geoffrey of Monmouth, in his account of the prehistoric kings of Britain, , suggests the name derives from that of a pre-Roman city founded by the legendary king Ebraucus. The name became the Anglian in the 7th century: a compound of , from the old name, and "a village", probably by conflation of the element with a Germanic root ('boar'); by the 7th century the Old English for 'boar' had become . When the Danish army conquered the city in 866, its name became . The Old French and Norman name of Yorks following the Norman Conquest was recorded as (modern Norman ) in works such as Wace's Roman de Rou. , meanwhile, gradually reduced to York in the centuries after the Conquest, moving from the Middle English in the 14th century through in the 16th century to Yarke in the 17th century. The form York was first recorded in the 13th century. 
Many company and place names, such as the Ebor race meeting, refer to the Latinised Brittonic, Roman name. The Archbishop of York uses Ebor as his surname in his signature. Traditions Historic suicides In the ancient Celtic world, the yew tree (*eburos) had extraordinary importance; a passage by Caesar narrates that Cativolcus, chief of the Eburones, poisoned himself with yew rather than submit to Rome (Gallic Wars 6: 31). Similarly, Florus notes that when the Cantabrians were under siege by the legate Gaius Furnius in 22 BC, most of them took their lives either by sword, fire, or a poison extracted ex arboribus taxeis, that is, from the yew tree (2: 33, 50–51). In a similar way, Orosius notes that when the Astures were besieged at Mons Medullius, they preferred to die by their own swords or by yew poison rather than surrender (6, 21, 1). Religion The yew is traditionally and regularly found in churchyards in England, Wales, Scotland, Ireland, and Northern France (particularly Normandy). Some examples can be found in La Haye-de-Routot or La Lande-Patry. It is said up to 40 people could stand inside one of the La-Haye-de-Routot yew trees, and the Le Ménil-Ciboult yew is probably the largest, with a girth of 13 m. Yews may grow to become exceptionally large (over 5 m diameter) and may live to be over 2,000 years old. Sometimes monks planted yews in the middle of their cloister, as at Muckross Abbey (Ireland) or abbaye de Jumièges (Normandy). Some ancient yew trees are located at St. Mary the Virgin Church, Overton-on-Dee in Wales. In the Septuagint rendering of the Book of Nahum, 1:10, Nineveh and other deemed enemies of the biblical God are foretold to "be laid bare even to its foundation, and…devoured as a twisted yew." In Asturian tradition and culture, the yew tree was considered to be linked with the land, people, ancestors, and ancient religion. It was tradition on All Saints' Day to bring a branch of a yew tree to the tombs of those who had died recently so they would be guided in their return to the Land of Shadows. The yew tree has been found near chapels, churches, and cemeteries since ancient times as a symbol of the transcendence of death. They are often found in the main squares of villages where people celebrated the open councils that served as a way of general assembly to rule village affairs. It has been suggested that the sacred tree at the Temple at Uppsala was an ancient yew tree. The Christian church commonly found it expedient to take over existing pre-Christian sacred sites for churches. It has also been suggested that yews were planted at religious sites as their long life was suggestive of eternity, or because, being toxic when ingested, they were seen as trees of death. Another suggested explanation is that yews were planted to discourage farmers and drovers from letting animals wander onto the burial grounds, the poisonous foliage being the disincentive. A further possible reason is that fronds and branches of yew were often used as a substitute for palms on Palm Sunday. King Edward I of England ordered yew trees planted in churchyards to protect the buildings. Some yews existed before their churches, as preachers held services beneath them when churches were unavailable. Due to the ability of their branches to root and sprout anew after touching the ground, yews became symbols of death, rebirth, and therefore immortality. In interpretations of Norse cosmology, the tree Yggdrasil has traditionally been interpreted as a giant ash tree. 
Some scholars now believe errors were made in past interpretations of the ancient writings, and that the tree is most likely a European yew (Taxus baccata). In the Crann Ogham—the variation on the ancient Irish Ogham alphabet which consists of a list of trees—yew is the last in the main list of 20 trees, primarily symbolizing death. There are stories of people who have committed suicide by ingesting the foliage. As the ancient Celts also believed in the transmigration of the soul, there is in some cases a secondary meaning of the eternal soul that survives death to be reborn in a new form. Uses Yew wood was historically important, finding use in the Middle Ages in items such as musical instruments, furniture, and longbows. The species was felled nearly to extinction in much of Europe. In the modern day it is not considered a commercial crop due to its very slow growth, but it is valued for hedging and topiary. Medical Certain compounds found in the bark of yew trees were discovered in 1967 to have efficacy as anti-cancer agents. The precursors of the chemotherapy drug paclitaxel (taxol) were later shown to be synthesized easily from extracts of the leaves of European yew, which is a much more renewable source than the bark of the Pacific yew (Taxus brevifolia) from which they were initially isolated. This ended a point of conflict in the early 1990s; many environmentalists, including Al Gore, had opposed the destructive harvesting of Pacific yew for paclitaxel cancer treatments. Docetaxel can then be obtained by semi-synthetic conversion from the precursors. Woodworking Wood from the yew is classified as a closed-pore softwood, similar to cedar and pine. Easy to work, yew is among the hardest of the softwoods, yet it possesses a remarkable elasticity, making it ideal for products that require springiness, such as bows. The wood is esteemed for cabinetry and tool handles. The hard, slow-growing wood also finds use in gates, furniture, parquet floors, and paneling. Its typical burls and contorted growth, with intricate multicolored patterns, make it attractive for carving and woodturning, but also make the wood unsuited for construction. It is good firewood and is sometimes burnt as incense. Due to all parts of the yew and its volatile oils being poisonous and cardiotoxic, a mask should be worn if one comes in contact with sawdust from the wood. One of the world's oldest surviving wooden artifacts is a Clactonian yew spear head, found in 1911 at Clacton-on-Sea, in Essex, UK. Known as the Clacton Spear, it is estimated to be over 400,000 years old. Longbows Yew is also associated with Wales and England because of the longbow, an early weapon of war developed in northern Europe, and as the English longbow the basis for a medieval tactical system. The oldest surviving yew longbow was found at Rotten Bottom in Dumfries and Galloway, Scotland. It has been given a calibrated radiocarbon date of 4040 BC to 3640 BC and is on display in the National Museum of Scotland. Yew is the wood of choice for longbow making; the heartwood is always on the inside of the bow with the sapwood on the outside. This makes most efficient use of their properties as heartwood is best in compression whilst sapwood is superior in tension. However, much yew is knotty and twisted, and therefore unsuitable for bowmaking; most trunks do not give good staves and even in a good trunk much wood has to be discarded. 
There was a tradition of planting yew trees in churchyards throughout Britain and Ireland, among other reasons, as a resource for bows, such as at "Ardchattan Priory whose yew trees, according to other accounts, were inspected by Robert the Bruce and cut to make at least some of the longbows used at the Battle of Bannockburn." The trade of yew wood to England for longbows was so robust that it depleted the stocks of good-quality, mature yew over a vast area. The first documented import of yew bowstaves to England was in 1294. In 1423 the Polish king commanded protection of yews in order to cut exports, facing nearly complete destruction of local yew stock. In 1470 compulsory archery practice was renewed, and hazel, ash, and laburnum were specifically allowed for practice bows. Supplies still proved insufficient, until by the Statute of Westminster in 1472, every ship coming to an English port had to bring four bowstaves for every tun. Richard III of England increased this to ten for every tun. This stimulated a vast network of extraction and supply, which formed part of royal monopolies in southern Germany and Austria. In 1483, the price of bowstaves rose from £2 to £8 per hundred (equivalent to £ to £ in ), and in 1510 the Venetians would only sell a hundred for £16 (). In 1507 the Holy Roman Emperor asked the Duke of Bavaria to stop cutting yew, but the trade was profitable, and in 1532 the royal monopoly was granted for the usual quantity "if there are that many." In 1562, the Bavarian government sent a long plea to the Holy Roman Emperor asking him to stop the cutting of yew, and outlining the damage done to the forests by its selective extraction, which broke the canopy and allowed wind to destroy neighbouring trees. In 1568, despite a request from Saxony, no royal monopoly was granted because there was no yew to cut, and the next year Bavaria and Austria similarly failed to produce enough yew to justify a royal monopoly. Forestry records in this area in the 17th century do not mention yew, and it seems that no mature trees were to be had. The English tried to obtain supplies from the Baltic, but at this period bows were being replaced by guns in any case. Musical instruments The late Robert Lundberg, a noted luthier who performed extensive research on historical lute-making methodology, states in his 2002 book Historical Lute Construction that yew was historically a prized wood for lute construction. European legislation establishing use limits and requirements for yew limited supplies available to luthiers, but it was apparently as prized among medieval, renaissance, and baroque lute builders as Brazilian rosewood is among contemporary guitar-makers for its quality of sound and beauty. Horticulture Today European yew is widely used in landscaping and ornamental horticulture. Due to its dense, dark green, mature foliage, and its tolerance of even very severe pruning, it is used especially for formal hedges and topiary. Its relatively slow growth rate means that in such situations it needs to be clipped only once per year (in late summer). European yew will tolerate a wide range of soils and situations, including shallow chalk soils and shade, although in deep shade its foliage may be less dense. However it cannot tolerate waterlogging, and in poorly-draining situations is liable to succumb to the root-rotting pathogen Phytophthora cinnamomi. T. baccata is tolerant of urban pollution, cold, and heat, though soil compaction e.g. by roads can harm it. 
It is slow-growing, taking about 20 years to grow tall, and vertical growth effectively stops after 100 years. With its soft bark, the tree can be killed over time by rubbing such as by climbing children. In Europe, Taxus baccata grows naturally north to Molde in southern Norway, but it is used in gardens further north. It is also popular as a bonsai in many parts of Europe and makes a handsome small- to large-sized bonsai. Well over 200 cultivars of T. baccata have been named. The most popular of these are the Irish yew (T. baccata 'Fastigiata'), a fastigiate cultivar of the European yew selected from two trees found growing in Ireland, and the several cultivars with yellow leaves, collectively known as "golden yew". In some locations, e.g. when hemmed in by buildings or other trees, an Irish yew can reach 20 feet in height without exceeding 2 feet in diameter at its thickest point, although with age many Irish yews assume a fat cigar shape rather than being truly columnar. AGM cultivars The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:- T. baccata T. baccata 'Fastigiata' (Irish yew) T. baccata 'Fastigiata Aureomarginata' (golden Irish yew) T. baccata 'Icicle' T. baccata 'Repandens' T. baccata 'Repens Aurea' T. baccata 'Semperaurea' T. baccata 'Standishii' Privies In England, yew has historically been sometimes associated with privies (outside toilets), possibly because the smell of the plant keeps insects away. Culinary The edible arils, often colloquially referred to as “yew berries” (or traditionally as “snotty gogs” in parts of England), are eaten by some foragers in western countries, although great care must be taken to remove or spit out the toxic seed. Conservation Historically, T. baccata populations were gravely threatened by felling for longbows and destruction to protect livestock from poisoning. It is now endangered in parts of its range due to intensive land use. The species is also harvested to meet pharmaceutical demand for taxanes. Trees are often damaged by browsing and bark stripping. Yew's thin bark makes it vulnerable to fire. Its toxicity protects against many insects, but the yew mite causes significant bud mortality, and seedlings can be killed by fungi. Clippings from ancient specimens in the UK, including the Fortingall Yew, were taken to the Royal Botanic Gardens in Edinburgh to form a mile-long hedge. The purpose of this "Yew Conservation Hedge Project" is to maintain the DNA of Taxus baccata. Another conservation programme was run in Catalonia in the early 2010s by the Forest Sciences Centre of Catalonia (CTFC) in order to protect genetically endemic yew populations and preserve them from overgrazing and forest fires. In the framework of this programme, the 4th International Yew Conference was organised in the Poblet Monastery in 2014. There has also been a conservation programme in northern Portugal and Northern Spain (Cantabrian Range).
Biology and health sciences
Pinophyta (Conifers)
Plants
1980717
https://en.wikipedia.org/wiki/Drizzle
Drizzle
Drizzle is a light precipitation which consists of liquid water drops that are smaller than those of rain – generally smaller than in diameter. Drizzle is normally produced by low stratiform clouds and stratocumulus clouds. Precipitation rates from drizzle are on the order of a millimetre (0.04 in) per day or less at the ground. Owing to the small size of drizzle drops, under many circumstances drizzle largely evaporates before reaching the surface, and so may be undetected by observers on the ground. The METAR code for drizzle is DZ and for freezing drizzle is FZDZ. Effects While most drizzle has only a minor immediate impact upon humans, freezing drizzle can lead to treacherous conditions. Freezing drizzle occurs when supercooled drizzle drops land on a surface whose temperature is below freezing. These drops immediately freeze upon impact, leading to the buildup of sheet ice (sometimes called black ice) on the surface of roads. Occurrence Drizzle tends to be the most frequent form of precipitation over large areas of the world's oceans, particularly in the colder regions of the subtropics. These regions are dominated by shallow marine stratocumulus and trade wind cumulus clouds, which exist entirely within the marine boundary layer. Despite the low rates of surface accumulation, it has become apparent that drizzle exerts a major influence over the structure, coverage, and radiative properties of clouds in these regions. This has motivated scientists to design more sophisticated and sensitive instruments such as high-frequency radars which can detect drizzle. These studies have shown that the quantity of drizzle is strongly linked to cloud morphology and tends to be associated with updrafts within the marine boundary layer. Increased amounts of drizzle tend to be found in marine clouds that form in clean air masses that have low concentrations of cloud droplets. This interconnection between clouds and drizzle can be explored using high-resolution numerical modelling such as large eddy simulation. Influence of aerosols It has been hypothesized by a group of atmospheric scientists at Texas A&M University that particulates in the atmosphere caused by human activities may suppress drizzle. According to this hypothesis, because drizzle can be an effective means of removing moisture from a cloud, its suppression could help to increase the thickness, coverage, and longevity of marine stratocumulus clouds. This would lead to increased cloud albedo on a regional to global scale, and a cooling effect on the atmosphere. Estimates using complex global climate models suggest that this effect may be partially masking the effects of greenhouse gas increases on global surface temperature. However, it is not clear that the representation of the chemical and physical processes needed to accurately simulate the interaction between aerosols, clouds, and drizzle in current () climate models is sufficient to fully understand the global impacts of changes in particulates.
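The METAR abbreviations given above (DZ for drizzle, FZDZ for freezing drizzle) are weather-group codes that can be picked out of a raw report with a simple pattern match. The following sketch is illustrative only: the example report string is invented, and the handling of the - and + intensity prefixes follows the usual METAR convention rather than any particular library's parser.

```python
import re

# Minimal sketch: detect drizzle groups in a raw METAR report.
# The intensity prefixes (- light, + heavy) and the FZ (freezing) qualifier
# follow standard METAR weather-group conventions; the report below is invented.
DRIZZLE_GROUP = re.compile(r"(?P<intensity>[-+]?)(?P<freezing>FZ)?DZ\b")

def describe_drizzle(metar: str) -> list[str]:
    """Return human-readable descriptions of any drizzle groups in a METAR string."""
    names = {"-": "light", "": "moderate", "+": "heavy"}
    out = []
    for m in DRIZZLE_GROUP.finditer(metar):
        kind = "freezing drizzle" if m.group("freezing") else "drizzle"
        out.append(f"{names[m.group('intensity')]} {kind}")
    return out

if __name__ == "__main__":
    report = "METAR EGLL 031050Z 24008KT 6000 -DZ BKN007 10/09 Q1021"  # hypothetical report
    print(describe_drizzle(report))  # ['light drizzle']
```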
Physical sciences
Precipitation
Earth science
1982496
https://en.wikipedia.org/wiki/Nuclear%20force
Nuclear force
The nuclear force (or nucleon–nucleon interaction, residual strong force, or, historically, strong nuclear force) is a force that acts between hadrons, most commonly observed between protons and neutrons of atoms. Neutrons and protons, both nucleons, are affected by the nuclear force almost identically. Since protons have charge +1 e, they experience an electric force that tends to push them apart, but at short range the attractive nuclear force is strong enough to overcome the electrostatic force. The nuclear force binds nucleons into atomic nuclei. The nuclear force is powerfully attractive between nucleons at distances of about 0.8 femtometre (fm, or 10⁻¹⁵ m), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsion is responsible for the size of nuclei, since nucleons can come no closer than the force allows. (The size of an atom, on the order of angstroms (Å, or 10⁻¹⁰ m), is five orders of magnitude larger.) The nuclear force is not simple, though, as it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons. The nuclear force has an essential role in storing energy that is used in nuclear power and nuclear weapons. Work (energy) is required to bring charged protons together against their electric repulsion. This energy is stored when the protons and neutrons are bound together by the nuclear force to form a nucleus. The mass of a nucleus is less than the sum total of the individual masses of the protons and neutrons. The difference in masses is known as the mass defect, which can be expressed as an energy equivalent. Energy is released when a heavy nucleus breaks apart into two or more lighter nuclei. This energy is the internucleon potential energy that is released when the nuclear force no longer holds the charged nuclear fragments together. A quantitative description of the nuclear force relies on equations that are partly empirical. These equations model the internucleon potential energies, or potentials. (Generally, forces within a system of particles can be more simply modelled by describing the system's potential energy; the negative gradient of a potential is equal to the vector force.) The constants for the equations are phenomenological, that is, determined by fitting the equations to experimental data. The internucleon potentials attempt to describe the properties of nucleon–nucleon interaction. Once determined, any given potential can be used in, e.g., the Schrödinger equation to determine the quantum mechanical properties of the nucleon system. The discovery of the neutron in 1932 revealed that atomic nuclei were made of protons and neutrons, held together by an attractive force. By 1935 the nuclear force was conceived to be transmitted by particles called mesons. This theoretical development included a description of the Yukawa potential, an early example of a nuclear potential. Pions, fulfilling the prediction, were discovered experimentally in 1947. By the 1970s, the quark model had been developed, by which the mesons and nucleons were viewed as composed of quarks and gluons. By this new model, the nuclear force, resulting from the exchange of mesons between neighbouring nucleons, is a multiparticle interaction, the collective effect of the strong force on the underlying structure of the nucleons. 
Description While the nuclear force is usually associated with nucleons, more generally this force is felt between hadrons, or particles composed of quarks. At small separations between nucleons (less than ~ 0.7 fm between their centres, depending upon spin alignment) the force becomes repulsive, which keeps the nucleons at a certain average separation. For identical nucleons (such as two neutrons or two protons) this repulsion arises from the Pauli exclusion force. A Pauli repulsion also occurs between quarks of the same flavour from different nucleons (a proton and a neutron). Field strength At distances larger than 0.7 fm the force becomes attractive between spin-aligned nucleons, becoming maximal at a centre–centre distance of about 0.9 fm. Beyond this distance the force drops exponentially, until beyond about 2.0 fm separation, the force is negligible. Nucleons have a radius of about 0.8 fm. At short distances (less than 1.7 fm or so), the attractive nuclear force is stronger than the repulsive Coulomb force between protons; it thus overcomes the repulsion of protons within the nucleus. However, the Coulomb force between protons has a much greater range as it varies as the inverse square of the charge separation, and Coulomb repulsion thus becomes the only significant force between protons when their separation exceeds about . The nuclear force has a spin-dependent component. The force is stronger for particles with their spins aligned than for those with their spins anti-aligned. If two particles are the same, such as two neutrons or two protons, the force is not enough to bind the particles, since the spin vectors of two particles of the same type must point in opposite directions when the particles are near each other and are (save for spin) in the same quantum state. This requirement for fermions stems from the Pauli exclusion principle. For fermion particles of different types, such as a proton and neutron, particles may be close to each other and have aligned spins without violating the Pauli exclusion principle, and the nuclear force may bind them (in this case, into a deuteron), since the nuclear force is much stronger for spin-aligned particles. But if the particles' spins are anti-aligned, the nuclear force is too weak to bind them, even if they are of different types. The nuclear force also has a tensor component which depends on the interaction between the nucleon spins and the angular momentum of the nucleons, leading to deformation from a simple spherical shape. Nuclear binding To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy. Because of mass–energy equivalence (i.e. Einstein's formula ), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called "mass defect". The nuclear force is nearly independent of whether the nucleons are neutrons or protons. This property is called charge independence. The force depends on whether the spins of the nucleons are parallel or antiparallel, as it has a non-central or tensor component. This part of the force does not conserve orbital angular momentum, which under the action of central forces is conserved. The symmetry resulting in the strong force, proposed by Werner Heisenberg, is that protons and neutrons are identical in every respect, other than their charge. 
This is not completely true, because neutrons are a tiny bit heavier, but it is an approximate symmetry. Protons and neutrons are therefore viewed as the same particle, but with different isospin quantum numbers; conventionally, the proton is isospin up, while the neutron is isospin down. The strong force is invariant under SU(2) isospin transformations, just as other interactions between particles are invariant under SU(2) transformations of intrinsic spin. In other words, both isospin and intrinsic spin transformations are isomorphic to the SU(2) symmetry group. There are only strong attractions when the total isospin of the set of interacting particles is 0, which is confirmed by experiment. Our understanding of the nuclear force is obtained by scattering experiments and the binding energy of light nuclei. The nuclear force occurs by the exchange of virtual light mesons, such as the virtual pions, as well as two types of virtual mesons with spin (vector mesons), the rho mesons and the omega mesons. The vector mesons account for the spin-dependence of the nuclear force in this "virtual meson" picture. The nuclear force is distinct from what historically was known as the weak nuclear force. The weak interaction is one of the four fundamental interactions, and plays a role in processes such as beta decay. The weak force plays no role in the interaction of nucleons, though it is responsible for the decay of neutrons to protons and vice versa. History The nuclear force has been at the heart of nuclear physics ever since the field was born in 1932 with the discovery of the neutron by James Chadwick. The traditional goal of nuclear physics is to understand the properties of atomic nuclei in terms of the "bare" interaction between pairs of nucleons, or nucleon–nucleon forces (NN forces). Within months after the discovery of the neutron, Werner Heisenberg and Dmitri Ivanenko had proposed proton–neutron models for the nucleus. Heisenberg approached the description of protons and neutrons in the nucleus through quantum mechanics, an approach that was not at all obvious at the time. Heisenberg's theory for protons and neutrons in the nucleus was a "major step toward understanding the nucleus as a quantum mechanical system". Heisenberg introduced the first theory of nuclear exchange forces that bind the nucleons. He considered protons and neutrons to be different quantum states of the same particle, i.e., nucleons distinguished by the value of their nuclear isospin quantum numbers. One of the earliest models for the nucleus was the liquid-drop model developed in the 1930s. One property of nuclei is that the average binding energy per nucleon is approximately the same for all stable nuclei, which is similar to a liquid drop. The liquid-drop model treated the nucleus as a drop of incompressible nuclear fluid, with nucleons behaving like molecules in a liquid. The model was first proposed by George Gamow and then developed by Niels Bohr, Werner Heisenberg, and Carl Friedrich von Weizsäcker. This crude model did not explain all the properties of the nucleus, but it did explain the spherical shape of most nuclei. The model also gave good predictions for the binding energy of nuclei. In 1934, Hideki Yukawa made the earliest attempt to explain the nature of the nuclear force. According to his theory, massive bosons (mesons) mediate the interaction between two nucleons. In light of quantum chromodynamics (QCD)—and, by extension, the Standard Model—meson theory is no longer perceived as fundamental. 
But the meson-exchange concept (where hadrons are treated as elementary particles) continues to represent the best working model for a quantitative NN potential. The Yukawa potential (also called a screened Coulomb potential) is a potential of the form V_Yukawa(r) = -g² e^(-μr)/r, where g is a magnitude scaling constant, i.e., the amplitude of the potential, μ is the Yukawa particle mass, and r is the radial distance to the particle. The potential is monotone increasing, implying that the force is always attractive. The constants are determined empirically. The Yukawa potential depends only on the distance r between particles, hence it models a central force. Throughout the 1930s a group at Columbia University led by I. I. Rabi developed magnetic-resonance techniques to determine the magnetic moments of nuclei. These measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The deuteron, composed of a proton and a neutron, is one of the simplest nuclear systems. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. In particular, the result showed that the nuclear force was not a central force, but had a tensor character. Hans Bethe identified the discovery of the deuteron's quadrupole moment as one of the important events during the formative years of nuclear physics. Historically, the task of describing the nuclear force phenomenologically was formidable. The first semi-empirical quantitative models came in the mid-1950s, such as the Woods–Saxon potential (1954). There was substantial progress in experiment and theory related to the nuclear force in the 1960s and 1970s. One influential model was the Reid potential (1968), which expresses the potential as a sum of Yukawa-type terms in the scaled separation μr, with μ = 0.7 fm⁻¹ and the potential given in units of MeV. In recent years, experimenters have concentrated on the subtleties of the nuclear force, such as its charge dependence, the precise value of the πNN coupling constant, improved phase-shift analysis, high-precision NN data, high-precision NN potentials, NN scattering at intermediate and high energies, and attempts to derive the nuclear force from QCD. As a residual of strong force The nuclear force is a residual effect of the more fundamental strong force, or strong interaction. The strong interaction is the attractive force that binds the elementary particles called quarks together to form the nucleons (protons and neutrons) themselves. This more powerful force, one of the fundamental forces of nature, is mediated by particles called gluons. Gluons hold quarks together through colour charge which is analogous to electric charge, but far stronger. Quarks, gluons, and their dynamics are mostly confined within nucleons, but residual influences extend slightly beyond nucleon boundaries to give rise to the nuclear force. The nuclear forces arising between nucleons are analogous to the forces in chemistry between neutral atoms or molecules called London dispersion forces. Such forces between atoms are much weaker than the attractive electrical forces that hold the atoms themselves together (i.e., that bind electrons to the nucleus), and their range between atoms is shorter, because they arise from small separation of charges inside the neutral atom. 
Similarly, even though nucleons are made of quarks in combinations which cancel most gluon forces (they are "colour neutral"), some combinations of quarks and gluons nevertheless leak away from nucleons, in the form of short-range nuclear force fields that extend from one nucleon to another nearby nucleon. These nuclear forces are very weak compared to direct gluon forces ("colour forces" or strong forces) inside nucleons, and the nuclear forces extend only over a few nuclear diameters, falling exponentially with distance. Nevertheless, they are strong enough to bind neutrons and protons over short distances, and overcome the electrical repulsion between protons in the nucleus. Sometimes, the nuclear force is called the residual strong force, in contrast to the strong interactions which arise from QCD. This phrasing arose during the 1970s when QCD was being established. Before that time, the strong nuclear force referred to the inter-nucleon potential. After the verification of the quark model, strong interaction has come to mean QCD. Nucleon–nucleon potentials Two-nucleon systems such as the deuteron, the nucleus of a deuterium atom, as well as proton–proton or neutron–proton scattering are ideal for studying the NN force. Such systems can be described by attributing a potential (such as the Yukawa potential) to the nucleons and using the potentials in a Schrödinger equation. The form of the potential is derived phenomenologically (by measurement), although for the long-range interaction, meson-exchange theories help to construct the potential. The parameters of the potential are determined by fitting to experimental data such as the deuteron binding energy or NN elastic scattering cross sections (or, equivalently in this context, so-called NN phase shifts). The most widely used NN potentials are the Paris potential, the Argonne AV18 potential, the CD-Bonn potential, and the Nijmegen potentials. A more recent approach is to develop effective field theories for a consistent description of nucleon–nucleon and three-nucleon forces. Quantum hadrodynamics is an effective field theory of the nuclear force, comparable to QCD for colour interactions and QED for electromagnetic interactions. Additionally, chiral symmetry breaking can be analyzed in terms of an effective field theory (called chiral perturbation theory) which allows perturbative calculations of the interactions between nucleons with pions as exchange particles. From nucleons to nuclei The ultimate goal of nuclear physics would be to describe all nuclear interactions from the basic interactions between nucleons. This is called the microscopic or ab initio approach of nuclear physics. There are two major obstacles to overcome: Calculations in many-body systems are difficult (because of multi-particle interactions) and require advanced computation techniques. There is evidence that three-nucleon forces (and possibly higher multi-particle interactions) play a significant role. This means that three-nucleon potentials must be included into the model. This is an active area of research with ongoing advances in computational techniques leading to better first-principles calculations of the nuclear shell structure. Two- and three-nucleon potentials have been implemented for nuclides up to A = 12. Nuclear potentials A successful way of describing nuclear interactions is to construct one potential for the whole nucleus instead of considering all its nucleon components. This is called the macroscopic approach. 
For example, scattering of neutrons from nuclei can be described by considering a plane wave in the potential of the nucleus, which comprises a real part and an imaginary part. This model is often called the optical model since it resembles the case of light scattered by an opaque glass sphere. Nuclear potentials can be local or global: local potentials are limited to a narrow energy range and/or a narrow nuclear mass range, while global potentials, which have more parameters and are usually less accurate, are functions of the energy and the nuclear mass and can therefore be used in a wider range of applications.
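As a rough numerical illustration of the ranges discussed in this article, the sketch below evaluates a Yukawa-type potential alongside the Coulomb repulsion between two protons at several separations. The coupling strength G2 is an arbitrary illustrative value, not a parameter of any of the potentials named above; the inverse range is taken from the charged-pion mass, as in the meson-exchange picture.

```python
import math

HBARC = 197.327          # MeV·fm (hbar times c)
ALPHA = 1.0 / 137.036    # fine-structure constant
MU = 139.57 / HBARC      # inverse range in fm^-1 from the charged-pion mass (~0.71 fm^-1)
G2 = 70.0                # MeV·fm; illustrative coupling strength, NOT a fitted value

def yukawa(r_fm: float) -> float:
    """Attractive Yukawa (screened Coulomb) potential, in MeV."""
    return -G2 * math.exp(-MU * r_fm) / r_fm

def coulomb_pp(r_fm: float) -> float:
    """Coulomb repulsion between two protons, in MeV."""
    return ALPHA * HBARC / r_fm

for r in (0.8, 1.0, 1.5, 2.0, 2.5, 5.0):
    print(f"r = {r:3.1f} fm   Yukawa = {yukawa(r):8.2f} MeV   Coulomb = {coulomb_pp(r):6.2f} MeV")
```

Even with an arbitrary coupling, the exponential factor makes the attraction negligible beyond a few femtometres, while the Coulomb term falls off only as 1/r, which is the qualitative behaviour described in the Field strength section above.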
Physical sciences
Nuclear physics
Physics
1982853
https://en.wikipedia.org/wiki/Shear%20zone
Shear zone
In geology, a shear zone is a thin zone within the Earth's crust or upper mantle that has been strongly deformed, due to the walls of rock on either side of the zone slipping past each other. In the upper crust, where rock is brittle, the shear zone takes the form of a fracture called a fault. In the lower crust and mantle, the extreme conditions of pressure and temperature make the rock ductile. That is, the rock is capable of slowly deforming without fracture, like hot metal being worked by a blacksmith. Here the shear zone is a wider zone, in which the ductile rock has slowly flowed to accommodate the relative motion of the rock walls on either side. Because shear zones are found across a wide depth-range, a great variety of different rock types with their characteristic structures are associated with shear zones. General introduction A shear zone is a zone of strong deformation (with a high strain rate) surrounded by rocks with a lower state of finite strain. It is characterised by a length to width ratio of more than 5:1. Shear zones form a continuum of geological structures, ranging from brittle shear zones (or faults) via brittle–ductile shear zones (or semibrittle shear zones), ductile–brittle to ductile shear zones. In brittle shear zones, the deformation is concentrated in a narrow fracture surface separating the wall rocks, whereas in a ductile shear zone the deformation is spread out through a wider zone, the deformation state varying continuously from wall to wall. Between these end-members, there are intermediate types of brittle–ductile (semibrittle) and ductile–brittle shear zones that can combine these geometric features in different proportions. This continuum found in the structural geometries of shear zones reflects the different deformation mechanisms reigning in the crust, i.e. the changeover from brittle (fracturing) at or near the surface to ductile (flow) deformation with increasing depth. By passing through the brittle–semibrittle transition the ductile response to deformation is starting to set in. This transition is not tied to a specific depth, but rather occurs over a certain depth range - the so-called alternating zone, where brittle fracturing and plastic flow coexist. The main reason for this is found in the usually heteromineral composition of rocks, with different minerals showing different responses to applied stresses (for instance, under stress quartz reacts plastically long before feldspars do). Thus differences in lithology, grain size, and preexisting fabrics determine a different rheological response. Yet other, purely physical factors, influence the changeover depth as well, including: geothermal gradient, i.e. ambient temperature. confinement pressure and fluid pressure. bulk strain rate. stress field orientation. In Scholz's model for a quartzo-feldspathic crust (with a geotherm taken from Southern California), the brittle–semibrittle transition starts at about 11 km depth with an ambient temperature of 300 °C. The underlying alternating zone then extends to roughly 16 km depth with a temperature of about 360 °C. Below approximately 16 km depth, only ductile shear zones are found. The seismogenic zone, in which earthquakes nucleate, is tied to the brittle domain, the schizosphere. Below an intervening alternating zone, there is the plastosphere. In the seismogenic layer, which occurs below an upper stability transition related to an upper seismicity cutoff (situated usually at about 4–5 km depth), true cataclasites start to appear. 
The seismogenic layer then yields to the alternating zone at 11 km depth. Yet big earthquakes can rupture both up to the surface and well into the alternating zone, sometimes even into the plastosphere. Rocks produced in shear zones The deformations in shear zones are responsible for the development of characteristic fabrics and mineral assemblages reflecting the reigning pressure–temperature (pT) conditions, flow type, movement sense, and deformation history. Shear zones are therefore very important structures for unravelling the history of a specific terrane. Starting at the Earth's surface, the following rock types are usually encountered in a shear zone: uncohesive fault rocks. Examples being fault gouge, fault breccia, and foliated gouge. cohesive fault rocks like crush breccias and cataclasites (protocataclasite, cataclasite, and ultracataclasite). glassy pseudotachylites. Both fault gouge and cataclasites are due to abrasive wear on brittle, seismogenic faults. foliated mylonites (phyllonites). striped gneiss. Mylonites start to occur with the onset of semibrittle behaviour in the alternating zone characterised by adhesive wear. Pseudotachylites can still be encountered here. By passing into greenschist facies conditions, the pseudotachylites disappear and only different types of mylonites persist. Striped gneisses are high-grade mylonites and occur at the very bottom of ductile shear zones. Sense of shear The sense of shear in a shear zone (dextral, sinistral, reverse or normal) can be deduced by macroscopic structures and by a plethora of microtectonic indicators. Indicators The main macroscopic indicators are striations (slickensides), slickenfibers, and stretching– or mineral lineations. They indicate the direction of movement. With the aid of offset markers such as displaced layering and dykes, or the deflection (bending) of layering/foliation into a shear zone, one can additionally determine the sense of shear. En echelon tension gash arrays (or extensional veins), characteristic of ductile-brittle shear zones, and sheath folds can also be valuable macroscopic shear-sense indicators. Microscopic indicators consist of the following structures: asymmetric folds. foliations. imbrications. Crystallographic preferred orientation (CPO). mantled and winged porphyroclasts. Well-known examples are theta (Θ)-objects and phi (Φ)-porphyroclasts, as well as sigma (σ)- and delta (δ)-winged objects. mica fish (foliation fish). pressure shadows pull-aparts. quarter structures. shear band cleavages. step-over sites. Width of shear zones and resulting displacements The width of individual shear zones stretches from the grain scale to the kilometer scale. Crustal-scale shear zones (megashears) can become 10 km wide and consequently show very large displacements from tens to hundreds of kilometers. Brittle shear zones (faults) usually widen with depth and with an increase in displacements. Strain softening and ductility Because shear zones are characterised by the localisation of strain, some form of strain softening must occur, in order for the affected host material to deform more plastically. The softening can be brought about by the following phenomena: grain-size reductions. geometric softening. reaction softening. fluid-related softening. Furthermore, for a material to become more ductile (quasi-plastic) and undergo continuous deformation (flow) without fracturing, the following deformation mechanisms (on a grain scale) have to be taken into account: diffusion creep (various types). 
dislocation creep (various types). dynamic recrystallization pressure solution processes. grain-boundary sliding (superplasticity) and grain-boundary area reduction. Occurrence and examples of shear zones Due to their deep penetration, shear zones are found in all metamorphic facies. Brittle shear zones are more or less ubiquitous in the upper crust. Ductile shear zones start at greenschist facies conditions and are therefore restricted to metamorphic terranes. Shear zones can occur in the following geotectonic settings: transcurrent setting – steep to vertical: strike-slip zones. transform faults. compressive setting – low-angle recumbent fold nappes (at the base of). subduction zones. thrust sheets (at the base of). extensional setting – low-angle metamorphic core complex detachments. Shear zones are dependent neither on rock type nor on geological age. Most often they are not isolated in their occurrence, but commonly form fractal-scaled, linked up, anastomosing networks which reflect in their arrangement the underlying dominant sense of movement of the terrane at that time. Some good examples of shear zones of the strike-slip type are the South Armorican Shear Zone and the North Armorican Shear Zone in Brittany, the North Anatolian Fault Zone in Turkey, and the Dead Sea Fault in Israel. Shear zones of the transform type are the San Andreas Fault in California, and the Alpine Fault in New Zealand. A shear zone of the thrust type is the Moine Thrust in northwestern Scotland. An example for the subduction zone setting is the Japan Median Tectonic Line. Detachment fault related shear zones can be found in southeastern California, e.g. the Whipple Mountain Detachment Fault. An example of a huge anastomosing shear-zone is the Borborema Shear Zone in Brazil. Importance The importance of shear zones lies in the fact that they are major zones of weakness in the Earth's crust, sometimes extending into the upper mantle. They can be very long-lived features and commonly show evidence of several overprinting stages of activity. Material can be transported upwards or downwards in them, the most important one being water circulating dissolved ions. This can bring about metasomatism in the host rocks and even re-fertilise mantle material. Shear zones can host economically viable mineralizations, examples being important gold deposits in Precambrian terranes.
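The depths quoted above for Scholz's model follow from the ambient temperature and the assumed geotherm. The sketch below reproduces the order of magnitude using a linear geotherm; the surface temperature and gradient are illustrative assumptions, not values taken from Scholz's work, and real geotherms flatten with depth.

```python
# Illustrative only: estimate the depth at which a target temperature is reached,
# assuming a linear geotherm. Gradient and surface temperature are assumptions.
def depth_for_temperature(t_target_c: float,
                          gradient_c_per_km: float = 26.0,
                          t_surface_c: float = 15.0) -> float:
    """Depth (km) at which a linear geotherm reaches t_target_c."""
    return (t_target_c - t_surface_c) / gradient_c_per_km

print(depth_for_temperature(300.0))  # ~11 km, matching the quoted onset of the brittle-semibrittle transition
print(depth_for_temperature(360.0))  # ~13 km with this linear gradient; a geotherm that flattens with depth gives greater depths
```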
Physical sciences
Structural geology
Earth science
30159370
https://en.wikipedia.org/wiki/FKT%20algorithm
FKT algorithm
The Fisher–Kasteleyn–Temperley (FKT) algorithm, named after Michael Fisher, Pieter Kasteleyn, and Neville Temperley, counts the number of perfect matchings in a planar graph in polynomial time. This same task is #P-complete for general graphs. For matchings that are not required to be perfect, counting them remains #P-complete even for planar graphs. The key idea of the FKT algorithm is to convert the problem into a Pfaffian computation of a skew-symmetric matrix derived from a planar embedding of the graph. The Pfaffian of this matrix is then computed efficiently using standard determinant algorithms. History The problem of counting planar perfect matchings has its roots in statistical mechanics and chemistry, where the original question was: If diatomic molecules are adsorbed on a surface, forming a single layer, how many ways can they be arranged? The partition function is an important quantity that encodes the statistical properties of a system at equilibrium and can be used to answer the previous question. However, trying to compute the partition function from its definition is not practical. Thus to exactly solve a physical system is to find an alternate form of the partition function for that particular physical system that is sufficiently simple to calculate exactly. In the early 1960s, the definition of exactly solvable was not rigorous. Computer science provided a rigorous definition with the introduction of polynomial time, which dates to 1965. Similarly, the notion of not exactly solvable, for a counting problem such as this one, should correspond to #P-hardness, which was defined in 1979. Another type of physical system to consider is composed of dimers; a dimer is a polymer with two atoms. The dimer model counts the number of dimer coverings of a graph. Another physical system to consider is the bonding of H2O molecules in the form of ice. This can be modelled as a directed, 3-regular graph where the orientation of the edges at each vertex cannot all be the same. How many edge orientations does this model have? Motivated by physical systems involving dimers, in 1961, Pieter Kasteleyn, and independently Neville Temperley and Michael Fisher, found the number of domino tilings for the m-by-n rectangle. This is equivalent to counting the number of perfect matchings for the m-by-n lattice graph. By 1967, Kasteleyn had generalized this result to all planar graphs. Algorithm Explanation The main insight is that every non-zero term in the Pfaffian of the adjacency matrix of a graph G corresponds to a perfect matching. Thus, if one can find an orientation of G that aligns all signs of the terms in the Pfaffian (whether + or −), then the absolute value of the Pfaffian is just the number of perfect matchings in G. The FKT algorithm does such a task for a planar graph G. The orientation it finds is called a Pfaffian orientation. Let G = (V, E) be an undirected graph with adjacency matrix A. Define PM(n) to be the set of partitions of n elements into pairs; then the number of perfect matchings in G is PerfMatch(G) = Σ_{M ∈ PM(V)} Π_{(i,j) ∈ M} A_{ij}. Closely related to this is the Pfaffian for an n by n matrix A, pf(A) = Σ_{M ∈ PM(n)} sgn(M) Π_{(i,j) ∈ M} A_{ij}, where sgn(M) is the sign of the permutation M. A Pfaffian orientation of G is a directed graph H with adjacency matrix B such that pf(B) = PerfMatch(G). In 1967, Kasteleyn proved that planar graphs have an efficiently computable Pfaffian orientation. Specifically, for a planar graph G, let H be a directed version of G where an odd number of edges are oriented clockwise for every face in a planar embedding of G. 
Then H is a Pfaffian orientation of G. Finally, for any skew-symmetric matrix A, \mathrm{pf}(A)^2 = \det(A), where det(A) is the determinant of A. This result is due to Arthur Cayley. Since determinants are efficiently computable, so is PerfMatch(G). High-level description
1. Compute a planar embedding of G.
2. Compute a spanning tree T1 of the input graph G.
3. Give an arbitrary orientation to each edge in G that is also in T1.
4. Use the planar embedding to create an (undirected) graph T2 with the same vertex set as the dual graph of G.
5. Create an edge in T2 between two vertices if their corresponding faces in G share an edge in G that is not in T1. (Note that T2 is a tree.)
6. For each leaf v in T2 (that is not also the root): let e be the lone edge of G in the face corresponding to v that does not yet have an orientation; give e an orientation such that the number of edges of that face oriented clockwise is odd; then remove v from T2.
7. Return the absolute value of the Pfaffian of the skew-symmetric adjacency matrix defined by this orientation of G, which is the square root of its determinant.
Generalizations The sum of weighted perfect matchings can also be computed by using the Tutte matrix for the adjacency matrix in the last step. Kuratowski's theorem states that a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (the complete graph on five vertices) or K3,3 (the complete bipartite graph on two partitions of size three). Vijay Vazirani generalized the FKT algorithm to graphs that do not contain a subgraph homeomorphic to K3,3. More generally, the complexity of counting perfect matchings has been completely characterized for families of graphs that are closed under graph minors. There exists a family of graphs, the shallow vortex grids, such that for a minor-closed family that does not include all shallow vortex grids, this counting problem is polynomially solvable. But for a minor-closed family that includes all shallow vortex grids, such as the -minor-free graphs, the problem of counting perfect matchings is #P-complete. Since counting the number of perfect matchings in a general graph is also #P-complete, some restriction on the input graph is required unless FP, the function version of P, is equal to #P. Counting matchings, which is known as the Hosoya index, is also #P-complete even for planar graphs. Applications The FKT algorithm has seen extensive use in holographic algorithms on planar graphs via matchgates. For example, consider the planar version of the ice model mentioned above, which has the technical name #PL-3-NAE-SAT (where NAE stands for "not all equal"). Valiant found a polynomial time algorithm for this problem which uses matchgates.
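As an illustration of the final step above, the following is a minimal Python sketch (assuming NumPy is available) that counts perfect matchings as the square root of the determinant of the skew-symmetric matrix of a Pfaffian orientation; the hand-oriented 4-cycle is a hypothetical worked example, not taken from the article.

```python
# Minimal sketch: counting perfect matchings from a Pfaffian orientation.
# Assumes NumPy; the oriented 4-cycle below is a made-up example whose answer
# is known (the 4-cycle has exactly 2 perfect matchings).
import numpy as np

def perfect_matchings_from_orientation(B):
    """B is the skew-symmetric matrix of a Pfaffian orientation:
    B[i][j] = +1 if edge i->j, -1 if j->i, 0 if there is no edge.
    Then |pf(B)| = sqrt(det(B)) equals the number of perfect matchings."""
    det = np.linalg.det(np.array(B, dtype=float))
    return round(abs(det) ** 0.5)

# 4-cycle 0-1-2-3-0, oriented 0->1, 1->2, 2->3, 0->3, so that each face of
# this embedding has an odd number of edges oriented clockwise.
B = [[ 0,  1,  0,  1],
     [-1,  0,  1,  0],
     [ 0, -1,  0,  1],
     [-1,  0, -1,  0]]

print(perfect_matchings_from_orientation(B))  # expected output: 2
```

For this small matrix the Pfaffian is 2, its determinant is 4, and the 4-cycle indeed has the two perfect matchings {01, 23} and {12, 30}, so the sketch agrees with the hand count.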
Mathematics
Graph theory
null
27432750
https://en.wikipedia.org/wiki/Homo%20gautengensis
Homo gautengensis
Homo gautengensis is a species name proposed by anthropologist Darren Curnoe in 2010 for South African hominin fossils otherwise attributed to H. habilis, H. ergaster, or, in some cases, Australopithecus or Paranthropus. The fossils assigned to the species by Curnoe cover a vast temporal range, from about 1.8 million years ago to potentially as late as 0.8 million years ago, meaning that if the species is considered valid, H. gautengensis would be both one of the earliest and one of the longest-lived species of Homo. Since Curnoe's 2010 description, recognition of the species has been limited. The classification of most of the fossils referred to H. gautengensis was controversial before the description of the species and continues to be controversial to this day. Some palaeoanthropologists have gone as far as to declare that there is little reason to consider H. gautengensis a valid taxon. Research history Palaeoanthropologists vary in their recognition of which hominin fossil represents the earliest record of the genus Homo (and of what range of morphology the genus should encompass). Most of the fossils contending for the position have been dated to between 2.4 and 2.1 million years ago, and their classification is highly controversial at the genus level. Along with fossils such as the mandibles AL 666 from Ethiopia and UR 501 from Malawi (both probably exceeding 2.1 million years in age), a skull designated Stw 53 was once one of the primary contenders. Today, the fossil commonly seen as the earliest specimen of the genus Homo is LD 350-1, a fossil jaw excavated in 2013 in the Afar Region of Ethiopia and dated to about 2.8 million years old. Stw 53 was discovered in August 1976 near Krugersdorp, Transvaal, in South Africa and was described in 1977 by palaeoanthropologists Alun R. Hughes and Philip V. Tobias as a skull probably from an early species of Homo. Though many palaeoanthropologists recognised the fossil as representing a species of Homo, possibly H. habilis, this has never been universally accepted, with many instead seeing it as a specimen of Australopithecus africanus. Though the site of the fossil was initially dated to over 2 million years old, work from 2012 suggests that the site is significantly younger, at 1.78–1.43 million years old. In 2010, anthropologist Darren Curnoe reviewed the large number of fossil hominin specimens from South Africa and concluded that some of the fossils were sufficiently different from the other locally recognised Homo species (H. habilis and H. ergaster/H. erectus) to represent a new species. The classification of the fossil material in South Africa, on account of much of it being fragmentary, has historically been highly contested. A few scholars believed that the region did not preserve any species of Homo, arguing that the fossil material all belonged to australopithecines. Others believed that a single species was represented (H. ergaster), and others accepted the presence of both H. ergaster/H. erectus and H. habilis. Prior to Curnoe's description, it had already been suggested by other palaeoanthropologists, such as Frederick E. Grine and colleagues in 1993 and 1996, that Stw 53 and another skull, SK 847, represented a new species closely related to H. habilis. Based on a number of features of the teeth and skull that Curnoe concluded distinguished Stw 53 and SK 847 from the typical conditions of these features in H. habilis and H.
ergaster specimens, Curnoe stated that "it is now clear that the southern African fossils are morphologically too distinct" to be accommodated within either species. As such, Curnoe erected a new species, H. gautengensis, to accommodate them. The species name gautengensis derives from the South African province Gauteng (its name in turn deriving from the Sotho-Tswana word for "place of gold"), where the fossils referred to the species had been recovered. Alongside Stw 53 (the holotype specimen) and SK 847, Curnoe assigned numerous fossil specimens to the species, designating them as paratype specimens: SE 255, SE 1508, Stw 19b/33, Stw 75–79, Stw 80, Stw 84, Stw 151, SK 15, SK 27, SK 45, SKX 257/258, SKX 267/268, SKX 339, SKX 610, SKW 3114 and DNH 70. Among the most significant differences Curnoe noted between Stw 53 and H. habilis was that some of the tooth crowns of Stw 53 were larger than the average tooth crowns of H. habilis, whereas other tooth crowns were significantly narrower. Recognition of H. gautengensis has been limited, and the classification of the individual fossils referred to the species remains contested among palaeoanthropologists. As an example, SK 847 has, in addition to H. gautengensis, also been referred to Australopithecus africanus, Paranthropus robustus, H. habilis, H. ergaster, H. sp. nov. or H. leakeyi (another proposed species with little recognition). Most of the H. gautengensis fossils are usually seen as representing fossil remains of H. habilis or H. ergaster, though no fossil has a single, universally agreed-upon identification at the species level. In 2011, palaeoanthropologist Lee R. Berger went as far as to state that "there is little reason to consider [H. gautengensis] a valid taxon", noting that the attribution of Stw 53 itself to Homo had been challenged on both anatomical and stratigraphic grounds. Notably, Berger stated that MH1, the holotype specimen of Australopithecus sediba, is more similar to early Homo than Stw 53 is, believing Stw 53 to be the skull of an Australopithecus africanus or an Au. africanus-like relative of Au. sediba. H. gautengensis is not the only species name proposed for fossils historically considered by most to represent H. habilis specimens. While H. rudolfensis (once proposed for a group of fossils formerly considered H. habilis) is widely accepted, many other proposals, such as H. microcranous (for the fossil KNM-ER 1813), have little to no recognition today. Antón and Middleton (2023) conducted a large analysis of hominin fossils and concluded that the Stw 53 skull cannot be allocated to Homo, that SK 847 is H. aff. erectus, and that Stw 80 is from an indeterminate genus. Implications The specimens referred to H. gautengensis by Curnoe cover a vast temporal range, from ~2 million years ago (or 1.78–1.43 million years according to more recent dating) to as late as 1.26–0.82 million years ago. If valid, H. gautengensis would be one of the earliest recognised species of Homo (as fossils earlier than 2 million years old have rarely been assigned at the species level) and also one of the most long-lived, spanning a period of over a million years.
Biology and health sciences
Homo
Biology
878461
https://en.wikipedia.org/wiki/Earth%27s%20orbit
Earth's orbit
Earth orbits the Sun at an average distance of about 149.6 million km (1 astronomical unit), or 8.317 light-minutes, in a counterclockwise direction as viewed from above the Northern Hemisphere. One complete orbit takes about 365.256 days (1 sidereal year), during which time Earth travels about 940 million km. Ignoring the influence of other Solar System bodies, Earth's orbit, also called Earth's revolution, is an ellipse with the Earth–Sun barycenter as one focus and a current eccentricity of 0.0167. Since this value is close to zero, the center of the orbit is relatively close to the center of the Sun (relative to the size of the orbit). As seen from Earth, the planet's orbital prograde motion makes the Sun appear to move with respect to other stars at a rate of about 1° eastward per solar day (or a Sun or Moon diameter every 12 hours). Earth's orbital speed averages about 29.78 km/s (107,200 km/h), which is fast enough to cover the planet's diameter in 7 minutes and the distance to the Moon in 4 hours. The point towards which the Earth in its solar orbit is directed at any given instant is known as the "apex of the Earth's way". From a vantage point above the north pole of either the Sun or Earth, Earth would appear to revolve in a counterclockwise direction around the Sun. From the same vantage point, both the Earth and the Sun would also appear to rotate in a counterclockwise direction. History of study Heliocentrism is the scientific model that first placed the Sun at the center of the Solar System and put the planets, including Earth, in orbit around it. Historically, heliocentrism is opposed to geocentrism, which placed the Earth at the center. Aristarchus of Samos had already proposed a heliocentric model in the third century BC. In the sixteenth century, Nicolaus Copernicus' De revolutionibus presented a full discussion of a heliocentric model of the universe in much the same way as Ptolemy had presented his geocentric model in the second century. This "Copernican Revolution" resolved the issue of planetary retrograde motion by arguing that such motion was only perceived and apparent. According to historian Jerry Brotton, "Although Copernicus's groundbreaking book ... had been [printed more than] a century earlier, [the Dutch mapmaker] Joan Blaeu was the first mapmaker to incorporate his revolutionary heliocentric theory into a map of the world." Influence on Earth Because of Earth's axial tilt (often known as the obliquity of the ecliptic), the inclination of the Sun's trajectory in the sky (as seen by an observer on Earth's surface) varies over the course of the year. For an observer at a northern latitude, when the north pole is tilted toward the Sun the day lasts longer and the Sun appears higher in the sky. This results in warmer average temperatures, as additional solar radiation reaches the surface. When the north pole is tilted away from the Sun, the reverse is true and the weather is generally cooler. North of the Arctic Circle and south of the Antarctic Circle, an extreme case is reached in which there is no daylight at all for part of the year, and continuous daylight during the opposite time of year. This is called polar night and midnight sun, respectively. This variation in the weather (because of the direction of the Earth's axial tilt) results in the seasons.
Events in the orbit By astronomical convention, the four seasons are determined by the solstices (the two points in the Earth's orbit of maximum tilt of the Earth's axis toward or away from the Sun) and the equinoxes (the two points in the Earth's orbit where the Earth's tilted axis and an imaginary line drawn from the Earth to the Sun are exactly perpendicular to one another). The solstices and equinoxes divide the year into four approximately equal parts. In the northern hemisphere, the winter solstice occurs on or about December 21, the summer solstice is near June 21, the spring equinox is around March 20, and the autumnal equinox is about September 23. The effect of the Earth's axial tilt in the southern hemisphere is the opposite of that in the northern hemisphere, thus the seasons of the solstices and equinoxes in the southern hemisphere are the reverse of those in the northern hemisphere (e.g. the northern summer solstice occurs at the same time as the southern winter solstice). In modern times, Earth's perihelion occurs around January 3, and the aphelion around July 4. In other words, the Earth is closer to the Sun in January and further away in July, which might seem counter-intuitive to those residing in the northern hemisphere, where it is colder when the Earth is closest to the Sun and warmer when it is furthest away. The changing Earth–Sun distance results in an increase of about 7% in total solar energy reaching the Earth at perihelion relative to aphelion. Since the southern hemisphere is tilted toward the Sun at about the same time that the Earth reaches its closest approach to the Sun, the southern hemisphere receives slightly more energy from the Sun than does the northern over the course of a year. However, this effect is much less significant than the total energy change due to the axial tilt, and most of the excess energy is absorbed by the higher proportion of surface covered by water in the southern hemisphere. The Hill sphere (gravitational sphere of influence) of the Earth is about 1,500,000 kilometers (0.01 AU) in radius, or approximately four times the average distance to the Moon. This is the maximal distance at which the Earth's gravitational influence is stronger than that of the more distant Sun and planets. Objects orbiting the Earth must be within this radius; otherwise, they may become unbound by the gravitational perturbation of the Sun. The following diagram illustrates the positions and relationship between the lines of solstices, equinoxes, and apsides of Earth's elliptical orbit. The six Earth images are positions along the orbital ellipse, which are sequentially the perihelion (periapsis, the nearest point to the Sun) anywhere from January 2 to January 5, the point of the March equinox on March 19, 20, or 21, the point of the June solstice on June 20, 21, or 22, the aphelion (apoapsis, the farthest point from the Sun) anywhere from July 3 to July 5, the September equinox on September 22, 23, or 24, and the December solstice on December 21, 22, or 23. Future Mathematicians and astronomers (such as Laplace, Lagrange, Gauss, Poincaré, Kolmogorov, Vladimir Arnold, and Jürgen Moser) have searched for evidence of the stability of the planetary motions, and this quest led to many mathematical developments and several successive "proofs" of stability for the Solar System. By most predictions, Earth's orbit will be relatively stable over long periods.
In 1989, Jacques Laskar's work indicated that Earth's orbit (as well as the orbits of all the inner planets) can become chaotic and that an error as small as 15 meters in measuring the initial position of the Earth today would make it impossible to predict where Earth would be in its orbit in just over 100 million years' time. Modeling the Solar System is a subject covered by the n-body problem.
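To make the roughly 7% perihelion-to-aphelion difference in received solar energy quoted above concrete, the following is a minimal Python sketch (not taken from the article) that derives it from the orbital eccentricity of 0.0167 given earlier, using only the inverse-square law; the printed distances are approximations.

```python
# Minimal sketch: perihelion vs. aphelion solar flux from orbital eccentricity.
# Uses only the inverse-square law; e = 0.0167 is Earth's current eccentricity.
e = 0.0167
a = 149.6e6  # semi-major axis in km (approximately 1 astronomical unit)

r_perihelion = a * (1 - e)   # closest approach to the Sun
r_aphelion   = a * (1 + e)   # farthest distance from the Sun

# Solar flux scales as 1 / r^2, so the ratio depends only on the eccentricity.
flux_ratio = (r_aphelion / r_perihelion) ** 2   # equals ((1 + e) / (1 - e))^2

print(f"perihelion distance: {r_perihelion:.0f} km")
print(f"aphelion distance:   {r_aphelion:.0f} km")
print(f"flux at perihelion is {100 * (flux_ratio - 1):.1f}% higher than at aphelion")
# With e = 0.0167 this prints roughly 6.9%, consistent with the ~7% figure.
```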
Physical sciences
Solar System
Astronomy
879946
https://en.wikipedia.org/wiki/Embraer%20E-Jet%20family
Embraer E-Jet family
The Embraer E-Jet family is a series of four-abreast, narrow-body, short- to medium-range, twin-engined jet airliners designed and produced by Brazilian aerospace manufacturer Embraer. The E-Jet was designed to complement Embraer’s earlier ERJ family, the company’s first jet-powered regional aircraft. With a capacity of 66 to 124 passengers, the E-Jets were significantly larger than any aircraft Embraer had developed before that time. The project was unveiled in early 1997 and formally introduced at the 1999 Paris Air Show. On 19 February 2002, the first E-Jet prototype completed its maiden flight, and production began later that year. The first E170 was delivered to LOT Polish Airlines on 17 March 2004. Initial rollout issues were quickly overcome, and Embraer rapidly expanded product support for better global coverage. Larger variants, the E190 and E195, entered service later in 2004, while a stretched version of the E170, the E175, was introduced in mid-2005. The E-Jet series achieved commercial success, primarily due to its ability to serve lower-demand routes while offering many of the amenities and features of larger jets. The E-Jet family is used by both mainline and regional airlines worldwide, with particular popularity among regional airlines in the United States. It also served as the foundation for the Lineage 1000 business jet. In the 2010s, Embraer introduced the second-generation E-Jet E2 family, featuring more fuel-efficient engines. However, as of 2023, the first-generation E175 remains in production to meet the needs of U.S. regional airlines, which are restricted from operating the newer generation due to scope clause limitations. Development Background During the 1990s, the Brazilian aerospace manufacturer Embraer had introduced the ERJ family, its first jet-powered regional aircraft. As demand for the ERJ series proved strong even early on, the company decided that it could not rely on one family of aircraft alone and examined its options for producing a complementary regional jet, including designs that would be larger and more advanced than its preceding aircraft. During March 1997, Embraer made its first public disclosure that it was studying a new 70-seat aircraft, initially referred to as the EMB 170; this disclosure was made concurrently with the announcement of the development of the ERJ 135. As originally conceived, the EMB 170 was to feature a new wing and larger-diameter fuselage mated to the nose and cockpit of the ERJ 145. The proposed derivative would have cost $450 million to develop. While Alenia, Aerospatiale and British Aerospace, through AI(R), were studying the Airjet 70, based on the ATR 42/72 fuselage, AI(R) and Embraer were also studying joint development of a 70-seat jet, since their separate projects had not yet been launched. In February 1999, Embraer announced it had abandoned the derivative approach in favour of an all-new design. On 14 June 1999, the E-Jet family was formally launched at the Paris Air Show, initially using the twin designations ERJ-170 and ERJ-190; these were subsequently changed to Embraer 170 and Embraer 190 respectively. The launch customers for the airliner were the French airline Régional, which placed ten orders and five options for the E170, and the Swiss airline Crossair, which had ordered 30 E170s and 30 E190s. During July 2000, production of components for the construction of both the prototype and test airframes began.
Difficulties with the advanced avionics selected for the aircraft, supplied by the American company Honeywell, led to delays in the development schedule; originally, the first flight had been set to take place during 2000. On 29 October 2001, the first prototype, PP-XJE, was rolled out at São José dos Campos, Brazil. Into flight On 19 February 2002, the first prototype performed its maiden flight, marking the beginning of a multi-year flight test campaign involving a total of six prototypes. In May 2002, the aircraft was displayed to the public at the Regional Airline Association convention. During that same year, full-rate production of the E-Jet commenced; this activity was centred around a recently completed factory built by Embraer at its São José dos Campos base. After a positive response from the airline community, Embraer launched the E175, which slightly stretched the fuselage of the E170. During June 2003, the first flight of the E175 took place. In April 2003, jetBlue placed an order for 100 Embraer 190s, the deliveries of which commenced two years later. Following several delays in the certification process, the E170 received type certification from the civil aviation authorities of Brazil, Europe and the United States in February 2004. Production In 2008, the 400th E-Jet was delivered to Republic Airways in the United States. In September 2009, the 600th E-Jet was delivered to LOT Polish Airlines. On 10 October 2012, Embraer delivered the 900th E-Jet to Kenya Airways, its 12th E-Jet. On 13 September 2013, the delivery of the 1,000th E-Jet, an E175 to Republic Airways for American Eagle, was marked by a ceremony held at the Embraer factory in São José dos Campos, with a special "1,000th E-Jet" decal above the cabin windows. On 6 December 2017, the 1,400th E-Jet, an E175, was delivered; the programme had a backlog of over 150 firm orders on 30 September 2017. On 18 December 2018, Embraer delivered the 1,500th E-Jet, an E175 to Alaska Air subsidiary Horizon Air, with Embraer claiming an 80% market share of North American 76-seat aircraft. By this point, the fleet had completed 25 million flight hours in 18 million cycles (an average of about 1.4 hours per cycle) with 99.9% dependability. E-Jets Second Generation In November 2011, Embraer announced that it would develop revamped versions of the E-Jet, to be called the E-Jet E2 family. The new jets would feature improved engines that would be more fuel efficient and take advantage of new technologies. Beyond the new engines, the E2 family would also feature new wings, improved avionics, and other improvements to the aircraft. The move came amid a period of high global fuel costs and better positioned Embraer as competitors introduced new and more fuel-efficient jets, including the Mitsubishi Regional Jet. The new aircraft family also includes a much larger variant, the E195-E2, capable of carrying between 120 and 146 passengers. This jet better positions Embraer against the competing Airbus A220 aircraft. In January 2013, Embraer selected the Pratt & Whitney PW1000G geared turbofan engine to power the E2 family; the PW1000G had previously been selected for use on competing aircraft. On 28 February 2018, the E190-E2 received its type certificate from the ANAC, FAA and EASA. It was scheduled to enter service in the second quarter of 2018. Design The Embraer E-Jet family is composed of two main commercial families and a business jet variant.
The smaller E170 and E175 make up the base model aircraft, while the E190 and E195 are stretched versions, powered by different engines and furnished with a larger wing, horizontal stabilizer, and landing gear structures. From the outset, the E-Jet had been designed to be stretched. The E170 and E175 share 95% commonality, as do the E190 and E195; the two families share nearly 89% commonality, maintaining identical fuselage cross-sections and avionics fitouts. The E190 and E195 possess capacities similar to the initial versions of the McDonnell Douglas DC-9 and Boeing 737. All members of the E-Jet family are available in baseline, long range (LR), and advanced range (AR) models, the latter being intended for long routes with limited passenger numbers. The smaller members of the E-Jet family are powered by the General Electric CF34-8E turbofan engine, each capable of generating up to 14,200 pounds-force (62.28 kN) of thrust, while the stretched aircraft are outfitted with the more powerful General Electric CF34-10E, capable of producing a maximum of 18,500 pounds-force (82.30 kN) of thrust. These engines have been designed to minimise noise and emission outputs, exceeding the requirements established by the International Civil Aviation Organization; the relatively low acoustic signature has enabled the E-Jet to be operated from airports that have imposed strict noise restrictions, such as London City Airport. The type is also equipped with winglets that reduce fuel burn and thereby improve operational efficiency. The E-Jet family is equipped with a fly-by-wire flight control system. The flight deck is furnished with the Honeywell Primus Epic electronic flight instrument system (EFIS) suite and has been designed to facilitate a common type rating, enabling flight crews to be readily moved between different members of the family without the need for any retraining or recertification, providing greater flexibility to operators. Early operations of the E-Jet were frequently troubled by avionics issues; by September 2008, Honeywell had issued software updates that sought to rectify the encountered issues. The main cabin is configured with four-abreast seating (2+2) as standard, and features a "double-bubble" design that Embraer purpose-developed for its commercial passenger jets to provide stand-up headroom. The dimensions of the cabin were intentionally made comparable to those of the narrowbody airliners of Airbus and Boeing to permit greater comfort levels than most regional aircraft. Considerable attention to detail was reportedly paid by Embraer to elevating the type's passenger appeal. Many operators have chosen to outfit their aircraft with amenities such as Wi-Fi and at-seat power outlets. The windows of the E-Jet family are relatively large in comparison to most contemporary airliners, such as the windows of the Boeing 787. United and SkyWest have begun retrofitting their jointly operated E175 aircraft with larger "wheels first" overhead bins which can accommodate up to an extra 29 bags, an 80 percent increase in space. The airlines will modify 50 aircraft with the new bins in 2024 and, if successful, plan to retrofit more than 150 aircraft by the end of 2026. Operational history In early March 2004, the first E170 deliveries were made to LOT Polish Airlines; other customers to receive early deliveries were Alitalia and the US Airways subsidiary MidAtlantic Airways. On 17 March 2004, LOT operated the first commercial flight of an E-Jet, which flew from Warsaw to Vienna. Within four years, LOT was sufficiently pleased with the type to order 12 additional E175s.
Launch customer Crossair had in the meantime ceased to exist after its takeover of Swissair, leading to the cancellation of these orders. Furthermore, fellow launch customer Régional chose to defer its order, not receiving its first E-Jet, an E190LR, until 2006. During July 2005, the first E175 was delivered to Air Canada, entering revenue service with the airline that same month. In April 2013, Air Canada began the transfer of its 15-strong E175 fleet to subsidiary Sky Regional Airlines; this reorganisation was completed during September 2013. By July 2020, approximately 25 million passengers had flown on the Canadian fleet over a cumulative 650,000 flight hours, while a total of 25 E175s were in service on both domestic and transborder flights into the US, which were then being flown under the Air Canada Express branding. In March 2021, Air Canada announced its intention to consolidate all regional flying under the Jazz branding, thereby ending the affiliation between Sky Regional Airlines and Air Canada; accordingly, all of the E175s were transferred to Jazz. Early operations of the E-Jet were not problem-free: the American operator JetBlue reported engine troubles with its fleet, while cold-start hydraulic issues were experienced by Air Canada. Embraer had to undertake a rapid expansion of its product support network in order to satisfy the needs of its mainline operators; by October 2014, the company had two directly owned service centers, alongside nine authorized centers and 26 independent MRO organizations around the globe, while directly employing 1,200 staff for product support alone. In response to customer demands, the company also developed web-based support. BA CityFlyer, a subsidiary of British Airways, operates a fleet of 21 E190s, typically flying routes from London City Airport to various destinations in the United Kingdom and continental Europe. CityFlyer has publicly stated that a key factor in its choice of the E-Jet over competitors such as the De Havilland Canada Dash 8 was the jet's greater speed. The procurement of E-Jets by CityFlyer led other competing British regional airlines to take an interest in the type; on 20 July 2010, Flybe ordered 35 E175s valued at US$1.3 billion (£850 million), along with options for 65 more (valued at $2.3 bn/£1.5 bn) and purchase rights for a further 40 (valued at $1.4 bn/£0.9 bn), deliveries of which commenced in November 2011. On 6 November 2008, the longest flight of an E190 was flown by JetBlue from Anchorage Airport to Buffalo International Airport, a re-positioning flight after a two-month charter for vice presidential candidate Sarah Palin. On 14 October 2017, an Airlink E190-100IGW with 78 passengers aboard inaugurated the first scheduled commercial airline service in history to Saint Helena in the South Atlantic Ocean, arriving at Saint Helena Airport after a flight of about six hours from Johannesburg, South Africa, with a stop at Windhoek, Namibia. The flight began a once-per-week scheduled service by Airlink between Johannesburg and Saint Helena using E190 aircraft. The inaugural flight was only the second commercial flight to Saint Helena in the island's history, and the first since a chartered Airlink Avro RJ85 landed at Saint Helena Airport on 3 May 2017. Variants E170 The E170 is the smallest aircraft in the E-Jet family and was the first to enter revenue service, in March 2004. The E170 went out of production in 2017.
The Embraer 170 seats around 72 passengers in a typical single-class configuration, 66 in a dual-class configuration, and up to 78 in a high-density configuration. The E170 directly competed with the Bombardier CRJ700 and loosely with the turboprop Bombardier Q400. The jet is powered by General Electric CF34-8E engines of 14,200 pounds-force (62.28 kN) of thrust each. E175 The E175 is a slightly stretched version of the E170 and first entered revenue service with launch customer Air Canada in July 2005. The Embraer 175 seats around 78 passengers in a typical single-class configuration, 76 in a dual-class configuration, and up to 88 in a high-density configuration. Like the E170, it is powered by General Electric CF34-8E engines of 14,200 pounds-force (62.28 kN) of thrust each. It competed with the Bombardier CRJ900 in the market segment previously occupied by the earlier BAe 146 and Fokker 70, and it is the only aircraft currently produced in this market segment. The E175 was initially equipped with the same style of winglets as the rest of the E-Jet family. Starting in 2014, the winglets were made wider and more angled. Those winglets and other changes to the aircraft over time have improved efficiency; Embraer said that aircraft produced after 2017 consume 6.4% less fuel than original E175 aircraft. The angled winglets increase the wingspan from 26 m (85 ft 4 in) to 28.65 m (93 ft 11 in). This winglet change was made available only for the E175 and no other models in the family. In late 2017, Embraer announced the E175SC (special configuration), officially designated the ERJ 170-200 LL, limited to 70 seats like the E170 to take advantage of the E175 performance improvements while still complying with US airline scope clauses limiting operators to 70 seats. Embraer is marketing the E175SC as a replacement for the older 70-seat Bombardier CRJ700, with better efficiency and a larger first class. In 2018, a new E175 was valued at US$27 million, projected to fall to US$3–8 million 13 years later owing to the type's concentration in the US (more than 450 of the 560 in service, with Republic and SkyWest operating over 120 each, Compass 35 and Envoy Air 90), after the similar experience with the CRJ200 and ERJ 145 demonstrated the limited remarketing opportunities. The E175 remains in production, with strong demand from regional airlines in the United States, which cannot order the newer but heavier E175-E2 due to scope clause restrictions on maximum takeoff weight. E190 The E190/195 models are larger stretches of the E170/175 models, fitted with a new, larger wing, a larger horizontal stabilizer, two additional emergency overwing exits, and new engines. The Embraer 190 is fitted with two underwing-mounted General Electric CF34-10E turbofan engines, rated at 18,500 pounds-force (82.30 kN) each. The engines are equipped with full authority digital engine control (FADEC). The fully redundant, computerized management system continuously optimizes engine performance, resulting in reduced fuel consumption and maintenance requirements. The aircraft is fitted with a Parker Hannifin fuel system. Embraer offered three variants of the E190: the STD (Standard), LR (Long Range) and AR (Advanced Range). The STD served as the base model, while the LR featured an increased maximum takeoff weight (MTOW) and the AR an MTOW further increased relative to the LR, allowing more fuel to be carried and extending the range.
The aircraft is equipped with a Hamilton Sundstrand auxiliary power unit and electrical system. The GE CF34-10E is the only powerplant offered for the aircraft; customers can choose between five variants (-10E5, -10E5A1, -10E6, -10E6A1, -10E7), each with different performance and capabilities. These aircraft compete with the Bombardier CRJ-1000. The E190 can carry up to 100 passengers in a two-class configuration or up to 124 in a single-class high-density configuration. On 12 March 2004, the first flight of the E190 took place. The launch customer of the E190 was the New York-based low-cost carrier JetBlue, which placed 100 orders, with additional options, in 2003 and took its first delivery in 2005. Air Canada operated 45 E190 aircraft fitted with 9 business-class and 88 economy-class seats as part of its primary fleet; they were retired in May 2020. American Airlines operated E190s until 2020. JetBlue and Georgian Airways operate the E190 as part of their own fleets. The largest operator of the type is Alliance Airlines, with 64 E190s in its fleet, most of which were taken over from American Airlines and JetBlue to serve the Australian regional market; the next largest operators are Aeroméxico Connect (37), Tianjin Airlines (35), Airlink (29) and KLM Cityhopper (28). By 2018, early E190s were valued at under US$10 million and could be leased for less than US$100,000 per month, while the most recent aircraft were worth US$30 million and could be leased for less than US$200,000 per month. E195 The Embraer 195 is a further stretched version of the Embraer 190. It is fitted with two underwing-mounted General Electric CF34-10E turbofan engines; customers can choose between five variants (-10E5, -10E5A1, -10E6, -10E6A1, -10E7), each with different performance and capabilities. The engines are equipped with full authority digital engine control (FADEC). The fully redundant, computerized management system continuously optimizes engine performance, resulting in reduced fuel consumption and maintenance requirements. The aircraft is fitted with a Parker Hannifin fuel system. Embraer offered three variants of the E195: the STD (Standard), LR (Long Range) and AR (Advanced Range). The STD served as the base model, while the LR featured an increased maximum takeoff weight (MTOW) and the AR an MTOW further increased relative to the LR, allowing more fuel to be carried and extending the range of the E195. The aircraft is equipped with a Hamilton Sundstrand auxiliary power unit and electrical system. The GE CF34-10E, rated at 18,500 lb (82.30 kN), is the only powerplant offered for the aircraft. These aircraft compete with the Airbus A220-100, Boeing 717-200, Boeing 737-500, Boeing 737-600, and the Airbus A318. It can carry up to 100 passengers in a two-class configuration or up to 124 in a single-class high-density configuration. The first flight of the E195 occurred on December 7, 2004. The British low-cost carrier Flybe was the first operator of the E195; it had 14 orders and 12 options and started E195 operations on 22 September 2006. Flybe later decided to remove the aircraft from its fleet by 2020 in favour of the Dash 8 Q400 and Embraer 175, in an effort to reduce costs. The largest operators of the largest variant in the E-Jet family are Azul Brazilian Airlines (45), Tianjin Airlines (17), Austrian Airlines (17), Air Dolomiti (17) and LOT Polish Airlines (16).
Freighter conversions On 7 March 2022, Embraer confirmed its intent to enter the cargo market, offering conversions of E190 and E195 passenger aircraft to freighters. These will make their first flights in 2024, with certification expected later in the year; the E195F will offer a somewhat larger payload capacity than the E190F. The company secured its first order in May 2023, for ten aircraft from lessor Nordic Aviation Capital, to be delivered to Astral Aviation as the launch operator. Embraer Lineage 1000 On 2 May 2006, Embraer announced plans for a business jet variant of the E190, the Embraer Lineage 1000. It has the same structure as the E190, but with an extended range and luxury seating for up to 19 passengers. The Lineage 1000 offers two different engine choices, the GE CF34-10E6 and the more powerful CF34-10E7-B. It was certified by the US Federal Aviation Administration on 7 January 2009. The first two production aircraft were delivered in December 2008. Undeveloped variants Embraer considered producing an aircraft known as the E195X, a stretched version of the E195 that would have seated approximately 130 passengers. The E195X was apparently a response to an American Airlines request for an aircraft to replace its McDonnell Douglas MD-80s. Embraer abandoned plans for the 195X in May 2010, following concerns that its flight range would be too short. Commercial names and official model designations The commercial names used for the E170 and E190 families differ from the official model designations, as used (for instance) on the type certificates and in national registries. Operators The three largest operators of the E-Jet family were SkyWest Airlines (241), Republic Airways (208), and Envoy Air (152), operating variably for Alaska Airlines, American Eagle, Delta Connection, and United Express. Orders and deliveries List of Embraer's E-Jet family deliveries and orders: Accidents and incidents The E-Jet has been involved in 22 incidents, including nine hull losses. Accidents with fatalities Henan Airlines Flight 8387 – 44 casualties On 24 August 2010, Henan Airlines Flight 8387, an E190 that had departed from Harbin, China, crash-landed about 1 km short of the runway at Yichun Lindu Airport, resulting in 44 deaths. The final investigation report, released in June 2012, concluded that the flight crew failed to observe safety procedures for operations in low visibility. Tianjin Airlines Flight 7554 – 2 casualties among hijackers On 29 June 2012, aboard Tianjin Airlines Flight 7554, six passengers carrying explosives stood up and announced a hijacking, but they were subdued by other passengers. The E190 returned to Hotan Airport, where the hijackers were apprehended; two of them later died in hospital from injuries received in the fight. LAM Mozambique Airlines Flight 470 – 33 casualties On 29 November 2013, LAM Mozambique Airlines Flight 470, an E190, crashed in Namibia, killing all 33 aboard (27 passengers, 6 crew members) as a result of the deliberate actions of the captain. The first officer reportedly left the cockpit to use the bathroom and was then locked out by the captain, who dramatically reduced the aircraft's altitude and ignored various automated warnings ahead of the high-speed impact.
Piedmont Airlines – 1 ground worker casualty On 31 December 2022, a baggage handler employed by Piedmont Airlines, an American Airlines regional carrier, was killed on the ramp at Montgomery Regional Airport after being sucked into the jet engine of an Embraer 175 that was scheduled to fly as American Airlines Flight 3408. KLM Cityhopper – 1 ground worker casualty On 29 May 2024, a worker was sucked into the engine of an Embraer E-Jet operated by KLM Cityhopper at Amsterdam airport. Dutch authorities stated that the death was a suicide. Azerbaijan Airlines Flight 8243 – 38 casualties On 25 December 2024, Azerbaijan Airlines Flight 8243, an E190, crashed on approach to Aktau International Airport in Kazakhstan. The flight was originally supposed to land at Grozny International Airport but was forced to divert due to fog in Grozny. According to The New York Times, Azerbaijani investigators believed that a Russian Pantsir-S1 air-defence system damaged the plane before it crashed. Of the 67 people on board, 38 were killed and 29 survived. Hull losses with no fatalities On 17 July 2007, Aero República Flight 7330 overran the runway while landing at Simón Bolívar International Airport in Santa Marta, Colombia. The E190 slid down an embankment off the side of the runway and came to rest with its nose in shallow water. The aircraft was damaged beyond repair, but all 60 aboard evacuated unharmed. On 16 September 2011, an E190 operated by TAME landed long and ran off the end of the runway at Mariscal Sucre International Airport in Quito, colliding with approach equipment and a brick wall. The crew reportedly failed to adhere to the manufacturer's procedures in the event of a flap malfunction, continuing the approach in spite of the aircraft's condition. Eleven of the 103 aboard received minor injuries, and the aircraft was written off. On 31 July 2018, Aeroméxico Connect Flight 2431, an E190 bound for Mexico City with 99 passengers and 4 crew on board, crashed in Durango, Mexico, shortly after takeoff. Although there were no fatalities, the aircraft was destroyed by the ensuing fire. The probable cause was attributed to "loss of control [...] by low altitude windshear that caused a loss of speed and lift", with contributing factors from the crew and the Navigation Services. On 11 November 2018, Air Astana Flight 1388, on a flight from Alverca Airbase, Portugal, to Almaty, suffered severe control issues, including flipping over and diving sharply. The crew activated the direct mode for the flight controls, which allowed sufficient control to make an emergency landing on the third attempt at Beja Airbase in Portugal; serious damage was sustained during the high-G maneuvers, and the aircraft was subsequently written off and broken up. The investigation revealed that the aileron cables had been installed incorrectly, causing reversal of the aileron controls. The investigation blamed the manufacturer of the airplane for poorly written maintenance instructions, the supervising authorities for a lack of oversight over the maintenance crew, who lacked the skill to perform the maintenance, and the flight crew for failing to notice the condition during pre-flight control checks. On 18 February 2024, Air Serbia Flight 324 from Belgrade Nikola Tesla Airport to Dusseldorf International Airport, operated by an E195 leased from Marathon Airlines, overran the runway on take-off and struck the runway's instrument landing system antenna array.
The aircraft sustained substantial damage to the fuselage, left wing root, and left stabiliser. After 58 minutes, the aircraft landed back safely at Belgrade, and there were no casualties. After the incident, Air Serbia cancelled its contract with Marathon Airlines; the aircraft will reportedly be retired and scrapped. Other incidents On 22 October 2023, Horizon Air Flight 2059 was operating from Paine Field in Everett, Washington, to San Francisco International Airport when Joseph David Emerson, an off-duty pilot sitting in the jumpseat inside the cockpit, reportedly tried to pull both engine fire extinguisher handles on the overhead panel. The E175 was operating at 31,000 feet at the time, and had Emerson been successful in activating the fire extinguishers, both engines would have shut down. The crew was able to subdue him and land at Portland International Airport in Oregon, where Emerson was arrested and later charged with 83 counts of attempted murder. On 9 April 2017, a passenger, David Dao, a Vietnamese-American, was dragged off a United Express flight at Chicago O'Hare International Airport after he refused to give up his seat; in the process, security officers struck his face, knocking him unconscious. The incident was widely criticized. The aircraft was operated by Republic Airways under the United Express brand. Preserved aircraft JA04FJ – formerly N866RW, nose section preserved at Matsumoto Airport. Specifications
Technology
Specific aircraft_2
null
880860
https://en.wikipedia.org/wiki/Content%20delivery%20network
Content delivery network
A content delivery network or content distribution network (CDN) is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance ("speed") by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social media sites. CDNs are a layer in the internet ecosystem. Content owners such as media companies and e-commerce vendors pay CDN operators to deliver their content to their end users. In turn, a CDN pays Internet service providers (ISPs), carriers, and network operators for hosting its servers in their data centers. CDN is an umbrella term spanning different types of content delivery services: video streaming, software downloads, web and mobile content acceleration, licensed/managed CDN, transparent caching, and services to measure CDN performance, load balancing, Multi CDN switching and analytics and cloud intelligence. CDN vendors may cross over into other industries like security, DDoS protection and web application firewalls (WAF), and WAN optimization. Notable content delivery service providers include Akamai Technologies, Edgio, Cloudflare, Amazon CloudFront, Fastly, and Google Cloud CDN. Technology CDN nodes are usually deployed in multiple locations, often over multiple Internet backbones. Benefits include reducing bandwidth costs, improving page load times, and increasing the global availability of content. The number of nodes and servers making up a CDN varies, depending on the architecture, some reaching thousands of nodes with tens of thousands of servers on many remote points of presence (PoPs). Others build a global network and have a small number of geographical PoPs. Requests for content are typically algorithmically directed to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen. This may be measured by choosing locations that are the fewest hops, the lowest number of network seconds away from the requesting client, or the highest availability in terms of server performance (both current and historical), to optimize delivery across local networks. When optimizing for cost, locations that are the least expensive may be chosen instead. In an optimal scenario, these two goals tend to align, as edge servers that are close to the end user at the edge of the network may have an advantage in performance or cost. Most CDN providers will provide their services over a varying, defined, set of PoPs, depending on the coverage desired, such as United States, International or Global, Asia-Pacific, etc. These sets of PoPs can be called "edges", "edge nodes", "edge servers", or "edge networks" as they would be the closest edge of CDN assets to the end user. Security and privacy CDN providers profit either from direct fees paid by content providers using their network, or profit from the user analytics and tracking data collected as their scripts are being loaded onto customers' websites inside their browser origin. 
As such, these services have been pointed to as potential privacy intrusions for the purpose of behavioral targeting, and solutions are being created to restore single-origin serving and caching of resources. In particular, a website using a CDN may violate the EU's General Data Protection Regulation (GDPR). For example, in 2021 a German court forbade the use of a CDN on a university website, because this caused the transmission of the user's IP address to the CDN, which violated the GDPR. CDNs serving JavaScript have also been targeted as a way to inject malicious content into pages using them. The Subresource Integrity mechanism was created in response, to ensure that the page loads a script whose content is known and constrained to a hash referenced by the website author. Content networking techniques The Internet was designed according to the end-to-end principle. This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points: the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets. Content delivery networks augment the end-to-end transport network by distributing on it a variety of intelligent applications employing techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server-load balancing, request routing, and content services. Web caches store popular content on servers that have the greatest demand for the content requested. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache. Web caches are populated based on requests from users (pull caching) or based on preloaded content disseminated from content servers (push caching). Server-load balancing uses one or more techniques, including service-based (global load balancing) or hardware-based (e.g. layer 4–7 switches, also known as web switches, content switches, or multilayer switches) approaches, to share traffic among a number of servers or web caches. Here the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantage of balancing load, increasing total capacity, improving scalability, and providing increased reliability by redistributing the load of a failed web server and providing server health checks. A content cluster or service node can be formed using a layer 4–7 switch to balance load across a number of servers or a number of web caches within the network. Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms are used to route the request, including Global Server Load Balancing, DNS-based request routing, dynamic metafile generation, HTML rewriting, and anycasting. Proximity (choosing the closest service node) is estimated using a variety of techniques, including reactive probing, proactive probing, and connection monitoring. CDNs use a variety of methods of content delivery including, but not limited to, manual asset copying, active web caches, and global hardware load balancers. Content service protocols Several protocol suites are designed to provide access to a wide variety of content services distributed throughout a content network.
The Internet Content Adaptation Protocol (ICAP) was developed in the late 1990s to provide an open standard for connecting application servers. A more recently defined and robust solution is provided by the Open Pluggable Edge Services (OPES) protocol. This architecture defines OPES service applications that can reside on the OPES processor itself or be executed remotely on a callout server. Edge Side Includes, or ESI, is a small markup language for edge-level dynamic web content assembly. It is fairly common for websites to have generated content, whether because of changing content such as catalogs or forums, or because of personalization. This creates a problem for caching systems; to overcome it, a group of companies created ESI. Peer-to-peer CDNs In peer-to-peer (P2P) content-delivery networks, clients provide resources as well as use them. This means that, unlike client–server systems, content-centric networks can actually perform better as more users begin to access the content (especially with protocols such as BitTorrent that require users to share). This property is one of the major advantages of using P2P networks, because it makes the setup and running costs very small for the original content distributor. Private CDNs If content owners are not satisfied with the options or costs of a commercial CDN service, they can create their own CDN. This is called a private CDN. A private CDN consists of PoPs (points of presence) that serve content only for their owner. These PoPs can be caching servers, reverse proxies or application delivery controllers. A private CDN can be as simple as two caching servers, or large enough to serve petabytes of content. Large content distribution networks may even build and set up their own private network to distribute copies of content across cache locations. Such private networks are usually used in conjunction with public networks as a backup option in case the capacity of the private network is not enough or there is a failure which leads to a capacity reduction. Since the same content has to be distributed across many locations, a variety of multicasting techniques may be used to reduce bandwidth consumption. Over private networks, it has also been proposed to select multicast trees according to network load conditions to more efficiently utilize available network capacity. CDN trends Emergence of telco CDNs The rapid growth of streaming video traffic requires large capital expenditures by broadband providers in order to meet this demand and retain subscribers by delivering a sufficiently good quality of experience. To address this, telecommunications service providers have begun to launch their own content delivery networks as a means to lessen the demands on the network backbone and reduce infrastructure investments. Telco CDN advantages Because they own the networks over which video content is transmitted, telco CDNs have advantages over traditional CDNs. They own the last mile and can deliver content closer to the end user because it can be cached deep in their networks. This deep caching minimizes the distance that video data travels over the general Internet and delivers it more quickly and reliably. Telco CDNs also have a built-in cost advantage, since traditional CDNs must lease bandwidth from them and build the operator's margin into their own cost model. In addition, by operating their own content delivery infrastructure, telco operators have better control over the utilization of their resources.
Content management operations performed by CDNs are usually applied without (or with very limited) information about the network (e.g., topology, utilization, etc.) of the telco-operators with which they interact or have business relationships. These operations pose a number of challenges for the telco-operators, who have a limited sphere of action in the face of the impact of these operations on the utilization of their resources. In contrast, the deployment of telco-CDNs allows operators to implement their own content management operations, which enables them to have better control over the utilization of their resources and, as such, to provide better quality of service and experience to their end users. Federated CDNs and Open Caching In June 2011, StreamingMedia.com reported that a group of TSPs had founded an Operator Carrier Exchange (OCX) to interconnect their networks and compete more directly against large traditional CDNs such as Akamai and Limelight Networks, which have extensive PoPs worldwide. In this way, telcos are building a federated CDN offering, which is more interesting for a content provider willing to deliver its content to the aggregated audience of this federation. It is likely that in the near future other telco CDN federations will be created. They will grow by the enrollment of new telcos joining the federation and bringing network presence and their Internet subscriber bases to the existing ones. The Open Caching specification by the Streaming Video Alliance defines a set of APIs that allows a content provider to deliver its content using several CDNs in a consistent way, with each CDN provider appearing the same through these APIs. Improving CDN performance using Extension Mechanisms for DNS Traditionally, CDNs have used the IP address of the client's recursive DNS resolver to geo-locate the client. While this is a sound approach in many situations, it leads to poor client performance if the client uses a non-local recursive DNS resolver that is far away. For instance, a CDN may route requests from a client in India to its edge server in Singapore if that client uses a public DNS resolver in Singapore, causing poor performance for that client. Indeed, a recent study showed that in many countries where public DNS resolvers are in popular use, the median distance between clients and their recursive DNS resolvers can be as high as a thousand miles. In August 2011, a global consortium of leading Internet service providers led by Google announced their official implementation of the edns-client-subnet IETF Internet Draft, which is intended to accurately localize DNS resolution responses. The initiative involves a limited number of leading DNS service providers, such as Google Public DNS, as well as CDN service providers. With the edns-client-subnet EDNS0 option, CDNs can now utilize the IP address of the requesting client's subnet when resolving DNS requests. This approach, called end-user mapping, has been adopted by CDNs and has been shown to drastically reduce round-trip latencies and improve performance for clients who use public DNS or other non-local resolvers. However, the use of EDNS0 also has drawbacks, as it decreases the effectiveness of caching resolutions at the recursive resolvers, increases the total DNS resolution traffic, and raises a privacy concern by exposing the client's subnet.
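As a toy illustration of the proximity-based request routing and end-user mapping ideas described above, the following is a minimal Python sketch (not drawn from any real CDN) that maps a client's approximate location, such as one estimated from the edns-client-subnet hint, to the nearest PoP by great-circle distance; the PoP names and coordinates are hypothetical.

```python
# Minimal sketch of proximity-based request routing ("end-user mapping"):
# pick the PoP nearest to the client's approximate location. The PoP names
# and coordinates below are hypothetical examples, not real CDN data.
from math import radians, sin, cos, asin, sqrt

POPS = {
    "singapore": (1.35, 103.82),
    "mumbai":    (19.08, 72.88),
    "frankfurt": (50.11, 8.68),
    "virginia":  (38.95, -77.45),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def route(client_latlon):
    """Return the name of the PoP closest to the client's estimated location."""
    return min(POPS, key=lambda name: haversine_km(client_latlon, POPS[name]))

# A client in Chennai, India is routed to the Mumbai PoP rather than Singapore,
# illustrating why resolver-based mapping can misroute traffic when the
# client's public resolver sits in another country.
print(route((13.08, 80.27)))  # -> "mumbai"
```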
Virtual CDN (vCDN) Virtualization technologies are being used to deploy virtual CDNs (vCDNs) with the goal to reduce content provider costs, and at the same time, increase elasticity and decrease service delay. With vCDNs, it is possible to avoid traditional CDN limitations, such as performance, reliability and availability since virtual caches are deployed dynamically (as virtual machines or containers) in physical servers distributed across the provider's geographical coverage. As the virtual cache placement is based on both the content type and server or end-user geographic location, the vCDNs have a significant impact on service delivery and network congestion. Image Optimization and Delivery (Image CDNs) In 2017, Addy Osmani of Google started referring to software solutions that could integrate naturally with the Responsive Web Design paradigm (with particular reference to the <picture> element) as Image CDNs. The expression referred to the ability of a web architecture to serve multiple versions of the same image through HTTP, depending on the properties of the browser requesting it, as determined by either the browser or the server-side logic. The purpose of Image CDNs was, in Google's vision, to serve high-quality images (or, better, images perceived as high-quality by the human eye) while preserving download speed, thus contributing to a great User experience (UX). Arguably, the Image CDN term was originally a misnomer, as neither Cloudinary nor Imgix (the examples quoted by Google in the 2017 guide by Addy Osmani) were, at the time, a CDN in the classical sense of the term. Shortly afterwards, though, several companies offered solutions that allowed developers to serve different versions of their graphical assets according to several strategies. Many of these solutions were built on top of traditional CDNs, such as Akamai, CloudFront, Fastly, Edgecast and Cloudflare. At the same time, other solutions that already provided an image multi-serving service joined the Image CDN definition by either offering CDN functionality natively (ImageEngine) or integrating with one of the existing CDNs (Cloudinary/Akamai, Imgix/Fastly). While providing a universally agreed-on definition of what an Image CDN is may not be possible, generally speaking, an Image CDN supports the following three components: A Content Delivery Network (CDN) for the fast serving of images. Image manipulation and optimization, either on-the-fly through URL directives, in batch mode (through manual upload of images) or fully automatic (or a combination of these). Device Detection (also known as Device Intelligence), i.e. the ability to determine the properties of the requesting browser and/or device through analysis of the User-Agent string, HTTP Accept headers, Client-Hints or JavaScript. The following table summarizes the current situation with the main software CDNs in this space: Notable content delivery service providers Free cdnjs Cloudflare JSDelivr Traditional commercial Akamai Technologies Amazon CloudFront Aryaka Ateme CDN Azure CDN CacheFly CDNetworks CenterServ ChinaCache Cloudflare Cotendo Edgio Fastly Gcore Google Cloud CDN HP Cloud Services Incapsula Instart Internap LeaseWeb Lumen Technologies MetaCDN NACEVI OnApp GoDaddy OVHcloud Rackspace Cloud Files Speedera Networks StreamZilla Wangsu Science & Technology Yottaa Telco CDNs AT&T Inc. 
Bharti Airtel Bell Canada BT Group China Telecom Chunghwa Telecom Deutsche Telekom KT KPN Lumen Technologies Megafon NTT Pacnet PCCW Singtel SK Broadband Tata Communications Telecom Argentina Telefonica Telenor TeliaSonera Telin Telstra Telus TIM Türk Telekom Verizon Commercial using P2P for delivery BitTorrent, Inc. Internap Pando Networks Rawflow Multi MetaCDN Warpcache In-house Netflix
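Returning to the image-variant selection described in the Image CDN discussion above, the sketch below (Python) shows one plausible way a server could pick an image format and size from the Accept header and the Width client hint; the breakpoints, file-naming scheme and header values are hypothetical examples rather than any particular vendor's behavior:

def pick_variant(headers, image_id):
    accept = headers.get("Accept", "")
    # Prefer modern formats when the browser advertises support for them.
    if "image/avif" in accept:
        fmt = "avif"
    elif "image/webp" in accept:
        fmt = "webp"
    else:
        fmt = "jpeg"
    # The Width client hint (if sent) lets the server choose a close-fitting size.
    width = int(headers.get("Width", 1024))
    size = min((w for w in (320, 640, 1024, 2048) if w >= width), default=2048)
    return f"{image_id}_{size}.{fmt}"

print(pick_variant({"Accept": "image/avif,image/webp,*/*", "Width": "700"}, "hero"))
# -> hero_1024.avif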
Technology
Networks
null
880974
https://en.wikipedia.org/wiki/Olive%20%28color%29
Olive (color)
Olive is a dark yellowish-green color, like that of unripe or green olives. As a color word in the English language, it appears in late Middle English. Variations Olivine Olivine is the typical color of the mineral olivine. The first recorded use of olivine as a color name in English was in 1912. Olive drab Olive drab is variously described as a "A brownish-green colour" (Oxford English Dictionary); "a shade of greenish-brown" (Webster's New World Dictionary); "a dark gray-green" (MacMillan English dictionary); "a grayish olive to dark olive brown or olive gray" (American Heritage Dictionary); or "A dull but fairly strong gray-green color" (Collins English Dictionary). It is widely used as a camouflage color for uniforms and equipment in the armed forces. The first recorded use of olive drab as a color name in English was in 1892. Drab is an older color name, from the middle of the 16th century. It refers to a dull light brown color, the color of cloth made from undyed homespun wool. It took its name from the old French word for cloth, drap. There are many shades and variations of olive drab. Various shades were used on United States Army uniforms in World War II. The shade used for enlisted soldier's uniforms at the beginning of the war was officially called Olive Drab #33 (OD33), while officer's uniforms used the much darker Olive Drab #51 (OD51). Field equipment was in Olive Drab #3 (OD3), a very light, almost khaki shade. In 1943 new field uniforms and equipment were produced in the darker Olive Drab #7 (OD7). This was in turn replaced by the slightly grayer Olive Green 107 (OG-107) in 1952, which continued as the color of combat uniforms through the Vietnam War until the adoption in 1981 of the four-color-camouflage-patterned M81 Battle Dress Uniform, which retained olive drab as one of the color swatches in the pattern. The shade used for painting vehicles is defined by Federal Standard 595 in the United States. As a solid color, it is not as effective for camouflage as multi-color patterns, though it is still used by the U.S. military to color webbing and accessories. The armies of Israel, India, Cuba, and Venezuela wear solid-color olive drab uniforms. In the American novel A Separate Peace, Finny says to Gene, "...and in these times of war, we all see olive drab, and we all know it is the patriotic color. All others aren't about the war; they aren't patriotic." Pantone 448 C, "the ugliest color in the world" commonly used in plain tobacco packaging, was initially described as a shade of olive green. Black olive Black olive is a color in the RAL color matching system. It is designated as RAL 6015. The color "black olive" is a representation of the color of black olives. Olive in culture Ethnography The term "olive-skinned" is sometimes used to denote shades of medium-toned skin that is darker than the average color for White people, especially in connection with a Mediterranean ethnicity.
Physical sciences
Colors
Physics
881136
https://en.wikipedia.org/wiki/Fuchsia%20%28color%29
Fuchsia (color)
Fuchsia (, ) is a vivid pinkish-purplish-red color, named after the color of the flower of the fuchsia plant, which was named by a French botanist, Charles Plumier, after the 16th-century German botanist Leonhart Fuchs. The color fuchsia was introduced as the color of a new aniline dye called fuchsine, patented in 1859 by the French chemist François-Emmanuel Verguin. The fuchsine dye was renamed magenta later in the same year, to celebrate a victory of the French army at the Battle of Magenta on 4 June 1859 near the Italian city of that name. The first recorded use of fuchsia as a color name in English was in 1892. In print and design In color printing and design, there are more variations between magenta and fuchsia. Fuchsia is usually a more pinkish-purplish color, whereas magenta is more reddish. Fuchsia flowers themselves contain a wide variety of purples. Fuchsia was a very popular aesthetic for fashion during the 2000s. Fuchsine The first synthetic dye of the color fuchsia, called fuchsine, was patented in 1859 by François-Emmanuel Verguin. It was later renamed magenta, and became highly popular under that name. Fuchsia (web color) In the system of additive colors, the RGB color model used to create all the colors on a computer or television display, the colors magenta and fuchsia are exactly the same, and have the same hex number, #FF00FF. The name fuchsia is used on the HTML web color list for this color, while the name magenta is used on the X11 web color list. They are both composed the same way, by combining an equal amount of blue and red light at full brightness, as shown in the image on the left. Variations of fuchsia French fuchsia At right is displayed the color French fuchsia, which is the tone of fuchsia called fuchsia in a color list popular in France. Fuchsia rose Fuchsia rose is the color that was chosen as the 2001 Pantone color of the year by Pantone. Red-purple Red-purple is the color that is called Rojo-Púrpura (the Spanish word for "red-purple") in the Guía de coloraciones (Guide to colorations) by Rosa Gallego and Juan Carlos Sanz, a color dictionary published in 2005 that is widely popular in the Hispanophone realm. Although red-purple is a seldom-used color name in English, in Spanish it is regarded one of the major tones of purple. Fuchsia purple The color fuchsia purple is displayed at right. The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color #18-2436 TPX—Fuchsia Purple. Deep fuchsia Deep fuchsia is the color that is called fuchsia in the List of Crayola crayon colors. Fandango Displayed at right is the color fandango. The first recorded use of fandango as a color name in English was in 1925. Antique fuchsia Displayed at right is the color antique fuchsia. The first recorded use of antique fuchsia as a color name in English was in 1928. The source of this color is the Plochere Color System, a color system formulated in 1948 that is widely used by interior designers. Crayola color fuchsia In 1949, the color names of Crayola crayons were reformed and became more scientific, more of the names of the colors of the crayons being based on the names of colors in the original 1930 edition of the Dictionary of Color and the color names of the Munsell color system. Crayola crayons set up a color naming system similar to that used in the Munsell Color Wheel, except that violet instead of purple was used as the secondary color on the color wheel between red and blue. 
The web color fuchsia is equivalent to the pure chroma on the Munsell color wheel of the Munsell color system that is designated "5RP" (reddish purple), i.e., a purple that is shaded toward red (the color achievable today with computers is a much more saturated pure color-wheel chroma than the original color chip shown on the Munsell color wheel diagram in the Munsell color system article). In 1972, a new Crayola crayon color called hot magenta was introduced; it is the closest equivalent to the web color fuchsia in Crayola crayons. (See List of Crayola crayon colors.) The color shown in the color box above is the color "fuchsia" in A Dictionary of Color. The name fuchsia was chosen as the equivalent of one of the three additive secondary colors, electric magenta, because A Dictionary of Color was the primary reference on color names (besides the Munsell Book of Color) before the introduction of personal computers. The color shown above is somewhat brighter than most actual flowers of the fuchsia plant. The color shown as magenta in A Dictionary of Color is a somewhat different color from the color shown in that book as fuchsia; it is the original color magenta, now called rich magenta or magenta (dye) (see the article on magenta for a color box displaying a sample of this original magenta).
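As a quick arithmetic check of the web-color statement above, the snippet below (Python; purely illustrative) shows that full-intensity red and blue with no green encode to the shared hex value #FF00FF:

# Equal, full-brightness red and blue with no green give the hex triplet
# shared by the web colors "fuchsia" and "magenta".
r, g, b = 255, 0, 255
print(f"#{r:02X}{g:02X}{b:02X}")  # -> #FF00FF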
Physical sciences
Colors
Physics
881311
https://en.wikipedia.org/wiki/Aperture%20synthesis
Aperture synthesis
Aperture synthesis or synthesis imaging is a type of interferometry that mixes signals from a collection of telescopes to produce images having the same angular resolution as an instrument the size of the entire collection. At each separation and orientation, the lobe-pattern of the interferometer produces an output which is one component of the Fourier transform of the spatial distribution of the brightness of the observed object. The image (or "map") of the source is produced from these measurements. Astronomical interferometers are commonly used for high-resolution optical, infrared, submillimetre and radio astronomy observations. For example, the Event Horizon Telescope project derived the first image of a black hole using aperture synthesis. Technical issues Aperture synthesis is possible only if both the amplitude and the phase of the incoming signal are measured by each telescope. For radio frequencies, this is possible by electronics, while for optical frequencies, the electromagnetic field cannot be measured directly and correlated in software, but must be propagated by sensitive optics and interfered optically. Accurate optical delay and atmospheric wavefront aberration correction are required, a very demanding technology that became possible only in the 1990s. This is why imaging with aperture synthesis has been used successfully in radio astronomy since the 1950s and in optical/infrared astronomy only since the turn of the millennium. See astronomical interferometer for more information. In order to produce a high quality image, a large number of different separations between different telescopes is required (the projected separation between any two telescopes as seen from the radio source is called a baseline) – as many different baselines as possible are required in order to get a good quality image. The number of baselines (nb) for an array of n telescopes is given by nb = (n² − n)/2 = n(n − 1)/2, that is, the binomial coefficient C(n, 2). For example, the Very Large Array has 27 telescopes giving 351 independent baselines at once, and can give high quality images. In contrast to radio arrays, the largest optical arrays currently have only 6 telescopes, giving poorer image quality from the 15 baselines between the telescopes. Most radio frequency aperture synthesis interferometers use the rotation of the Earth to increase the number of different baselines included in an observation. Taking data at different times provides measurements with different telescope separations and angles without the need for additional telescopes or moving the telescopes manually, as the rotation of the Earth moves the telescopes to new baselines. The use of Earth rotation was discussed in detail in the 1950 paper A preliminary survey of the radio stars in the Northern Hemisphere. Some instruments use artificial rotation of the interferometer array instead of Earth rotation, such as in aperture masking interferometry. History The concept of aperture synthesis was first formulated in 1946 by Australian radio astronomers Ruby Payne-Scott and Joseph Pawsey. Working from Dover Heights in Sydney, Payne-Scott carried out the earliest interferometer observations in radio astronomy on 26 January 1946 using an Australian Army radar as a radio telescope. Aperture synthesis imaging was later developed at radio wavelengths by Martin Ryle and coworkers from the Radio Astronomy Group at Cambridge University.
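The baseline count formula given above is easy to check numerically; the short Python snippet below reproduces the two array sizes quoted in the text:

# nb = n(n - 1)/2 baselines for an n-element array
def baselines(n):
    return n * (n - 1) // 2

print(baselines(27))  # Very Large Array: 351 simultaneous baselines
print(baselines(6))   # a six-element optical array: 15 baselines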
Martin Ryle and Antony Hewish jointly received a Nobel Prize for this and other contributions to the development of radio interferometry. The radio astronomy group in Cambridge went on to found the Mullard Radio Astronomy Observatory near Cambridge in the 1950s. During the late 1960s and early 1970s, as computers (such as the Titan) became capable of handling the computationally intensive Fourier transform inversions required, they used aperture synthesis to create a 'One-Mile' and later a '5 km' effective aperture using the One-Mile and Ryle telescopes, respectively. The technique was subsequently further developed in very-long-baseline interferometry to obtain baselines of thousands of kilometers, and has even been applied to optical telescopes. The term aperture synthesis can also refer to a type of radar system known as synthetic aperture radar but, despite the similar name, this is a Doppler technique originally developed in the early 1950s by Carl A. Wiley; it is technically and historically independent of the radio astronomy method, and its operating principles are unrelated. Originally it was thought necessary to make measurements at essentially every baseline length and orientation out to some maximum: such a fully sampled Fourier transform formally contains the information exactly equivalent to the image from a conventional telescope with an aperture diameter equal to the maximum baseline, hence the name aperture synthesis. It was rapidly discovered that in many cases, useful images could be made with a relatively sparse and irregular set of baselines, especially with the help of non-linear deconvolution algorithms such as the maximum entropy method. The alternative name synthesis imaging acknowledges the shift in emphasis from trying to synthesize the complete aperture (allowing image reconstruction by Fourier transform) to trying to synthesize the image from whatever data is available, using powerful but computationally expensive algorithms.
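To illustrate the Earth-rotation synthesis described above, the sketch below (Python) uses the standard textbook relation between a baseline's equatorial components and the (u, v) spacings sampled at a given hour angle; the baseline vector, declination and hour angles are made-up example values, not measurements:

import math

def uv(X, Y, Z, H, dec):
    # Standard relation from radio interferometry texts: baseline components
    # (X, Y, Z), in wavelengths, observed at hour angle H for declination dec.
    u = X * math.sin(H) + Y * math.cos(H)
    v = -X * math.sin(dec) * math.cos(H) + Y * math.sin(dec) * math.sin(H) + Z * math.cos(dec)
    return u, v

X, Y, Z = 1000.0, 2000.0, 500.0      # hypothetical baseline, in wavelengths
dec = math.radians(45.0)             # hypothetical source declination
for hour in range(-6, 7, 3):         # as the Earth rotates, the same pair of
    H = math.radians(15.0 * hour)    # antennas samples different (u, v) points
    print(hour, tuple(round(c) for c in uv(X, Y, Z, H, dec)))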
Technology
Telescope
null
881856
https://en.wikipedia.org/wiki/Anhydrite
Anhydrite
Anhydrite, or anhydrous calcium sulfate, is a mineral with the chemical formula CaSO4. It is in the orthorhombic crystal system, with three directions of perfect cleavage parallel to the three planes of symmetry. It is not isomorphous with the orthorhombic barium (baryte) and strontium (celestine) sulfates, as might be expected from the chemical formulas. Distinctly developed crystals are somewhat rare, the mineral usually presenting the form of cleavage masses. The Mohs hardness is 3.5, and the specific gravity is 2.9. The color is white, sometimes greyish, bluish, or purple. On the best developed of the three cleavages, the lustre is pearly; on other surfaces it is glassy. When exposed to water, anhydrite readily transforms to the more commonly occurring gypsum, (CaSO4·2H2O) by the absorption of water. This transformation is reversible, with gypsum or calcium sulfate hemihydrate forming anhydrite by heating to around under normal atmospheric conditions. Anhydrite is commonly associated with calcite, halite, and sulfides such as galena, chalcopyrite, molybdenite, and pyrite in vein deposits. Occurrence Anhydrite is most frequently found in evaporite deposits with gypsum; it was, for instance, first discovered in 1794 in a salt mine near Hall in Tirol. In this occurrence, depth is critical since nearer the surface anhydrite has been altered to gypsum by absorption of circulating ground water. From an aqueous solution, calcium sulfate is deposited as crystals of gypsum, but when the solution contains an excess of sodium or potassium chloride, anhydrite is deposited if the temperature is above . This is one method by which the mineral has been prepared artificially and is identical with its mode of origin in nature. The mineral is common in salt basins. Tidal flat nodules Anhydrite occurs in a tidal flat environment in the Persian Gulf sabkhas as massive diagenetic replacement nodules. Cross sections of these nodular masses have a netted appearance and have been referred to as chicken-wire anhydrite. Nodular anhydrite occurs as replacement of gypsum in a variety of sedimentary depositional environments. Salt dome cap rocks Massive amounts of anhydrite occur when salt domes form a caprock. Anhydrite is 1–3% of the minerals in salt domes and is generally left as a cap at the top of the salt when the halite is removed by pore waters. The typical cap rock is a salt, topped by a layer of anhydrite, topped by patches of gypsum, topped by a layer of calcite. Interaction of anhydrite with hydrocarbons at high temperature in oil fields can reduce sulfate () into hydrogen sulfide (H2S) with a concomitant precipitation of calcite. The process is known as thermochemical sulfate reduction (TSR). Igneous rocks Anhydrite has been found in some igneous rocks, for example in the intrusive dioritic pluton of El Teniente, Chile and in trachyandesite pumice erupted by El Chichón volcano, Mexico. Naming history The name anhydrite was given by A. G. Werner in 1804, because of the absence of water of crystallization, as contrasted with the presence of water in gypsum. Some obsolete names for the species are muriacite and karstenite; the former, an earlier name, being given under the impression that the substance was a chloride (muriate). A peculiar variety occurring as contorted concretionary masses is known as tripe-stone, and a scaly granular variety, from Volpino, near Bergamo, in Lombardy, as vulpinite; the latter is cut and polished for ornamental purposes. 
A semi-transparent light blue-grey variety from Peru is referred to by the trade name angelite. Other uses The Catalyst Science Discovery Centre in Widnes, England, has a relief carving of an anhydrite kiln, made from a piece of anhydrite, for the United Sulphuric Acid Corporation. Extensive structural damage in the German city of Staufen im Breisgau has occurred since a 2007 geothermal drilling project allowed subsurface water to invade a layer of anhydrite below the city, causing extensive but uneven ground swelling as pockets of the anhydrite converted to gypsum.
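A rough back-of-the-envelope calculation (Python; standard atomic masses, no site-specific data) shows why the hydration of anhydrite to gypsum mentioned above is accompanied by substantial swelling: each formula unit binds two water molecules, increasing the mass of the solid by roughly a quarter, with a corresponding increase in volume:

# Molar masses from standard atomic weights (g/mol)
M_CaSO4 = 40.08 + 32.06 + 4 * 16.00        # anhydrite, ~136.1
M_H2O = 2 * 1.008 + 16.00                  # water, ~18.0
M_gypsum = M_CaSO4 + 2 * M_H2O             # CaSO4·2H2O, ~172.2
gain = 100 * (M_gypsum - M_CaSO4) / M_CaSO4
print(f"mass gain on hydration: {gain:.1f} %")  # roughly 26 %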
Physical sciences
Minerals
Earth science
881901
https://en.wikipedia.org/wiki/Ceiling%20fan
Ceiling fan
A ceiling fan is a fan mounted on the ceiling of a room or space, usually electrically powered, that uses hub-mounted rotating blades to circulate air. They cool people effectively by increasing air speed. Fans do not reduce air temperature or relative humidity, unlike air-conditioning equipment, but create a cooling effect by helping to evaporate sweat and increase heat exchange via convection. Fans add a small amount of heat to the room mainly due to waste heat from the motor, and partially due to friction. Fans use significantly less power than air conditioning as cooling air is thermodynamically expensive. In the winter, fans move warmer air, which naturally rises, back down to occupants. This can affect both thermostat readings and occupants' comfort, thereby improving the energy efficiency of climate control. Many ceiling fan units also double as light fixtures, eliminating the need for separate overhead lights in a room. History Punkah style ceiling fans are based on the earliest form of the fan, which was first invented in India around 500 BC. These were cut from an Indian palmyra leaf which forms its rather large blade, moving slowly in a pendular manner. Originally operated manually by a cord and nowadays powered electrically using a belt-driven system, these punkahs move air by going to and fro. In comparison to a rotating fan, it creates a gentle breeze rather than an airflow. Some of the first rotary ceiling fans appeared in the early 1860s, and 1870s in the United States. At that time, they were not powered by any form of electric motor. Instead, a stream of running water was used, in conjunction with a turbine, to drive a system of belts which would turn the blades of two-blade fan units. These systems could accommodate several fan units, and became popular in stores, restaurants, and offices. Some of these systems survive today, and can be seen in parts of the southern United States where they originally proved useful. The electrically powered ceiling fan was invented in 1882 by Philip Diehl. He had engineered the electric motor used in the first electrically powered Singer sewing machines, and in 1882 he adapted that motor for use in a ceiling-mounted fan. Each fan had its own self-contained motor unit, with no need for belt drive. Almost immediately he faced fierce competition due to the commercial success of the ceiling fan. He continued to make improvements to his invention and created a light kit fitted to the ceiling fan to combine both functions in one unit. By World War I most ceiling fans were made with four blades instead of the original two, which made fans quieter and allowed them to circulate more air. The early turn-of-the-century companies who successfully commercialized the sale of ceiling fans in the United States were what is today known as the Hunter Fan Company, Robbins & Myers, Century Electric, Westinghouse Corporation and Emerson Electric. By the 1920s, ceiling fans became commonplace in the United States and had started to take hold internationally. From the Great Depression of the 1930s, until the introduction of electric air conditioning in the 1950s, ceiling fans slowly faded out of vogue in the U.S., almost falling into total disuse in the U.S. by the 1960s; those that remained were considered items of nostalgia. 
Meanwhile, ceiling fans became very popular in other countries, particularly those with hot climates, such as India, Pakistan,Bangladesh and the Middle East, where a lack of infrastructure and/or financial resources made energy-hungry and complex freon-based air conditioning equipment impractical. In 1973, Texas entrepreneur H. W. (Hub) Markwardt began importing ceiling fans into the United States that were manufactured in India by Crompton Greaves, Ltd. Crompton Greaves had been manufacturing ceiling fans since 1937 through a joint venture formed by Greaves Cotton of India and Crompton Parkinson of England. These Indian manufactured ceiling fans caught on slowly at first, but Markwardt's Encon Industries branded ceiling fans (which stood for ENergy CONservation) eventually found great success during the energy crisis of the late 1970s and early 1980s since they consumed less energy than the antiquated shaded pole motors used in most other American made fans. The fans became the energy-saving appliances for residential and commercial use by supplementing expensive air conditioning units with a column of gentle airflow. Due to this renewed commercial success using ceiling fans effectively as an energy conservation application, many American manufacturers also started to produce, or significantly increase the production of, ceiling fans. In addition to the imported Encon ceiling fans, the Casablanca Fan Company was founded in 1974. Other American manufacturers of the time included the Hunter Fan Co. (which was then a division of Robbins & Myers, Inc), FASCO (F. A. Smith Co.), and Emerson Electric; which was often branded as Sears-Roebuck. Smaller, short-lived companies include NuTone, Southern Fan Co., A&G Machinery Co., Homestead, Hallmark, Union, Lasko, and Evergo. Through the 1980s and 1990s, ceiling fans remained popular in the United States. Many small American importers, most of them rather short-lived, started importing ceiling fans. Throughout the 1980s, the balance of sales between American-made ceiling fans and those imported from manufacturers in India, Taiwan, Hong Kong and eventually China changed dramatically with imported fans taking the lion's share of the market by the late 1980s. Even the most basic U.S-made fans sold for $200 to $500, while the most expensive imported fans rarely exceeded $150. Ceiling fan technology has not evolved much since 1980, with a notable exception being the semi-recent increase in availability of energy-efficient, remote/app controlled brushless DC fans to the masses. However, important inroads have been made in design by companies such as Monte Carlo, Minka Aire, Quorum, Craftmade, Litex and Fanimation - offering higher price ceiling fans with more decorative value. In 2001, Washington Post writer Patricia Dane Rogers wrote, "Like so many other mundane household objects, these old standbys are going high-style and high-tech." Uses Ceiling fans have multiple functions. Fans increase mixing in a ventilated space, which leads to more homogenous environmental conditions. Moving air is generally preferred over stagnant air, especially in warm or neutral environments, so fans are useful in increasing occupant satisfaction. Because fans do not change air temperature and humidity, but move it around, fans can aid in both the heating and cooling of a space. Because of this, ceiling fans are often an instrumental element of low energy HVAC, passive cooling or natural ventilation systems in buildings. 
Depending on the energy use of the fan system, fans can be an efficient way to improve thermal comfort by allowing for a higher ambient air temperature while keeping occupants comfortable. Fans are an especially economic choice in warm, humid environments. Ceiling fans can be controlled together in a shared space, and can also be individually controlled in a home or office setting. In an office environment, individually controlled ceiling fans can have a significant positive impact on thermal comfort, which has been shown to increase productivity and satisfaction among occupants. Ceiling fans aid in the distribution of fresh air in both mechanically ventilated and naturally ventilated spaces. In naturally ventilated spaces, ceiling fans are effective at drawing in and circulating fresh outdoor air. In mechanically ventilated spaces, fans can be focused to channel and circulate conditioned air in a room. Direction The direction that a fan spins should change based on whether the room needs to be heated or cooled. Unlike air conditioners, fans only move air; they do not directly change its temperature. Therefore, ceiling fans that have a mechanism for reversing the direction in which the blades push air (most commonly an electrical switch on the unit's switch housing, motor housing, or lower canopy) can help in both heating and cooling. While ceiling fan manufacturers (mainly Emerson) have had electrically reversible motors in production since the 1930s, most fans made before the mid-1970s are either not reversible at all or mechanically reversible (have adjustable blade pitch) instead of having an electrically reversible motor. In this case, the blades should be pitched with the upturned edge leading for downdraft, and with the downturned edge leading for updraft. Hunter's "Adaptair" mechanism is perhaps the most well-known example of mechanical reversibility. For cooling, the fan's direction of rotation should be set so that air is blown downward (usually counter-clockwise when viewed from beneath, though this depends on the manufacturer). The blades should lead with the upturned edge as they spin. The breeze created by a ceiling fan creates a wind chill effect, speeding the evaporation of perspiration on human skin, which makes the body's natural cooling mechanism much more efficient. As a result of this phenomenon, the air conditioning thermostat can be set a few degrees higher than normal when a fan is in operation, greatly reducing power consumption. Since the fan works directly on the body, rather than by changing the temperature of the air, it is recommended to switch all ceiling fans off when a room is unoccupied, to further reduce power consumption. In some cases, such as when a fan is close to walls, as in a hallway, updraft may produce better airflow. Another example of how updraft can provide better cooling is when the ceiling fan is in the middle of a bedroom with a loft bed near a wall, where the breeze is felt better when the airflow comes from above. For heating, ceiling fans should be set to blow the air upward. Air naturally stratifies, i.e. warmer air rises to the ceiling while cooler air sinks, meaning that colder air settles near the floor where people spend most of their time. With its direction of rotation reversed, a ceiling fan pushes the warmer air at the ceiling down along the walls and into the room, warming the cooler air near the floor. This avoids blowing a stream of air directly at the occupants of the room, which would tend to cool them.
This action works to equalize, or even out the temperature in the room, making it cooler at ceiling level, but warmer near the floor. Thus the heating thermostat in the area can be set a few degrees lower to save energy while maintaining the same level of comfort. Though reversible models of industrial-grade ceiling fans do exist, most are not reversible. High ceiling heights in most industrial applications render reversibility unnecessary. Instead, industrial ceiling fans typically de-stratify heat by blowing hot air at ceiling level directly down toward the floor. Blade shape Residential ceiling fans, which are almost always reversible, typically use flat, paddle-like blades, which are equally effective in downdraft and updraft. Industrial ceiling fans typically are not reversible and operate only in downdraft, and therefore are able to make effective use of blades that are contoured to have a downdraft bias. More recently, however, residential ceiling fan designers have been making increasing use of contoured blades in an effort to boost ceiling fan efficiency. This contour, while serving to effectively boost the fan's performance while operating in downdraft, can hinder performance when operating in updraft. Air conditioning The most commonplace use of ceiling fans today is in conjunction with an air conditioning unit. Without an operating ceiling fan, air conditioning units typically have both the tasks of cooling the air inside the room and circulating it. Provided the ceiling fan is properly sized for the room in which it is operating, its efficiency of moving air far exceeds that of an air conditioning unit, therefore, for peak efficiency, the air conditioner should be set to a low fan setting and the ceiling fan should be used to circulate the air. Flicker and strobing Ceiling fans are usually installed in a space with other lighting fixtures, but if the fan is positioned too close to a light panel or fixture, a strobe or flicker effect may occur. A strobe or flicker effect is a phenomenon which occurs when light brightens and dims consistently as it penetrates and passes through a moving ceiling fan. This is due to the fan blades intermittently blocking the light, causing shadows to appear across the room's interior surface leading to visual discomfort. The rotating area of a moving fan blade can commonly obstruct the light source when a ceiling fan is positioned underneath an artificial lighting fixture, which can be increasingly distracting to occupants within the space. To ensure that the ceiling fans seamlessly co-exists with the lighting fixtures to avoid strobing, it is recommended that the horizontal separation between the blade and the lighting fixture is maximized. In addition, increasing the vertical distance between the light and the blade will reduce the concentration and frequency of strobing. Never position a light fixture directly above a ceiling fan's blades, and downlight and point source fixtures should be set such that their beam angles don't cross them. Generally, to ensure uniformly adequate light levels, any recessed ceiling lighting and fixtures that emit light above the level of the fan blades should be placed as far away from the ceiling fan as possible. Another recommended strategy is to ensure that the light’s angle of dispersion or the field angle is reduced, which minimizes the strobing effect from the fan blades. 
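As a rough check on when blade shadows fall within the range a viewer can perceive, the short calculation below (Python; the rotation speeds and blade counts are hypothetical examples, not measured values) computes the blade-passing rate under a fixed light source:

def blade_pass_hz(rpm, blades):
    # Rate at which blades cross a fixed light source: (revolutions per second) * blades
    return rpm / 60.0 * blades

for rpm, blades in [(60, 5), (200, 5), (300, 3)]:
    print(f"{rpm} rpm, {blades} blades -> {blade_pass_hz(rpm, blades):.0f} Hz")
# Typical residential speeds give shadow rates of only a few hertz to a few tens
# of hertz, low enough for the resulting flicker to be visible.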
It is well known that human eyes can detect flicker at low frequencies (between 60 and 90 hertz), but not at high frequencies (beyond 100 hertz), which is also known as non-visible flicker. The strobe effect can have significant physiological and psychological effects on humans. Two test rooms were utilized in an experiment to compare the effects of visual flicker induced by the ceiling fan. The findings revealed statistical evidence that one of the three cognitive performance measures tested (a digit-span task) may have been slightly reduced as a result of the increased visual flicker. Parts The key components of a ceiling fan are the following: An electric motor Blades (also known as paddles or wings) usually made from solid wood, plywood, steel, aluminium, MDF or plastic Blade irons (also known as blade brackets, blade arms, blade holders, or flanges), which hold the blades and connect them to the motor. Flywheel, a metal, plastic, or tough rubber double-torus that is attached to the motor shaft and to which the blade irons may be attached. The flywheel inner ring is locked to the shaft by a lock-screw and the blade irons to the outer ring by screws or bolts that feed into tapped metal inserts. Rubber or plastic flywheels may become brittle and break, a common cause of fan failure. Replacing the flywheel may require disconnecting wiring and removing the switch housing that is in the way before the flywheel can be taken off and replaced. Rotor, an alternative to blade irons. First patented by industrial designer Ron Rezek in 1991, the one-piece die-cast rotor receives and secures the blades and bolts directly to the motor, eliminating most balance problems and minimizing exposed fasteners. A mechanism for mounting the fan to the ceiling such as: Ball-and-socket system. With this system, there is a metal or plastic hemisphere mounted on the end of the downrod; this hemisphere rests in a ceiling-mounted metal bracket, or self-supporting canopy, and allows the fan to move freely (which is very useful on vaulted ceilings). J-hook and Shackle clamp. A type of mounting system where the ceiling fan hangs on a hardened metal hook, screwed into the ceiling or bolted through a steel I-beam. The fan can be mounted directly on a ceiling hook, making the junction box optional. A porcelain or rubber grommet is used to reduce vibration and to electrically isolate the fan from the ceiling hook. This type of mounting is most common on antique ceiling fans and ceiling fans made for industrial use. A variation of this system using a U-bracket secured to the ceiling by means of lag bolts is often used on heavy-duty ceiling fans with electrically reversible motors in order to reduce the risk of the fan unscrewing itself from the ceiling while running clockwise. This type of mount is ideally suited to reinforced concrete (RC) flat roofs with metal hooks and has become ubiquitous in South Asia, including Bangladesh, India, Pakistan, etc. Flush mount (also known as "low profile" or "hugger" ceiling fans). These are specially designed fans without the downrod and canopy arrangement of a traditional-mount fan. The motor housing appears to be attached directly to the ceiling, which is where the name "hugger" comes from. They are ideal for rooms with low ceilings ranging in height between 2.286 m and 2.5908 m. A disadvantage of this design is that since the blades are mounted so close to the ceiling, air movement is greatly reduced. Some ball-and-socket fans can be mounted using a low-ceiling adapter, purchased specially from the fan's manufacturer.
This allows the same design to be used in both a high and low ceiling environment, simplifying the buying decision for consumers. In recent years, it has become increasingly common for a ball-and-socket fan to be designed such that the canopy (ceiling cover piece) can optionally be screwed directly into the top of the motor housing, thus eliminating the need for a downrod. The whole fan can be secured directly onto the ceiling mounting bracket; this is often referred to as a dual-mount or tri-mount. Other components, which vary by model and style, can include: A downrod, a metal pipe used to suspend the fan from the ceiling. Downrods come in many lengths and widths, depending on the fan type. A decorative encasement for the motor (known as the "motor housing"). A switch housing (also known as a "switch cup" or "nose column"), a metal or plastic cylinder mounted below and in the center of the fan's motor. The switch housing is used to conceal and protect various components, which can include wires, capacitors, and switches; on fans that require oiling, it often conceals the oil reservoir which lubricates the bearings. The switch housing also makes for a convenient place to mount a light kit. Blade badges, decorative adornments attached to the visible underside of the blades for the purpose of concealing the screws used to attach the blades to the blade irons. Assorted switches used for turning the fan on and off, adjusting the speed at which the blades rotate, changing the direction in which the blades rotate, and operating any lamps that may be present. Some fans have remote controls to adjust speed and turn the light off and on. Lamps Uplights, which are installed on top of the fan's motor housing and project light up onto the ceiling, for aesthetic reasons (to "create ambience") Downlights, often referred to as a "light kit", which add ambient light to a room and can be used to replace any ceiling-mounted lamps that were displaced by the installation of a ceiling fan Decorative lights mounted inside the motor housing — in this type of setup, the motor housing side-band often has glass or acrylic panel sections, which allow light to shine though. Operation The way in which a fan is operated depends on its manufacturer, style, and the era in which it was made. Operating methods include: Pull-chain/pull-cord control. This style of fan is equipped with a metal-bead chain or cloth cord which, when pulled, cycles the fan through the operational speed(s) and then back to off. These fans typically have between one and four speeds. Fans with lights usually have a second pull chain which is to control light, and it's usually on/off, but sometimes it's three way, in which case it would be some lights, other lights, all lights, and off. Some fans, usually outdoor rated or Canadian, have another pull chain to change direction. Variable-speed control. During the 1970s and into the mid-1980s, fans were often produced with a solid-state variable-speed control. This was a dial mounted either on the body of the fan or in a gang box at the wall, and when turned in either direction, continuously varied the speed at which the blades rotated—similar to a dimmer switch for a light fixture. A few fans substituted a rotary click-type switch for the infinite-speed dial, providing a set number of set speeds (usually ranging from four to ten). 
Different fan manufacturers used variable-speed controls in different ways: The variable-speed dial controlling the fan entirely; to turn the fan on, the user turns the knob until it clicks out of the "off" position, and can then choose the fan's speed. Variable speed pull-chain. This setup is similar to the variable-speed dial discussed above, except that a "dual chain" setup is used to turn the potentiometer shaft. A pull-chain present along with the variable-speed control; the dial can be set in one place and left there, with the pull-chain serving only to turn the fan on and off. Many of these fans have an option to wire an optional light kit to this pull-chain in order to control both the fan and the light with one chain. Using this method, the user can have either the fan or light on individually, both on, or both off. Vari-Lo. A pull-chain and variable-speed control are present. Such a fan has two speeds controlled by a pull-chain: high (full power, independent of the position of the variable-speed control), and "Vari-Lo" (speed determined by the position of the variable-speed control). In some cases, the maximum speed on the Vari-Lo setting is slower than the high setting. Wall-mounted control. Some fans have their control(s) mounted on the wall instead of on the fans themselves; these are very common with industrial and HVLS fans. Such controls are usually proprietary and/or specialized switches. Mechanical wall control. This style of switch takes varying physical forms. The wall control, which contains a motor speed regulator of some sort, determines how much power is delivered to the fan and therefore how fast it spins. Older such controls employed a choke (a large iron-cored coil) as their regulator; these controls were typically large, boxy, and surface-mounted on the wall. They had anywhere from four to eight speeds. Newer versions of this type of control do not use a choke as such, but much smaller capacitors and/or solid-state circuitry; the switch is typically mounted in a standard in-wall gang box. The older type is called an electrical fan regulator and works by reducing the voltage supplied to the fan; the newer type is called an electronic fan regulator and works by switching, controlling the length of time for which power is supplied. The electronic fan regulator is more power efficient. Digital wall control. With this style of control, all of the fan's functions (on/off status, speed, direction of rotation, and any attached light fixtures) are controlled by a computerized wall control, which typically does not require any special wiring. Instead, it uses the normal house wiring to send coded electrical pulses to the fan, which decodes and acts on them using a built-in set of electronics. This style of control typically has anywhere from three to seven speeds. Wireless remote control. In recent years, remote controls have dropped in price to become cost-effective for controlling ceiling fans. They may be supplied with fans or fitted to an existing fan. The hand-held remote transmits radio frequency or infrared control signals to a receiver unit installed in the fan. However, these may not be ideal for commercial installations as the controllers require batteries. They can also get misplaced, especially in installations with many fans. Directional Switch. Most ceiling fans feature a small slide switch on the motor body of the fan itself, which controls the direction in which the fan rotates.
In one position, the fan is caused to rotate clockwise, in the other position the fan is caused to rotate counter-clockwise. Given that the fan blades are typically slanted, this results in the air either being drawn upwards or brought downwards. While the user can select which they prefer, typically air is blown downwards in summer and lifted upwards in winter. The downwards blowing is experienced as "cooling" in summer, while the upwards convection brings ceiling-hugging warm air back down throughout the room in winter. Classifications Ceiling fans can be classified into three main categories based on their use and functionality. Each type offers some unique advantages over the others and hence is suitable for a specific application. These include household, industrial and large-diameter fans. Household fans usually have 4, 5 or 6 wooden blades, a decorative motor housing, and a standard three speed motor with pull-chain switch control. These fans come in two varieties, with or without a light kit, depending on the price and consumer preferences. Commercial or industrial ceiling fans are typically used in stores, schools, churches, offices, factories, and warehouses. Such a fan is designed to be more cost-effective and energy-efficient than its household counterpart. Industrial or commercial ceiling fans typically use three or four blades, typically made of either steel or aluminum, and operate at high speed. These energy-efficient ceiling fans are designed to push massive amounts of air across large, wide open spaces. From the late 1970s to the mid-1980s, metal-bladed industrial ceiling fans were popular in lower-income American households, likely due to them being priced lower than wood-bladed models. Industrial style ceiling fans are very popular for household applications in Asia and the Middle East. HVLS fans are large-diameter ceiling fans, intended for large spaces such as large warehouses, hangars, shopping malls, railway platforms and gymnasiums. These fans generally spin at a lower speed but due to their large diameter, ranging between 7' and 24' (2.1m and 7.3m), can provide a large area with a gentle breeze. Modern HVLS fans use airfoil-style blades for optimized air movement at a reduced energy cost. One of the most notable manufactures of HVLS fans is Big Ass Fans. Indoor/outdoor ceiling fans are designed for use in partially enclosed or open outdoor spaces. The body and blades are made of materials and finishes that are not as drastically affected by moisture, temperature swings, or humidity as traditional materials and finishes. UL Damp-rated fans are suitable for covered outdoor areas like patios and porches that aren't directly exposed to rainwater from above, as well as moisture-prone indoor areas such as bathrooms and laundry rooms. In open places where the fan may come in contact with water, one must use wet-rated fans. UL Wet-rated fans have a completely sealed motor which can withstand direct exposure to rainwater, snow and can even be washed off with a garden hose. Both industrial and residential fans come in dry-rated as well as damp and wet-rated varieties. Types Many styles of ceiling fans have been developed over the years in response to several different factors such as growing energy-consumption consciousness and changes in decorating styles. The advent and evolution of new technologies have also played a major role in ceiling fan development. Following is a list of major ceiling fan styles and their defining characteristics: Cast-iron ceiling fans. 
These account for almost all ceiling fans made since their invention in 1882 through the mid-1960s. A cast-iron housing encases a very heavy-duty motor, usually of the shaded-pole variety. These motors are lubricated by means of a thrust bearing submerged in an oil-bath and must be oiled periodically, usually once or twice per year. Because these fans are so sturdily built, and due to their utter lack of electronic components, it is not uncommon to see cast-iron fans aged eighty years or more running strong and still in use today. The Hunter 'Original''' (manufactured by the Hunter Fan Co.) is by far the most recognizable example of a cast-iron ceiling fan today. It has enjoyed the longest production run of any fan in history, dating from 1906 to the present day. The Hunter Original employed a shaded-pole motor from its inception until 1984 (the 91.44 cm Original remained shaded pole before it was replaced with the 106.68 cm Original in 1985), at which point it was changed to a much more efficient permanent split-capacitor motor. Though the fan's physical appearance remains virtually unchanged, the motor was downgraded in 2002 when production was shipped to Taiwan; the motor, though still oil-lubricated, was switched to a "skeletal" design, as discussed below, with a shortened main shaft that inadvertently caused reliability issues. In 2015, this motor design was revised, and once again employs a full-length main shaft; the key element to the longevity of the pre-2002 motors. 20 pole Induction "Pancake" motor ceiling fans. These fans with highly efficient cast aluminum housings, were invented in 1957 by Crompton-Greaves, Ltd of India and were first imported into the United States in 1973 by Encon Industries. This Crompton-Greaves motor was developed through a joint venture with Crompton-Parkinson of England and took 20 years to perfect. It is considered the most energy-efficient motor ever manufactured for ceiling fans (apart from the DC motor) since it consumes less energy than a household incandescent light bulb. Stack-motor ceiling fans. In the late 1970s, due to rising energy costs prompted by the energy crisis, Emerson adapted their "K63" motor, commonly used in household appliances and industrial machinery, to be used in ceiling fans. This new "stack" motor, along with Encon's cast aluminum 20 pole motor, proved to be powerful, yet energy-efficient, and aided in the comeback of ceiling fans in America, since it was far less expensive to operate than air conditioning. With this design (which consists of a basic stator and rotor), the fan's blades mount to a central hub, known as a flywheel. The flywheel which is made of either metal or reinforced rubber can be mounted either flush with the fan's motor housing (concealed) or prominently below the fan's motor housing (known as a "dropped flywheel"). Many manufacturers used and/or developed their own stack motors, including (but not limited to) Casablanca, Emerson, FASCO, Hunter, and NuTone. Some manufacturers trademarked their personal incarnation of this motor: for example, Emerson's "K63" and later "K55" motors, Fanimation's "FDK-2100", and Casablanca's "XLP-2000" and "XLP-2100". The earliest stack-motor fan was the Emerson, which was an earlier version of the model that was later called "Heat-Fan", a utilitarian fan with a dropped metal flywheel and blades made of fiberglass and later moulded plastic depending on the model. 
This fan was produced in numerous different forms from 1962 through 2005 and, while targeted at commercial settings, also found great success in residential settings. Casablanca Fan Co. also made stack-motor fans with concealed flywheels rather than dropped flywheels. While this motor is not nearly as widely used as in the 1970s and 1980s, it can still be found in Razzetii Italy brand. One disadvantage of this type of fan is that the flywheel, if it is made from rubber, will dry out and crack over time and eventually break; this is usually not dangerous, but it renders the fan inoperable until the flywheel is replaced. Direct-drive ceiling fans employ a motor with a stationary inner core with a shell, made of cast iron, cast aluminum, or stamped steel, that revolves around it (commonly called a "spinner" motor). The blades are attached directly to this shell. Direct-drive motors are the least expensive motors to produce, and on the whole are the most prone to failure and noise generation. While the very first motors of this type (first used in the 1960s) were relatively heavy-duty, the quality of these motors has dropped significantly in recent years. This type of motor has become the de facto standard for today's fans; it is used in all Hampton Bay and Harbor Breeze ceiling fans sold today, and has commonly been used by most other brands. Spinner-motor fans, sometimes incorrectly referred to as "spinners", employ a direct-drive (spinner) motor and do have a stationary decorative cover (motor housing). "Spinner-motor" fans account for nearly all fans manufactured from the late 1980s to the present. Spinner fans employ a direct-drive motor and do not have a stationary decorative cover (motor housing). This accounts for most industrial-style fans (though such fans sometimes have more moderate-quality motors), and inexpensive residential fans commonly found in Brazil, South Asia, Southeast Asia and many Middle Eastern countries. Skeletal motors, which are a high-end subset of direct-drive motors, can be found on some higher-quality fans. Examples of skeletal motors include Hunter's "AirMax" motor, Casablanca's "XTR200" motor, and the motors made by Sanyo for use in ceiling fans sold under the Lasko name, and post-2002 Hunter "Original" ceiling fans. Skeletal motors differ from regular direct-drive motors in that: They have an open-frame ("skeletal") design, which allows for far better ventilation and therefore a longer lifespan. This is in comparison to a regular direct-drive motor's design, in which the motor's inner workings are completely enclosed within a tight metal shell which may or may not have openings for ventilation; even when openings are present, they are almost always small to the point of being inadequate. These are typically larger than regular direct-drive motors and, as a result, are more powerful and less prone to burning out. Friction-drive ceiling fans. This short-lived type of ceiling fan was attempted by companies such as Emerson and NuTone in the late 1970s with little success. Its advantage was its tremendously low power consumption, but the fans were unreliable and very noisy, in addition to being grievously under-powered. Friction-drive ceiling fans employ a low-torque motor that is mounted transversely in relation to the flywheel. A rubber wheel mounted on the end of the motor's shaft drove a hub (via contact friction, hence the name) which, in turn, drove the flywheel. 
It was a system based on the fact that a low-torque motor spinning quickly can drive a large, heavy device at a slow speed without great energy consumption (see Gear ratio). Gear-drive ceiling fans. These were similar to (and even less common than) the friction drive models; however, instead of a rubber wheel on the motor shaft using friction to turn the flywheel, a toothed gear on the end of the motor shaft meshed with gear teeth formed into the flywheel, thus rotating it. The company "Panama" made gear driven ceiling fans and sold them exclusively through the "Family Handyman" magazine in the 1980s, and some HVLS ceiling fans have a gearbox motor. Internal belt-drive ceiling fans. These were also similar in design to gear-drive and friction-drive fans; however, instead of a rubber friction wheel or toothed gear, a small rubber belt linked the motor to the flywheel. The most notable internal belt-drive ceiling fans were the earliest models produced by the Casablanca Fan Co. and a model sold by Toastmaster. Belt-driven ceiling fans. As stated earlier in this article, the first ceiling fans used a water-powered system of belts to turn the blades of fan units (which consisted of nothing more than blades mounted on a flywheel). For period-themed decor, a few companies (notably Fanimation and Woolen Mill) have created reproduction belt-drive fan systems. The reproduction systems feature an electric motor as the driving force, in place of the water-powered motor. Orbit fans use a mechanism to oscillate 360 degrees. They are also typically mounted flush to the ceiling, like hugger-type fans. They are also very small in size, usually about 40.64 cm, have a construction similar to that of many pedestal fans and desk fans, and usually have finger guards. These are, once again, popular mostly in developing countries, as they are a cheap alternative to traditional paddle-type ceiling fans. Many American manufacturers, such as Fanimation, have started producing high quality designer versions of such fans. Mini ceiling fans are mostly found in less developed places, such as the Philippines and Indonesia, and today are constructed similarly to most oscillating pedestal and table fans, predominantly out of plastic. These fans, hence the name "mini" ceiling fan, are relatively small in size, usually ranging from 40.64 cm to 91.44 cm, although some span sizes as large as 106.68 cm in diameter. Additionally, unlike traditional ceiling fans, these fans typically use synchronous motors. Bladeless ceiling fans. This type was introduced in 2012 by Exhale Fans and uses a bladeless turbine to push air outwards from the fan, which is also the case for regular ceiling fans in updraft mode. These fans feature a brushless DC motor instead of a normal direct-drive motor. A pendulum fan or flap fan is a type of low-velocity ceiling fan that can be used for air circulation around a targeted area. The back and forth motion increases turbulence around cooling sources, like chilled waterfalls at the Lavin Bernick Center at Tulane, helping to cool a greater volume of air. Brushed DC ceiling fans. Before mains electricity distribution switched from DC to AC, brushed DC ceiling fans were produced; these were wired directly to the DC supply. Brushless DC ceiling fans. This type of fan uses BLDC technology, which offers much higher efficiency than normal fans driven with traditional AC motors. These are quieter than AC motor fans due to the fact that they are commutated electronically and use permanent magnet rotors.
Among the other advantages these fans offer are high efficiency, a lower noise level, less rotor heat, and the integration of remote control and other convenience technologies. The only drawbacks are the high cost and the presence of complex electronics, which may be more prone to failure and difficult to service. However, with the advent of new technologies and better quality-control techniques, the latter is becoming less of a concern. These fans are wired to the AC supply through an AC/DC adaptor. Smart ceiling fans. These fans can be controlled through Google Assistant, Amazon Alexa, Apple HomeKit and Wi-Fi. A vast majority of these fans use BLDC motors due to their microcontroller-based design, flexibility in fine control and firmware upgrade capability. The speed, brightness and timing of the fans can be adjusted with a smartphone app. Effects on airborne transmission and distribution Ceiling fans provide a more affordable and energy-efficient alternative to air-conditioning, especially when used in conjunction with a warmer room air temperature. Overall, the use of ceiling fans results in a lesser impact on global warming when looking at carbon generation suppression. In addition to improving thermal comfort and reducing energy consumption from air-conditioning, ceiling fans have also been studied as a tool that could potentially affect the airborne transmission and distribution of infection. Ceiling fans affect the air and pollutant distribution in a space, including infectious aerosols. Ceiling fans may decrease the risk of transmission of infectious aerosols. An experiment using gas to simulate exhaled droplet nuclei found that ceiling fans could reduce the concentration of aerosols at the exposed person's breathing zone by more than 20%. Ceiling fans were shown to have a more significant effect on droplet and airborne transmission when the coughing infected person is located directly under the ceiling fan. Ceiling fans offer better protection from cough exposure for people located closer to the fan center, where the directed airflow changing the particle trajectory downward to the floor is the greatest. A more realistic experiment using tracer particles found ceiling fans are effective in reducing infection risks by 47% in short-range transmission, but marginally increase infection risks in long-range transmission. The benefits of ceiling fans are highest when the room is well ventilated, when masking measures are in place, and when the pathogen is not highly contagious. However, if there are more people at long-range distances, ceiling fans may cause more people to get sick by dispersing exhaled pathogens that are highly contagious, like measles and the SARS-CoV-2 Omicron variant, even if the room follows the ASHRAE 241 recommendations. Safety concerns with installation A typical ceiling fan weighs between 3.6 and 22.7 kg when fully assembled. While many junction boxes can support that weight while the fan is hanging still, a fan in operation exerts many additional stresses—notably torsion—on the object from which it is hung; this can cause an improper junction box to fail. For this reason, in the United States the National Electrical Code (document NFPA 70, Article 314) states that ceiling fans must be supported by an electrical junction box listed for that use. It is a common mistake for homeowners to replace a light fixture with a ceiling fan without upgrading to a proper junction box.
Ultimately, the weight of the fan must be carried by a strong structural element of the ceiling, such as a ceiling joist. Should an improperly mounted fan fall, especially a 22.7 kg cast iron fan, the result could be catastrophic. Low-hanging fans/danger to limbs Another concern with installing a ceiling fan relates to the height of the blades relative to the floor. Building codes throughout the United States prohibit residential ceiling fans from being mounted with the blades closer than seven feet from the floor; this sometimes proves, however, to not be high enough. If a ceiling fan is turned on and a person fully extends his or her arms into the air, as sometimes happens during normal tasks such as dressing, stretching or changing bedsheets, it is possible for the blades to strike their hands, potentially causing injury. Also, if one is carrying a long and awkward object, one end may inadvertently enter the path of rotation of a ceiling fan's blades, which can cause damage to the fan. Building codes throughout the United States also prohibit industrial ceiling fans from being mounted with the blades closer than 10 feet from the floor for these reasons. In other countries, ceiling fans usually come with a warning to install the fan so that the blades are 2.3 meters above the floor or higher, as instructed by the IEC and similar bodies. This rule applies to all "high level fans" including but not limited to ceiling fans. In Australia, building codes require fans to be mounted at least 2.1 meters high. MythBusters: "Killer Ceiling Fan" In 2004, MythBusters'' tested the idea that a ceiling fan is capable of decapitation if an individual was to stick his or her neck into a running fan. Two versions of the myth were tested, with the first being the "jumping kid", involving a kid jumping up and down on a bed, jumping too high and entering the fan from below and the second being the "lover's leap", involving a husband leaping towards his bed and entering the fan side-on. Kari Byron and Scottie Chapman purchased a regular household fan and also an industrial fan, which has metal blades as opposed to wood and a more powerful motor. They busted the myth in both scenarios with both household and industrial fans, as tests proved that residential ceiling fans are, apparently by design, largely incapable of causing more than a minor injury, having low-torque motors that stop quickly when blocked and blades composed of light materials that tend to break easily if impacted at speed (the household fan test of the "lover's leap" scenario actually broke the fan blades.) They did find that industrial fans, with their steel blades and higher speeds, proved capable of causing injury and laceration - building codes require industrial fans to be mounted with blades 3.048 m above the floor, and the industrial fan test of the "lover's leap" scenario produced a lethal injury where the fan sliced through the jugular and into the vertebrae - but still lost energy rapidly once blocked and were unable to decapitate the test dummy. Wobble Wobbling is usually caused by the weight of fan blades being out of balance with each other. This can happen due to a variety of factors, including blades being warped, blade irons being bent, blades or blade irons not being screwed on straight, or weight variation between blades. Also, if all the blades do not exert an equal force on the air (because they have different angles, for instance), the vertical reaction forces can cause wobbling. 
Wobble can also be caused by a motor flaw, but that very rarely occurs. Wobbling is not affected by the way in which the fan is mounted or the mounting surface. Contrary to popular misconception, wobbling alone will not cause a ceiling fan to fall. Ceiling fans are secured by clevis pins locked with either split pins or R-clips, so wobbling will not have an effect on the fan's security, unless of course, the pins/clips were not secured. To date, there are no reports of a fan wobbling itself off the ceiling and falling. However, a severe wobble can cause light fixture shades or covers to gradually loosen over time and potentially fall, posing a risk of injury to anyone under the fan, and also from any resulting broken glass. When the MythBusters were designing a fan with the goal of chopping off someone's head, Scottie used an edge finder to find the exact center of their blades with the aim of eliminating potentially very dangerous wobbling of their steel blades. Wobbling may be reduced by measuring the tip of each blade from a fixed point on the ceiling (or floor) and ensuring each is equal. If the fan has a metal plate between the motor and blade, this may be gently adjusted by bending. It can also be reduced by making sure all blades have the same pitch, and all blades have the same distance from adjacent blades. It can also be reduced by having balancing weight on the blades. Even a very slight wobble can also cause a pull chain to swing, if fan is at right RPM, and as the pull chain swings, it can weaken the part that flexes, which can eventually cause it to break, meaning that a pull chain can fall on someone. Wobble in some case can cause wires inside the motor to wriggle, and then eventually reach the top of the motor, which can then yank the wires out of the windings. That is fixable, but it may not be very easy to fix. Humming Humming is often caused by using a dimmer switch or a solid state speed control (those are usually made for industrial setting where humming noise is acceptable) to control the fan speed, since those controls cause chopping current, which causes windings to vibrate. Humming can also be caused by a bad start/run capacitor, or a capacitor with a wrong capacitance size for the motor. A bad or wrong start/run capacitor causes the winding current phase on main windings and auxiliary windings to not sync properly and can cause a hum. Also, humming may be reduced by having windings varnished.
https://en.wikipedia.org/wiki/Clock%20domain%20crossing
Clock domain crossing
In digital electronic design, a clock domain crossing (CDC), or simply clock crossing, is the traversal of a signal in a synchronous digital circuit from one clock domain into another. If a signal does not assert long enough and is not registered, it may appear asynchronous on the incoming clock boundary. A synchronous system is composed of a single electronic oscillator that generates a clock signal, and its clock domain—the memory elements directly clocked by that signal from that oscillator, and the combinational logic attached to the outputs of those memory elements. Because of speed-of-light delays, timing skew, etc., the size of a clock domain in such a synchronous system is inversely proportional to the frequency of the clock. In early computers, typically all the digital logic ran in a single clock domain. Because of transmission-line loss and distortion, it is difficult to carry digital signals above 66 MHz on standard PCB traces (the clock signal is the highest frequency in a synchronous digital system); CPUs that run faster than that speed are therefore invariably single-chip CPUs with a phase-locked loop (PLL) or other on-chip oscillator, keeping the fastest signals on-chip. At first, each CPU chip ran in its own single clock domain, and the rest of the digital logic of the computer ran in another, slower clock domain. A few modern CPUs have such a high-speed clock that designers are forced to create several different clock domains on a single CPU chip. Different clock domains have clocks with a different frequency, a different phase (due to either differing clock latency or a different clock source), or both. Either way, the relationship between the clock edges in the two domains cannot be relied upon. Synchronizing a single-bit signal to a clock domain with a higher frequency can be accomplished by registering the signal through a flip-flop that is clocked by the source domain, thus holding the signal long enough to be detected by the higher-frequency destination domain. CDC metastability issues can occur between asynchronous clock domains; this is in contrast to reset domain crossing metastability, which can occur between synchronous and asynchronous clock domains. To avoid issues with CDC metastability in the destination clock domain, a minimum of two stages of re-synchronization flip-flops are included in the destination domain, as in the sketch below. Synchronizing a single-bit signal traversing into a clock domain with a slower frequency is more cumbersome. This typically requires a register in each clock domain, with a form of feedback from the destination domain to the source domain indicating that the signal was detected. Other potential clock domain crossing design errors include glitches and data loss. In some cases, clock gating can result in two clock domains where the "slower" domain changes from one second to the next.
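The two-flip-flop synchronizer described above can be illustrated with a small cycle-by-cycle simulation. The sketch below is not a hardware description language model, and every name in it is invented for illustration; it simply lets the first destination-domain flip-flop resolve to a random value whenever the asynchronous input changes too close to the sampling edge, while the second flip-flop shields downstream logic from that possibly metastable value.

```python
import random

def two_ff_synchronizer(samples, metastable_window=0.1, seed=0):
    """Toy cycle-by-cycle model of a 2-stage synchronizer in the destination clock domain.

    samples: list of (value, phase) pairs observed at each destination clock edge, where
             `phase` in [0, 1) is how far the last source-domain transition was from the
             sampling edge (purely illustrative, not real timing analysis).
    Returns the output of the second flip-flop at each cycle.
    """
    rng = random.Random(seed)
    ff1 = 0  # first stage: may capture an unresolved value
    ff2 = 0  # second stage: treated as safe for downstream logic
    outputs = []
    for value, phase in samples:
        ff2 = ff1                       # both stages update on the same clock edge,
        if phase < metastable_window:   # so ff2 takes the previous value of ff1
            ff1 = rng.choice([0, 1])    # setup/hold violated: stage 1 resolves randomly
        else:
            ff1 = value
        outputs.append(ff2)
    return outputs

if __name__ == "__main__":
    # A level change from the source domain, sampled with varying edge alignment.
    print(two_ff_synchronizer([(0, 0.9), (1, 0.05), (1, 0.7), (1, 0.8), (0, 0.6)]))
```

The model makes the two practical points from the text visible: the synchronized level reaches downstream logic one to two destination clock cycles late, and a pulse shorter than one destination clock period can be missed entirely, which is why crossings into a slower domain need the feedback handshake described above.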
https://en.wikipedia.org/wiki/Type%20locality%20%28geology%29
Type locality (geology)
Type locality, also called type area, is the locality where a particular rock type, stratigraphic unit or mineral species is first identified. If the stratigraphic unit in a locality is layered, it is called a stratotype, whereas the standard of reference for unlayered rocks is the type locality. The concept is similar to type site in archaeology.
Examples of geological type localities
Rocks and minerals
Aragonite: Molina de Aragón, Guadalajara, Spain
Autunite: Autun, France
Benmoreite: Ben More (Mull), Scotland
Blairmorite: Blairmore, Alberta, Canada
Boninite: Bonin Islands, Japan
Comendite: Comende, San Pietro Island, Sardinia
Cummingtonite: Cummington, Massachusetts
Dunite: Dun Mountain, New Zealand
Essexite: Essex County, Massachusetts, US
Fayalite: Horta, Fayal Island, Azores, Portugal
Harzburgite: Bad Harzburg, Germany
Icelandite: Thingmuli (Þingmúli), Iceland
Ijolite: Iivaara, Kuusamo, Finland
Kimberlite: Kimberley, Northern Cape, South Africa
Komatiite: Komati River, South Africa
Labradorite: Paul's Island, Labrador, Canada
Lherzolite: Étang de Lers, France (old spelling: Étang de Lherz)
Mimetite: Treue Freundschaft Mine, Johanngeorgenstadt, Germany
Mugearite: Mugeary, Skye, Scotland
Mullite: Isle of Mull, Scotland
Pantellerite: Pantelleria, off Sicily
Portlandite: Scawt Hill, Ballygalley, Larne, County Antrim, Northern Ireland
Rodingite: Roding River, New Zealand
Sovite: Norsjø, Norway
Strontianite: Strontian, Scotland (also the element strontium derived from the mineral)
Temagamite: Copperfields Mine, Temagami, Ontario, Canada
Tilleyite: Crestmore Quarry, Riverside County, California
Tonalite: Tonale Pass
Trondhjemite: Follstad, Støren, Norway
Uraninite: Joachimsthal, Austria-Hungary (now Jáchymov, Czech Republic)
Websterite: Webster, North Carolina
Widgiemoolthalite: Widgiemooltha, Western Australia, Australia
Ytterbite (a.k.a. gadolinite): Ytterby, Sweden
Formations
Bearpaw Formation: Bear Paw Mountains, Montana, US
Burgess Shale: Burgess Pass on Mount Burgess, Alberta–British Columbia, Canada
Calvert Formation: Calvert Cliffs State Park, Maryland, US
Chapel Island Formation: Newfoundland, Canada
Chattanooga Shale: Chattanooga, Tennessee, US
Chazy Formation: Chazy, New York, US
Fort Payne Formation: Fort Payne, Alabama, US
Gault Formation: Copt Point, Folkestone, UK
Holston Formation: Holston River, Tennessee, US
Jacobsville Sandstone: Jacobsville, Michigan, US
Ogallala Formation: High Plains, US
St. Louis Limestone: St. Louis, Missouri, US
Ste. Genevieve Limestone: Ste. Genevieve, Missouri, US
Temple Butte Formation: Temple Butte, Grand Canyon, US
Upper Greensand Formation: Weald, Sussex, Hampshire
Waulsortian mudmound: Waulsort, Namur, Belgium
https://en.wikipedia.org/wiki/Soda%E2%80%93lime%20glass
Soda–lime glass
Soda–lime glass, also called soda–lime–silica glass, is a transparent glass used for windowpanes and glass containers (bottles and jars) for beverages, food, and some commodity items. It is the most prevalent type of glass made. Some glass bakeware is made of soda–lime glass, as opposed to the more common borosilicate glass. Soda–lime glass accounts for about 90% of manufactured glass. Production The manufacturing process for soda–lime glass consists of melting the raw materials, which are silica, soda (sodium carbonate, Na2CO3), hydrated lime (calcium hydroxide, Ca(OH)2), dolomite (CaMg(CO3)2, which provides the magnesium oxide), and aluminium oxide, along with small quantities of fining agents (e.g., sodium sulfate (Na2SO4), sodium chloride (NaCl), etc.) in a glass furnace at temperatures locally up to 1675 °C. The soda and the lime serve as a flux, lowering the melting temperature of silica (1580 °C) as well as causing the mixture to soften as it heats, starting at as low as 700 °C. The temperature is only limited by the quality of the furnace structure material and by the glass composition. Relatively inexpensive minerals such as trona, sand, and feldspar are usually used instead of pure chemicals. Green and brown bottles are obtained from raw materials containing iron oxide. The mix of raw materials is termed the batch. Applications Soda–lime glass is divided technically into glass used for windows, called flat glass, and glass for containers, called container glass. The two types differ in application, production method (the float process for windows, blowing and pressing for containers), and chemical composition. Flat glass has a higher magnesium oxide and sodium oxide content than container glass, and a lower silica, calcium oxide, and aluminium oxide content. From the lower content of highly water-soluble ions (sodium and magnesium) in container glass comes its slightly higher chemical durability against water, which is required especially for the storage of beverages and food. Typical compositions and properties Soda–lime glass is relatively inexpensive, chemically stable, reasonably hard, and extremely workable. Because it can be resoftened and remelted numerous times, it is ideal for glass recycling. It is used in preference to chemically pure silica (SiO2), otherwise known as fused quartz. Whereas pure silica has excellent resistance to thermal shock, being able to survive immersion in water while red hot, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Other substances are therefore added to simplify processing. One is the "soda", or sodium oxide (Na2O), which is added in the form of sodium carbonate or related precursors. Soda lowers the glass-transition temperature. However, the soda makes the glass water-soluble, which is usually undesirable. To provide for better chemical durability, the "lime" is also added. This is calcium oxide (CaO), generally obtained from limestone. In addition, magnesium oxide (MgO) and alumina, which is aluminium oxide (Al2O3), contribute to the durability. The resulting glass contains about 70 to 74% silica by weight. Soda–lime glass undergoes a steady increase in viscosity with decreasing temperature, permitting operations of steadily increasing precision. The glass is readily formable into objects when it has a viscosity of 10^4 poises, typically reached at a temperature around 900 °C. The glass is softened and undergoes steady deformation when the viscosity is less than 10^8 poises, near 700 °C.
Though apparently hardened, soda–lime glass can nonetheless be annealed to remove internal stresses with about 15 minutes at 10^14 poises, near 500 °C. The relationship between viscosity and temperature is largely logarithmic, following an Arrhenius equation whose parameters depend strongly on the composition of the glass, but the activation energy increases at higher temperatures. The following table lists some physical properties of soda–lime glasses. Unless otherwise stated, the glass compositions and many experimentally determined properties are taken from one large study. Values marked in italic font have been interpolated from similar glass compositions (see calculation of glass properties) due to the lack of experimental data.
Coefficient of restitution (glass sphere vs. glass wall): 0.97 ± 0.01
Thermal conductivity: 0.7–1.3 W/(m·K)
Hardness (Mohs scale): 6
Knoop hardness: 585 ± 20 kg/mm2
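To make the viscosity-temperature behaviour described above concrete, the sketch below fits a single Arrhenius-type line, log10(viscosity) = A + B/T, through the approximate reference points quoted in this article (10^4 P near 900 °C, 10^8 P near 700 °C, 10^14 P near 500 °C). It is only an illustration of the functional form under those assumed values; real soda–lime glasses deviate from a single straight line, which is why more elaborate models (such as Vogel–Fulcher–Tammann) are normally used.

```python
import numpy as np

# Approximate viscosity reference points quoted above (temperature in °C, viscosity in poise)
points = [(900.0, 1e4), (700.0, 1e8), (500.0, 1e14)]

T_kelvin = np.array([t + 273.15 for t, _ in points])
log_eta = np.array([np.log10(eta) for _, eta in points])

# Fit log10(viscosity) = A + B / T  (Arrhenius-type relation)
B, A = np.polyfit(1.0 / T_kelvin, log_eta, 1)

R = 8.314  # gas constant, J/(mol·K)
activation_energy = B * np.log(10) * R  # apparent activation energy in J/mol

print(f"A = {A:.2f}, B = {B:.0f} K")
print(f"Apparent activation energy ≈ {activation_energy / 1e3:.0f} kJ/mol")

# Interpolate: viscosity the fit predicts at an intermediate temperature of 600 °C
T_query = 600.0 + 273.15
print(f"Predicted viscosity at 600 °C ≈ 10^{A + B / T_query:.1f} poise")
```

Run as-is, the fit yields an apparent activation energy of a few hundred kJ/mol; the exact figure depends strongly on composition, as noted above.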
https://en.wikipedia.org/wiki/Virtual%20power%20plant
Virtual power plant
A virtual power plant (VPP) is a system that integrates multiple, possibly heterogeneous, power resources to provide grid power. A VPP typically sells its output to an electric utility. VPPs allow energy resources that are individually too small to be of interest to a utility to aggregate and market their power. As of 2024, VPPs operated in the United States, Europe, and Australia. One study reported that VPPs during peak demand periods are up to 60% more cost effective than peaker plants. Distributed energy resources VPPs typically aggregate large numbers of distributed energy resources (DER). Resources can be dispatchable or non-dispatchable, controllable or flexible load (CL or FL). Resources can include microCHPs, natural gas-fired reciprocating engines, small-scale wind power plants (WPP), photovoltaics (PV), run-of-river hydroelectricity plants, small hydro, biomass, backup generators, and energy storage systems such as home or vehicle batteries (ESS), and devices whose consumption is adjustable (such as water heaters, and appliances). The numbers and heterogeneity mean that system output is not dependent on any single resource, offering the potential for stable output even if the output of any single resource is not predictable. Vehicle to Grid technology allows electric vehicles that are connected to the grid to participate in VPPs. The VPP then controls the rate at which each vehicle charges/discharges (accepts/delivers power). The VPP can slow or reverse the rate at which vehicles charge. Conversely, when the grid has surplus power, vehicles can charge freely. The same principle applies to other systems, such as heat pumps or air conditioners that can lower their power demands to reduce demand. VPPs based on storage can ramp at higher rates than thermal generators (such as fossil fuel plants), which is especially valuable in grids that experience a duck curve and must satisfy high ramping requirements in the morning and evening. Operation Power delivery is controlled by a management system. The distributed nature of VPPs requires software to respond appropriately and securely to power requests, utility billing, payments to resource owners, etc. Services Typically, the VPP provides power (only) when requested by the utility. Peak shaving With the appropriate resources, a VPP can deliver incremental power on short notice, allowing it to help utilities manage peak loads that would otherwise require purchasing expensive power from a peaker plant (typically operating a simple cycle or combined cycle natural gas turbine). Load following Given sufficient scale, a VPP can operate as a load-following generator, supplying output dynamically as demand changes throughout the day/night cycle. Ancillary services Virtual power plants can provide ancillary services that help maintain grid stability such as frequency regulation and providing operating reserve. These services are primarily used to maintain the instantaneous balance of electrical supply and demand. These services must respond to signals to increase or decrease load on the order of seconds to minutes. Energy trading A VPP generates revenue that is distributed among the resources that supply the power, encouraging resource owners to join the enterprise. Energy markets are wholesale commodity markets that deal specifically with electrical energy. Market prices fluctuate with demand and when other resources fail (e.g., when the wind does not blow). 
The VPP behaves as a conventional dispatchable power plant from the point of view of other market participants. A VPP acts as an arbitrageur between diverse energy trading floors (i.e., bilateral and PPA contracts, forward and futures markets, and the pool). Five risk-hedging strategies have been applied to VPP decision-making problems to measure the level of conservatism of VPPs' decisions in energy trading floors (e.g., day-ahead electricity market, derivatives exchange market, and bilateral contracts): IGDT: Information Gap Decision Theory RO: Robust optimization CVaR: Conditional value at risk FSD: First-order Stochastic Dominance SSD: Second-order Stochastic Dominance Markets United States In the United States, virtual power plants deal with the supply side and help manage demand, and ensure reliability of grid functions through demand response (DR) and other load-shifting approaches, in real time. In 2023 the Department of Energy estimated VPP capacity at around 30 to 60 GW, some 4% to 8% of peak electricity demand. Texas has two Tesla-operated VPPs. Eligible Tesla Electric members automatically join the Virtual Power Plant, made up of Tesla Powerwall batteries. As such the VPP takes power when the grid needs support. Tesla pays the owner a monthly fee in addition to payment per unit of energy delivered. California has two electric markets: private retail and wholesale. As of 2022 PG&E paid VPP providers $2/kWh during peak demand. As of August/September 2022, SunRun VPP often delivered 80 MW at peak times, and Tesla VPP supplied 68 MW. Vermont’s Green Mountain Power, works with Tesla to offer a Powerwall to participating customers at a discounted rate. Three Massachusetts utilities, National Grid, Eversource, and Cape Light Compact implemented a VPP. Europe The Institute for Solar Energy Supply Technology of Germany's University of Kassel pilot-tested a VPP that linked solar, wind, biogas, and pumped-storage hydroelectricity to provide load-following power from renewable sources. VPPs are commonly referred to as aggregators. One VPP operated on the Scottish Inner Hebrides island of Eigg. Next Kraftwerke from Cologne, Germany operates a VPP in seven European countries providing peak-load resources, power trading and grid balancing services. The company aggregates energy from biogas, solar and wind as well as large-scale power consumers. Distribution network operator, UK Power Networks, and Powervault, a battery manufacturer and power aggregator, created London's first VPP in 2018, installing a fleet of battery systems at 40+ homes across the London Borough of Barnet, offering capacity of 0.32 MWh. This scheme was expanded through a second contract in St Helier, London in 2020. In September 2019, SMS plc entered the VPP sector in the United Kingdom following the acquisition of Irish energy tech start-up, Solo Energy. In October 2020, Tesla launched its Tesla Energy Plan in the UK in partnership with Octopus Energy, allowing households to join its VPP. Participant homes are powered with renewable energy either from solar panels or from Octopus Energy. In June 2024, German companies Enpal and Entrix announced plans to create Europe's largest Virtual Power Plant (VPP). The VPP will integrate a large number of decentralized energy resources including solar panels, batteries, and electric vehicles. 
Enpal, already a leading solar installer with more than 70,000 installed systems, plans to connect thousands of households with solar power and storage units to the VPP, offering greater energy independence and grid stability. Australia In August 2020, Tesla began installing a 5 kW rooftop solar system and 13.5 kWh Powerwall battery at each Housing SA premises, at no cost to the tenant. As South Australia's largest virtual power plant, the battery and solar systems were centrally managed, collectively delivering 20 MW of generation capacity and 54 MWh of energy storage. In August 2016, AGL Energy announced a 5 MW virtual-power-plant scheme for Adelaide, Australia. The company planned to supply battery and photovoltaic systems from Sunverge Energy, of San Francisco, to 1,000 households and businesses. The systems cost consumers AUD $3,500 each and were expected to recoup the expense within 7 years under the distribution network tariffs current at the time. The scheme was worth AUD $20 million and was billed as the largest in the world.
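As a concrete illustration of one risk measure named in the energy trading section above, the sketch below computes the conditional value at risk (CVaR) of a set of simulated daily VPP revenues. The scenario distribution, confidence level, and function names are invented for illustration only; a real VPP bidding model would embed CVaR inside a scenario-based optimization rather than evaluate it after the fact.

```python
import numpy as np

def cvar(revenues, alpha=0.95):
    """Conditional value at risk of a revenue distribution.

    Returns the mean revenue over the worst (1 - alpha) fraction of scenarios,
    i.e. the expected outcome given that the result falls in the lower tail.
    Lower values indicate a riskier position.
    """
    revenues = np.sort(np.asarray(revenues, dtype=float))
    tail_count = max(1, int(np.ceil((1.0 - alpha) * revenues.size)))
    return revenues[:tail_count].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Hypothetical daily revenues (EUR) for a small VPP: prices and wind/solar
    # output vary, so on some days the aggregated fleet earns very little.
    scenarios = rng.normal(loc=12_000, scale=4_000, size=10_000)
    print(f"Expected revenue : {scenarios.mean():,.0f} EUR")
    print(f"95% CVaR         : {cvar(scenarios, alpha=0.95):,.0f} EUR")
```

A risk-averse operator would prefer the strategy that gives up a little expected revenue in exchange for a higher CVaR, which is the kind of trade-off the other hedging strategies listed above formalize in different ways.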
https://en.wikipedia.org/wiki/Naja
Naja
Naja is a genus of venomous elapid snakes commonly known as cobras (or "true cobras"). Members of the genus Naja are the most widespread and the most widely recognized as "true" cobras. Various species occur in regions throughout Africa, Southwest Asia, South Asia, and Southeast Asia. Several other elapid species are also called "cobras", such as the king cobra and the rinkhals, but neither is a true cobra, in that they do not belong to the genus Naja, but instead each belong to monotypic genera Hemachatus (the rinkhals) and Ophiophagus (the king cobra/hamadryad). Until recently, the genus Naja had 20 to 22 species, but it has undergone several taxonomic revisions in recent years, so sources vary greatly. Wide support exists, though, for a 2009 revision that synonymised the genera Boulengerina and Paranaja with Naja. According to that revision, the genus Naja now includes 38 species. Etymology The origin of this genus name is from the Sanskrit nāga (with a hard "g") meaning "snake". Some hold that the Sanskrit word is cognate with English "snake", Germanic: *snēk-a-, Proto-IE: *(s)nēg-o-, but Manfred Mayrhofer calls this etymology "not credible", and suggests a more plausible etymology connecting it with Sanskrit nagna, "hairless" or "naked". Description Naja species vary in length and most are relatively slender-bodied snakes. Most species are capable of attaining lengths of . Maximum lengths for some of the larger species of cobras are around , with the forest cobra arguably being the longest species. All have a characteristic ability to raise the front quarters of their bodies off the ground and flatten their necks to appear larger to a potential predator. Fang structure is variable; all species except the Indian cobra (Naja naja), Egyptian Cobra (Naja Haje )and Caspian cobra (Naja oxiana) have some degree of adaptation to spitting. Venom All species in the genus Naja are capable of delivering a fatal bite to a human. Most species have strongly neurotoxic venom, which attacks the nervous system, causing paralysis, but many also have cytotoxic features that cause swelling and necrosis, and have a significant anticoagulant effect. Some also have cardiotoxic components to their venom. Several Naja species, referred to as spitting cobras, have a specialized venom delivery mechanism, in which their front fangs, instead of ejecting venom downward through an elongated discharge orifice (similar to a hypodermic needle), have a shortened, rounded opening in the front surface, which ejects the venom forward, out of the mouth. While typically referred to as "spitting", the action is more like squirting. The range and accuracy with which they can shoot their venom varies from species to species, but it is used primarily as a defense mechanism. The venom has little or no effect on unbroken skin, but if it enters the eyes, it can cause a severe burning sensation and temporary or even permanent blindness if not washed out immediately and thoroughly. A recent study showed that all three spitting cobra lineages have evolved higher pain-inducing activity through increased phospholipase A2 levels, which potentiate the algesic action of the cytotoxins present in most cobra venoms. The timing of the origin of spitting in African and Asian Naja species corresponds to the separation of the human and chimpanzee evolutionary lineages in Africa and the arrival of Homo erectus in Asia. 
The authors therefore hypothesise that the arrival of bipedal, tool-using primates may have triggered the evolution of spitting in cobras. The Caspian cobra (N. oxiana) of Central Asia is the most venomous Naja species. According to a 2019 study by Kazemi-Lomedasht et al., the murine LD50 via intravenous injection (IV) for Naja oxiana (Iranian specimens) was estimated to be 0.14 mg/kg (0.067–0.21 mg/kg), more potent than the sympatric Pakistani Naja naja karachiensis and the Naja naja indusi found in far north and northwest India and adjacent Pakistani border areas (0.22 mg/kg), the Thai Naja kaouthia (0.2 mg/kg), and Naja philippinensis at 0.18 mg/kg (0.11–0.3 mg/kg). Latifi (1984) listed a subcutaneous LD50 of 0.2 mg/kg (0.16–0.47 mg/kg) for N. oxiana. The crude venom of N. oxiana produced the lowest known lethal dose (LCLo) of 0.005 mg/kg, the lowest among all cobra species ever recorded, derived from an individual case of envenomation by intracerebroventricular injection. The banded water cobra's LD50 was estimated to be 0.17 mg/kg via IV according to Christensen (1968). The Philippine cobra (N. philippinensis) has an average murine LD50 of 0.18 mg/kg IV (Tan et al., 2019). Minton (1974) reported 0.14 mg/kg IV for the Philippine cobra. The Samar cobra (Naja samarensis), another cobra species endemic to the southern islands of the Philippines, is reported to have an LD50 of 0.2 mg/kg, similar in potency to the monocled cobras (Naja kaouthia) found only in Thailand and eastern Cambodia, which also have an LD50 of 0.2 mg/kg. The spectacled cobras that are sympatric with N. oxiana, in Pakistan and far northwest India, also have a highly potent venom, with an LD50 of 0.22 mg/kg. Other highly venomous species are the forest cobras and/or water cobras (Boulengerina subgenus). The murine intraperitoneal LD50 values of Naja annulata and Naja christyi venoms were 0.143 mg/kg (range of 0.131 mg/kg to 0.156 mg/kg) and 0.120 mg/kg, respectively. Christensen (1968) also listed an IV LD50 of 0.17 mg/kg for N. annulata. The Chinese cobra (N. atra) is also highly venomous. Minton (1974) listed an LD50 of 0.3 mg/kg intravenous (IV), while Lee and Tseng list a value of 0.67 mg/kg by subcutaneous injection (SC). The LD50 of the Cape cobra (N. nivea), according to Minton (1974), was 0.35 mg/kg (IV) and 0.4 mg/kg (SC). The Senegalese cobra (N. senegalensis) has a murine LD50 of 0.39 mg/kg via IV (Tan et al., 2021). The Egyptian cobra (N. haje) of Ugandan locality had an IV LD50 of 0.43 mg/kg (0.35–0.52 mg/kg). The Naja species are a medically important group of snakes due to the number of bites and fatalities they cause across their geographical range. They range throughout Africa (including some parts of the Sahara where Naja haje can be found), Southwest Asia, Central Asia, South Asia, East Asia, and Southeast Asia. Roughly 30% of bites by some cobra species are dry bites, and thus do not cause envenomation (a dry bite is a bite by a venomous snake that does not inject venom). Brown (1973) noted that cobras with higher rates of 'sham strikes' tend to be more venomous, while those with a less toxic venom tend to envenomate more frequently when attempting to bite. This can vary even between specimens of the same species. This is unlike related elapids, such as those species belonging to Dendroaspis (mambas) and Bungarus (kraits), with mambas tending to almost always envenomate and kraits tending to envenomate more often than they attempt 'sham strikes'. Many factors influence the differences in cases of fatality among different species within the same genus.
Among cobras, the cases of fatal outcome of bites in both treated and untreated victims can be quite large. For example, mortality rates among untreated cases of envenomation by the cobras as a whole group ranges from 6.5–10% for N kaouthia. to about 80% for N. oxiana. Mortality rate for Naja atra is between 15 and 20%, 5–10% for N. nigricollis, 50% for N. nivea, 20–25% for N. naja, In cases where victims of cobra bites are medically treated using normal treatment protocol for elapid type envenomation, differences in prognosis depend on the cobra species involved. The vast majority of envenomated patients treated make quick and complete recoveries, while other envenomated patients who receive similar treatment result in fatalities. The most important factors in the difference of mortality rates among victims envenomated by cobras is the severity of the bite and which cobra species caused the envenomation. The Caspian cobra (N. oxiana) and the Philippine cobra (N. philippinensis) are the two cobra species with the most toxic venom based on studies on mice. Both species cause prominent neurotoxicity and progression of life-threatening symptoms following envenomation. Death has been reported in as little as 30 minutes in cases of envenomation by both species. N. philippinensis purely neurotoxic venom causes prominent neurotoxicity with minimal local tissue damage and pain and patients respond very well to antivenom therapy if treatment is administered rapidly after envenomation. Envenomation caused by N. oxiana is much more complicated. In addition to prominent neurotoxicity, very potent cytotoxic and cardiotoxic components are in this species' venom. Local effects are marked and manifest in all cases of envenomation: severe pain, severe swelling, bruising, blistering, and tissue necrosis. Renal damage and cardiotoxicity are also clinical manifestations of envenomation caused by N. oxiana, though they are rare and secondary. The untreated mortality rate among those envenomed by N. oxiana approaches 80%, the highest among all species within the genus Naja. Antivenom is not as effective for envenomation by this species as it is for other Asian cobras within the same region, like the Indian cobra (N. naja) and due to the dangerous toxicity of this species' venom, massive amounts of antivenom are often required for patients. As a result, a monovalent antivenom serum is being developed by the Razi Vaccine and Serum Research Institute in Iran. Response to treatment with antivenom is generally poor among patients, so mechanical ventilation and endotracheal intubation is required. As a result, mortality among those treated for N. oxiana envenomation is still relatively high (up to 30%) compared to all other species of cobra (<1%). Taxonomy The genus contains several species complexes of closely related and often similar-looking species, some of them only recently described or defined. Several recent taxonomic studies have revealed species not included in the current listing in ITIS: Naja anchietae (Bocage, 1879), Anchieta's cobra, is regarded as a subspecies of N. haje by Mertens (1937) and of N. annulifera by Broadley (1995). It is regarded as a full species by Broadley and Wüster (2004). Naja arabica Scortecci, 1932, the Arabian cobra, has long been considered a subspecies of N. haje, but was recently raised to the status of species. 
Naja ashei Broadley and Wüster, 2007, Ashe's spitting cobra, is a newly described species found in Africa and also a highly aggressive snake; it can spit a large amount of venom. Naja nigricincta Bogert, 1940, was long regarded as a subspecies of N. nigricollis, but was recently found to be a full species (with N. n. woodi as a subspecies). Naja senegalensis Trape et al., 2009, is a new species encompassing what were previously considered to be the West African savanna populations of N. haje. Naja peroescobari Ceríaco et al. 2017, is a new species encompassing what was previously considered the São Tomé population of N. melanoleuca. Naja guineensis Broadley et al., 2018, is a new species encompassing what were previously considered to be the West African forest populations of N. melanoleuca. Naja savannula Broadley et al., 2018, is a new species encompassing what were previously considered to be the West African savanna populations of N. melanoleuca. Naja subfulva Laurent, 1955, previously regarded as a subspecies of N. melanoleuca, was recently recognized as a full species. Two recent molecular phylogenetic studies have also supported the incorporation of the species previously assigned to the genera Boulengerina and Paranaja into Naja, as both are closely related to the forest cobra (Naja melanoleuca). In the most comprehensive phylogenetic study to date, 5 putative new species were initially identified, of which 3 have since been named. The controversial amateur herpetologist Raymond Hoser proposed the genus Spracklandus for the African spitting cobras. Wallach et al. suggested that this name was not published according to the Code and suggested instead the recognition of four subgenera within Naja: Naja for the Asiatic cobras, Boulengerina for the African forest, water and burrowing cobras, Uraeus for the Egyptian and Cape cobra group and Afronaja for the African spitting cobras. International Commission on Zoological Nomenclature issued an opinion that it "finds no basis under the provisions of the Code for regarding the name Spracklandus as unavailable". Asiatic cobras are believed to further be split into two groups of southeastern Asian cobras (N. siamensis, N. sumatrana, N. philippinensis, N. samarensis, N. sputatrix, and N. mandalayensis) and western and northern Asian cobras (N. oxiana, N. kaouthia, N. sagittifera, and N. atra) with Naja naja serving as a basal lineage to all species. Species Not including the nominate subspecies † Extinct T Type species
https://en.wikipedia.org/wiki/Shadow%20zone
Shadow zone
A seismic shadow zone is an area of the Earth's surface where seismographs cannot detect direct P waves and/or S waves from an earthquake. This is due to liquid layers or structures within the Earth's interior. The most recognized shadow zone is due to the core-mantle boundary, where P waves are refracted and S waves are stopped at the liquid outer core; however, any liquid boundary or body can create a shadow zone. For example, magma reservoirs with a high enough percent melt can create seismic shadow zones. Background The Earth is made up of different structures: the crust, the mantle, the inner core and the outer core. The crust, mantle, and inner core are typically solid; however, the outer core is entirely liquid. A liquid outer core was first shown in 1906 by the geologist Richard Oldham. Oldham observed seismograms from various earthquakes and saw that some seismic stations did not record direct S waves, particularly ones that were 120° away from the hypocenter of the earthquake. In 1913, Beno Gutenberg noticed the abrupt change in the seismic velocities of P waves and the disappearance of S waves at the core-mantle boundary. Gutenberg attributed this to a solid mantle and liquid outer core, calling it the Gutenberg discontinuity. Seismic wave properties The main observational constraint on identifying liquid layers and/or structures within the Earth comes from seismology. When an earthquake occurs, seismic waves radiate out spherically from the earthquake's hypocenter. Two types of body waves travel through the Earth: primary seismic waves (P waves) and secondary seismic waves (S waves). P waves travel with motion in the same direction as the wave propagates, and S waves travel with motion perpendicular to the wave propagation (transverse). The P waves are refracted by the liquid outer core of the Earth and are not detected between 104° and 140° (between approximately 11,570 and 15,570 km or 7,190 and 9,670 mi) from the hypocenter. This is due to Snell's law: when a seismic wave encounters a boundary, it either refracts or reflects. In this case, the P waves refract due to density differences and are greatly reduced in velocity. This is considered the P wave shadow zone. The S waves cannot pass through the liquid outer core and are not detected more than 104° (approximately 11,570 km or 7,190 mi) from the epicenter. This is considered the S wave shadow zone. However, P waves that refract through the outer core and refract into another P wave (PKP wave) on leaving the outer core can be detected within the shadow zone. Additionally, S waves that refract to P waves on entering the outer core and then refract back to an S wave on leaving the outer core can also be detected in the shadow zone (SKS waves). The reason for this is that P wave and S wave velocities are governed by different properties of the material through which they travel, and by the different mathematical relationships they share in each case. The three properties are incompressibility (K), density (ρ) and rigidity (μ). P wave velocity is equal to v_P = √((K + 4μ/3) / ρ), while S wave velocity is equal to v_S = √(μ / ρ). S wave velocity is entirely dependent on the rigidity of the material it travels through. Liquids have zero rigidity, making the S wave velocity zero when traveling through a liquid. Overall, S waves are shear waves, and shear stress is a type of deformation that cannot occur in a liquid. Conversely, P waves are compressional waves and are only partially dependent on rigidity.
P waves still maintain some velocity (though it can be greatly reduced) when traveling through a liquid. Other observations and implications Although the core-mantle boundary casts the largest shadow zone, smaller structures, such as magma bodies, can also cast a shadow zone. For example, in 1981, Páll Einarsson conducted a seismic investigation of the Krafla Caldera in northeast Iceland. In this study, Einarsson placed a dense array of seismometers over the caldera and recorded the earthquakes that occurred. The resulting seismograms showed an absence of S waves and/or small S wave amplitudes. Einarsson attributed these results to a magma reservoir. In this case, the magma reservoir had a high enough percent melt to directly affect S waves. In areas where no S waves were recorded, the S waves were encountering enough liquid that no solid grains were touching. In areas with highly attenuated (small-amplitude) S waves, there was still a percentage of melt, but enough solid grains were touching that S waves could travel through that part of the magma reservoir. Between 2014 and 2018, a geophysicist in Taiwan, Cheng-Horng Lin, investigated the magma reservoir beneath the Tatun Volcanic Group in Taiwan. Lin's research group used deep earthquakes and seismometers on or near the Tatun Volcanic Group to identify changes in P and S waveforms. Their results showed P wave delays and the absence of S waves in various locations. Lin attributed this finding to a magma reservoir with at least 40% melt that casts an S wave shadow zone. However, a recent study done by National Chung Cheng University used a dense array of seismometers and only saw S wave attenuation associated with the magma reservoir. This study investigated the cause of the S wave shadow zone Lin observed and attributed it to a magma diapir above the subducting Philippine Sea plate. Though it was not a magma reservoir, there was still a structure with enough melt/liquid to cause an S wave shadow zone. The existence of shadow zones, more specifically S wave shadow zones, could have implications for the eruptibility of volcanoes throughout the world. When volcanoes have a high enough percent melt to fall below the rheological lockup (the crystal fraction that determines whether a volcano is eruptible or not), they become eruptible. Determining the percent melt of a volcano could help with predictive modeling and with assessing current and future hazards. At an actively erupting volcano, Mt. Etna in Italy, a study was done in 2021 that showed both an absence of S waves in some regions and highly attenuated S waves in others, depending on where the receivers were located above the magma chamber. Previously, in 2014, a study was done to model the mechanism leading to the December 28, 2014, eruption. This study showed that an eruption could be triggered at between 30 and 70% melt.
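The Snell's law argument above can be made quantitative with a short sketch. The velocities below are rough, PREM-like values for the lowermost mantle and the top of the outer core, chosen only for illustration; the point is that the large drop in P wave velocity bends rays sharply toward the normal at the core-mantle boundary, which is what opens up the P wave shadow zone.

```python
import math

def refraction_angle(incidence_deg, v_incident, v_transmitted):
    """Snell's law: sin(i)/v1 = sin(r)/v2. Returns the refraction angle in degrees,
    or None if the ray is totally reflected (no transmitted wave)."""
    sin_r = math.sin(math.radians(incidence_deg)) * v_transmitted / v_incident
    if abs(sin_r) > 1.0:
        return None
    return math.degrees(math.asin(sin_r))

# Rough, illustrative velocities (km/s): P wave in the lowermost mantle vs. top of the outer core.
V_MANTLE_P = 13.7
V_OUTER_CORE_P = 8.1

for incidence in (10, 30, 50, 70):
    r = refraction_angle(incidence, V_MANTLE_P, V_OUTER_CORE_P)
    print(f"incidence {incidence:2d} deg  ->  refracted {r:5.1f} deg into the outer core")
```

Because the transmitted ray always dives more steeply than the incident ray, P waves that enter the core re-emerge much farther around the globe than rays that just graze the boundary, leaving the band between roughly 104° and 140° without direct P arrivals.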
https://en.wikipedia.org/wiki/Flow%20visualization
Flow visualization
Flow visualization or flow visualisation in fluid dynamics is used to make flow patterns visible, in order to get qualitative or quantitative information on them. Overview Flow visualization is the art of making flow patterns visible. Most fluids (air, water, etc.) are transparent, so their flow patterns are invisible to the naked eye without methods to make them visible. Historically, such methods were experimental. With the development of computer models and CFD simulating flow processes (e.g. the distribution of air-conditioned air in a new car), purely computational methods have also been developed. Methods of visualization In experimental fluid dynamics, flows are visualized by three methods: Surface flow visualization: This reveals the flow streamlines in the limit as a solid surface is approached. Colored oil applied to the surface of a wind tunnel model provides one example (the oil responds to the surface shear stress and forms a pattern). Particle tracer methods: Particles, such as smoke or microspheres, can be added to a flow to trace the fluid motion. We can illuminate the particles with a sheet of laser light in order to visualize a slice of a complicated fluid flow pattern. Assuming that the particles faithfully follow the streamlines of the flow, we can not only visualize the flow but also measure its velocity using the particle image velocimetry or particle tracking velocimetry methods. Particles with densities that match that of the fluid will exhibit the most accurate visualization. Optical methods: Some flows reveal their patterns by way of changes in their optical refractive index. These are visualized by optical methods known as the shadowgraph, schlieren photography, and interferometry. More directly, dyes can be added to (usually liquid) flows to measure concentrations, typically employing the light attenuation or laser-induced fluorescence techniques. In scientific visualization, flows are visualized with two main methods: Analytical methods that analyse a given flow and show properties like streamlines, streaklines, and pathlines. The flow can either be given in a finite representation or as a smooth function. Texture advection methods that "bend" textures (or images) according to the flow. As the image is always finite (though the flow could be given as a smooth function), these methods visualize approximations of the real flow. Application In computational fluid dynamics, the numerical solution of the governing equations can yield all the fluid properties in space and time. This overwhelming amount of information must be displayed in a meaningful form. Thus flow visualization is as important in computational as in experimental fluid dynamics.
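The particle tracer idea described above translates directly into a small numerical experiment. The sketch below is a minimal illustration with an arbitrary velocity field, step size, and seeding: it advects passive tracers through a steady two-dimensional vortex using a forward Euler step, resting on the same assumption used in particle image velocimetry, namely that tracers faithfully follow the local velocity.

```python
import numpy as np

def velocity(points):
    """Steady 2-D solid-body vortex about the origin: u = (-y, x)."""
    x, y = points[:, 0], points[:, 1]
    return np.column_stack((-y, x))

def advect(points, dt=0.01, steps=200):
    """Integrate tracer positions with forward Euler and return the pathlines."""
    path = [points.copy()]
    for _ in range(steps):
        points = points + dt * velocity(points)
        path.append(points.copy())
    return np.array(path)  # shape: (steps + 1, n_particles, 2)

if __name__ == "__main__":
    # Seed a few tracers along the x-axis and watch them orbit the vortex core.
    seeds = np.array([[0.2, 0.0], [0.5, 0.0], [1.0, 0.0]])
    pathlines = advect(seeds)
    for i, seed in enumerate(seeds):
        r0 = np.linalg.norm(seed)
        r1 = np.linalg.norm(pathlines[-1, i])
        print(f"tracer {i}: start radius {r0:.2f}, end radius {r1:.2f}")
```

In a steady flow these pathlines coincide with streamlines and streaklines; the slight growth in radius printed at the end is purely numerical error from the forward Euler step, so a practical tool would use a higher-order integrator.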
https://en.wikipedia.org/wiki/Solutional%20cave
Solutional cave
A solutional cave, solution cave, or karst cave is a cave usually formed in a soluble rock like limestone (calcium carbonate, CaCO3). It is the most frequently occurring type of cave. It can also form in other rocks, including chalk, dolomite, marble, salt beds, and gypsum. Process Bedrock is dissolved by carbonic acid in rainwater and groundwater, or by humic acids from decaying vegetation, that seep through bedding planes, faults, joints, and the like. Over time, the surface terrain breaks up into clints separated by grikes and punctuated by sinkholes into which streams may disappear; crevices expand as the walls are dissolved, becoming caves or cave systems. These may turn into large caverns or dolines when the roof collapses. The portions of a solutional cave that are below the water table or the local level of the groundwater are flooded. Limestone caves The largest and most abundant solutional caves are located in limestone. Limestone caves are often adorned with calcium carbonate formations produced through slow precipitation. These include flowstones, stalactites, stalagmites, helictites, soda straws, calcite rafts, and columns. These secondary mineral deposits in caves are called "speleothems". Carbonic acid dissolution Limestone dissolves under the action of rainwater and groundwater charged with H2CO3 (carbonic acid) and naturally occurring organic acids. The dissolution process produces a distinctive landform known as "karst", characterized by sinkholes and underground drainage. Solutional caves in this landform are often called karst caves. Sulfuric acid dissolution Lechuguilla Cave in New Mexico and nearby Carlsbad Caverns are now believed to be examples of another type of solutional cave. They were formed by H2S (hydrogen sulfide) gas rising from below, where reservoirs of petroleum give off sulfurous fumes. This gas mixes with groundwater and forms H2SO4 (sulfuric acid). The acid then dissolves the limestone from below, rather than from above by acidic water percolating down from the surface. Examples
Australia: Jenolan Caves, New South Wales
Malaysia: List of caves in Malaysia
Taiwan: Black Dwarf Cave, Pingtung County
United States: Jewel Cave National Monument, South Dakota; Mammoth Cave National Park, Kentucky; Russell Cave National Monument, Alabama; Wind Cave National Park, South Dakota; Oregon Caves National Monument and Preserve, Oregon; Cumberland Caverns, Tennessee
Vietnam: Hang Sơn Đoòng, Quảng Bình Province
Germany: König-Otto-Tropfsteinhöhle
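The carbonic acid dissolution step described above is conventionally summarized by two reversible reactions; this is a simplified overall scheme rather than a full account of the aqueous carbonate species involved.

```latex
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3}
\qquad
\mathrm{CaCO_3 + H_2CO_3 \rightleftharpoons Ca(HCO_3)_2}
```

Because both steps are reversible, loss of dissolved CO2 (for example by degassing or evaporation) drives the second reaction back to the left, which is how the speleothems mentioned above are precipitated from dripping water.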
https://en.wikipedia.org/wiki/Future%20of%20Earth
Future of Earth
The biological and geological future of Earth can be extrapolated based on the estimated effects of several long-term influences. These include the chemistry at Earth's surface, the cooling rate of the planet's interior, gravitational interactions with other objects in the Solar System, and a steady increase in the Sun's luminosity. An uncertain factor is the influence of human technology such as climate engineering, which could cause significant changes to the planet. For example, the current Holocene extinction is being caused by technology, and the effects may last for up to five million years. In turn, technology may result in the extinction of humanity, leaving the planet to gradually return to a slower evolutionary pace resulting solely from long-term natural processes. Over time intervals of hundreds of millions of years, random celestial events pose a global risk to the biosphere, which can result in mass extinctions. These include impacts by comets or asteroids and the possibility of a near-Earth supernova—a massive stellar explosion within a radius of the Sun. Other large-scale geological events are more predictable. Milankovitch's theory predicts that the planet will continue to undergo glacial periods at least until the Quaternary glaciation comes to an end. These periods are caused by the variations in eccentricity, axial tilt, and precession of Earth's orbit. As part of the ongoing supercontinent cycle, plate tectonics will probably create a supercontinent in 250–350 million years. Sometime in the next 1.5–4.5 billion years, Earth's axial tilt may begin to undergo chaotic variations, with changes in the axial tilt of up to 90°. The luminosity of the Sun will steadily increase, causing a rise in the solar radiation reaching Earth and resulting in a higher rate of weathering of silicate minerals. This will affect the carbonate–silicate cycle, which will reduce the level of carbon dioxide in the atmosphere. In about 600 million years from now, the level of carbon dioxide will fall below the level needed to sustain C3 carbon fixation photosynthesis used by trees. Some plants use the C4 carbon fixation method to persist at carbon dioxide concentrations as low as ten parts per million. However, in the long term, plants will likely die off altogether. The extinction of plants would cause the demise of almost all animal life since plants are the base of much of the animal food chain. In about one billion years, solar luminosity will be 10% higher, causing the atmosphere to become a "moist greenhouse", resulting in a runaway evaporation of the oceans. As a likely consequence, plate tectonics and the entire carbon cycle will end. Then, in about 2–3 billion years, the planet's magnetic dynamo may cease, causing the magnetosphere to decay, leading to an accelerated loss of volatiles from the outer atmosphere. Four billion years from now, the increase in Earth's surface temperature will cause a runaway greenhouse effect, creating conditions more extreme than present-day Venus and heating Earth's surface enough to melt it. By that point, all life on Earth will be extinct. Finally, the planet will likely be absorbed by the Sun in about 7.5 billion years, after the star has entered the red giant phase and expanded beyond the planet's current orbit. Human influence Humans play a key role in the biosphere, with the large human population dominating many of Earth's ecosystems. 
This has resulted in a widespread, ongoing mass extinction of other species during the present geological epoch, now known as the Holocene extinction. The large-scale loss of species caused by human influence since the 1950s has been called a biotic crisis, with an estimated 10% of the total species lost as of 2007. At current rates, about 30% of species are at risk of extinction in the next hundred years. The Holocene extinction event is the result of habitat destruction, the widespread distribution of invasive species, poaching, and climate change. In the present day, human activity has had a significant impact on the surface of the planet. More than a third of the land surface has been modified by human actions, and humans use about 20% of global primary production. The concentration of carbon dioxide in the atmosphere has increased by close to 50% since the start of the Industrial Revolution. The consequences of a persistent biotic crisis have been predicted to last for at least five million years. It could result in a decline in biodiversity and homogenization of biotas, accompanied by a proliferation of species that are opportunistic, such as pests and weeds. Novel species may emerge; in particular taxa that prosper in human-dominated ecosystems may rapidly diversify into many new species. Microbes are likely to benefit from the increase in nutrient-enriched environmental niches. No new species of existing large vertebrates are likely to arise and food chains will probably be shortened. There are multiple scenarios for known risks that can have a global impact on the planet. From the perspective of humanity, these can be subdivided into survivable risks and terminal risks. Risks that humans pose to themselves include climate change, the misuse of nanotechnology, a nuclear holocaust, warfare with a programmed superintelligence, a genetically engineered disease, or a disaster caused by a physics experiment. Similarly, several natural events may pose a doomsday threat, including a highly virulent disease, the impact of an asteroid or comet, runaway greenhouse effect, and resource depletion. There may be the possibility of an infestation by an extraterrestrial lifeform. The actual odds of these scenarios occurring are difficult if not impossible to deduce. Should the human species become extinct, then the various features assembled by humanity will begin to decay. The largest structures have an estimated decay half-life of about 1,000 years. The last surviving structures would most likely be open-pit mines, large landfills, major highways, wide canal cuts, and earth-fill flank dams. A few massive stone monuments like the pyramids at the Giza Necropolis or the sculptures at Mount Rushmore may still survive in some form after a million years. Cataclysmic astronomical events As the Sun orbits the Milky Way, wandering stars such as Gliese 710 may approach close enough to have a disruptive influence on the Solar System. A close stellar encounter may cause a significant reduction in the perihelion distances of comets in the Oort cloud—a spherical region of icy bodies orbiting within half a light-year of the Sun. Such an encounter can trigger a 40-fold increase in the number of comets reaching the inner Solar System. Impacts from these comets can trigger a mass extinction of life on Earth. These disruptive encounters occur an average of once every 45 million years. There is a 1% chance every billion years that a star will pass within of the Sun, potentially disrupting the Solar System. 
The mean time for the Sun to collide with another star in the solar neighborhood is approximately 30 trillion (3 × 10^13) years, which is much longer than the estimated age of the Universe, at approximately 13.8 billion years. This can be taken as an indication of the low likelihood of such an event occurring during the lifetime of the Earth. Based on results from the Gaia telescope's second data release from April 2018, an estimated 694 stars will approach within 5 parsecs of the Solar System in the next 15 million years. Of these, 26 have a good probability of coming within and 7 within . The energy released from the impact of an asteroid or comet with a diameter of or larger is sufficient to create a global environmental disaster and cause a statistically significant increase in the number of species extinctions. Among the deleterious effects resulting from a major impact event is a cloud of fine dust ejecta blanketing the planet, blocking some direct sunlight from reaching the Earth's surface, thus lowering land temperatures by about within a week and halting photosynthesis for several months (similar to a nuclear winter). The mean time between major impacts is estimated to be at least 100 million years. Simulations demonstrate that, during the last 540 million years, such an impact rate would have been sufficient to cause five or six mass extinctions and 20 to 30 lower-severity events. This matches the geologic record of significant extinctions during the Phanerozoic Eon. Such events can be expected to continue. A supernova is a cataclysmic explosion of a star. Within the Milky Way galaxy, supernova explosions occur on average once every 40 years. During the history of Earth, multiple such events have likely occurred within a distance of 100 light-years; an explosion this close is known as a near-Earth supernova. Explosions inside this distance can contaminate the planet with radioisotopes and possibly impact the biosphere. Gamma rays emitted by a supernova react with nitrogen in the atmosphere, producing nitrogen oxides. These molecules cause a depletion of the ozone layer that protects the surface from ultraviolet (UV) radiation from the Sun. An increase in UV-B radiation of only 10–30% is sufficient to cause a significant impact on life, particularly to the phytoplankton that form the base of the oceanic food chain. A supernova explosion at a distance of 26 light-years will reduce the ozone column density by half. On average, a supernova explosion occurs within 32 light-years once every few hundred million years, resulting in a depletion of the ozone layer lasting several centuries. Over the next two billion years, there will be about 20 supernova explosions and one gamma ray burst that will have a significant impact on the planet's biosphere. The incremental effect of gravitational perturbations between the planets causes the inner Solar System as a whole to behave chaotically over long time periods. This does not significantly affect the stability of the Solar System over intervals of a few million years or less, but over billions of years, the orbits of the planets become unpredictable. Computer simulations of the Solar System's evolution over the next five billion years suggest that there is a small (less than 1%) chance that a collision could occur between Earth and either Mercury, Venus, or Mars. During the same interval, the odds that Earth will be scattered out of the Solar System by a passing star are on the order of 1 in 100,000 (0.001%). 
In such a scenario, the oceans would freeze solid within several million years, leaving only a few pockets of liquid water about underground. There is a remote chance that Earth will instead be captured by a passing binary star system, allowing the planet's biosphere to remain intact. The odds of this happening are about 1 in 3 million. Orbit and rotation The gravitational perturbations of the other planets in the Solar System combine to modify the orbit of Earth and the orientation of its rotation axis. These changes can influence the planetary climate. Despite such interactions, highly accurate simulations show that overall, Earth's orbit is likely to remain dynamically stable for billions of years into the future. In all 1,600 simulations, the planet's semimajor axis, eccentricity, and inclination remained nearly constant. Glaciation Historically, there have been cyclical ice ages in which glacial sheets periodically covered the higher latitudes of the continents. Ice ages may occur because of changes in ocean circulation and continentality induced by plate tectonics. The Milankovitch theory predicts that glacial periods occur during ice ages because of astronomical factors in combination with climate feedback mechanisms. The primary astronomical drivers are a higher than normal orbital eccentricity, a low axial tilt (or obliquity), and the alignment of the northern hemisphere's summer solstice with the aphelion. Each of these effects occur cyclically. For example, the eccentricity changes over time cycles of about 100,000 and 400,000 years, with the value ranging from less than 0.01 up to 0.05. This is equivalent to a change of the semiminor axis of the planet's orbit from 99.95% of the semimajor axis to 99.88%, respectively. Earth is passing through an ice age known as the quaternary glaciation, and is presently in the Holocene interglacial period. This period would normally be expected to end in about 25,000 years. However, the increased rate at which humans release carbon dioxide into the atmosphere may delay the onset of the next glacial period until at least 50,000–130,000 years from now. On the other hand, a global warming period of finite duration (based on the assumption that fossil fuel use will cease by the year 2200) will probably only impact the glacial period for about 5,000 years. Thus, a brief period of global warming induced by a few centuries' worth of greenhouse gas emission would only have a limited impact in the long term. Obliquity The tidal acceleration of the Moon slows the rotation rate of the Earth and increases the Earth-Moon distance. Friction effects—between the core and mantle and between the atmosphere and surface—can dissipate the Earth's rotational energy. These combined effects are expected to increase the length of the day by more than 1.5 hours over the next 250 million years, and to increase the obliquity by about a half degree. The distance to the Moon will increase by about 1.5 Earth radii during the same period. Based on computer models, the presence of the Moon appears to stabilize the obliquity of the Earth, which may help the planet to avoid dramatic climate changes. This stability is achieved because the Moon increases the precession rate of the Earth's rotation axis, thereby avoiding resonances between the precession of the rotation and precession of the planet's orbital plane (that is, the precession motion of the ecliptic). However, as the semimajor axis of the Moon's orbit continues to increase, this stabilizing effect will diminish. 
At some point, perturbation effects will probably cause chaotic variations in the obliquity of the Earth, and the axial tilt may change by angles as high as 90° from the plane of the orbit. This is expected to occur between 1.5 and 4.5 billion years from now. A high obliquity would probably result in dramatic changes in the climate and may destroy the planet's habitability. When the axial tilt of the Earth exceeds 54°, the yearly insolation at the equator is less than that at the poles. The planet could remain at an obliquity of 60° to 90° for periods as long as 10 million years. Geodynamics Tectonics-based events will continue to occur well into the future and the surface will be steadily reshaped by tectonic uplift, extrusions, and erosion. Mount Vesuvius can be expected to erupt about 40 times over the next 1,000 years. During the same period, about five to seven earthquakes of magnitude 8 or greater should occur along the San Andreas Fault, while about 50 events of magnitude 9 may be expected worldwide. Mauna Loa should experience about 200 eruptions over the next 1,000 years, and the Old Faithful Geyser will likely cease to operate. The Niagara Falls will continue to retreat upstream, reaching Buffalo in about 30,000–50,000 years. Supervolcano events are the most impactful geological hazards, generating over of fragmented material and covering thousands of square kilometers with ash deposits. However, they are comparatively rare, occurring on average every 100,000 years. In 10,000 years, the post-glacial rebound of the Baltic Sea will have reduced the depth by about . The Hudson Bay will decrease in depth by 100 m over the same period. After 100,000 years, the island of Hawaii will have shifted about to the northwest. The planet may be entering another glacial period by this time. Continental drift The theory of plate tectonics demonstrates that the continents of the Earth are moving across the surface at the rate of a few centimeters per year. This is expected to continue, causing the plates to relocate and collide. Continental drift is facilitated by two factors: the energy generated within the planet and the presence of a hydrosphere. With the loss of either of these, continental drift will come to a halt. The production of heat through radiogenic processes is sufficient to maintain mantle convection and plate subduction for at least the next 1.1 billion years. At present, the continents of North and South America are moving westward from Africa and Europe. Researchers have produced several scenarios about how this will continue in the future. These geodynamic models can be distinguished by the subduction flux, whereby the oceanic crust moves under a continent. In the introversion model, the younger, interior, Atlantic Ocean becomes preferentially subducted and the current migration of North and South America is reversed. In the extroversion model, the older, exterior, Pacific Ocean remains preferentially subducted and North and South America migrate toward eastern Asia. As the understanding of geodynamics improves, these models will be subject to revision. In 2008, for example, a computer simulation was used to predict that a reorganization of the mantle convection will occur over the next 100 million years, creating a new supercontinent composed of Africa, Eurasia, Australia, Antarctica and South America to form around Antarctica. Regardless of the outcome of the continental migration, the continued subduction process causes water to be transported to the mantle. 
After a billion years from the present, a geophysical model gives an estimate that 27% of the current ocean mass will have been subducted. If this process were to continue unmodified into the future, the subduction and release would reach an equilibrium after 65% of the current ocean mass has been subducted. Introversion Christopher Scotese and his colleagues have mapped out the predicted motions several hundred million years into the future as part of the Paleomap Project. In their scenario, 50 million years from now the Mediterranean Sea may vanish, and the collision between Europe and Africa will create a long mountain range extending to the current location of the Persian Gulf. Australia will merge with Indonesia, and Baja California will slide northward along the coast. New subduction zones may appear off the eastern coast of North and South America, and mountain chains will form along those coastlines. The migration of Antarctica to the north will cause all of its ice sheets to melt. This, along with the melting of the Greenland ice sheets, will raise the average ocean level by . The inland flooding of the continents will result in climate changes. As this scenario continues, by 100 million years from the present, the continental spreading will have reached its maximum extent and the continents will then begin to coalesce. In 250 million years, North America will collide with Africa. South America will wrap around the southern tip of Africa. The result will be the formation of a new supercontinent (sometimes called Pangaea Ultima), with the Pacific Ocean stretching across half the planet. Antarctica will reverse direction and return to the South Pole, building up a new ice cap. Extroversion The first scientist to extrapolate the current motions of the continents was Canadian geologist Paul F. Hoffman of Harvard University. In 1992, Hoffman predicted that the continents of North and South America would continue to advance across the Pacific Ocean, pivoting about Siberia until they begin to merge with Asia. He dubbed the resulting supercontinent, Amasia. Later, in the 1990s, Roy Livermore calculated a similar scenario. He predicted that Antarctica would start to migrate northward, and East Africa and Madagascar would move across the Indian Ocean to collide with Asia. In an extroversion model, the closure of the Pacific Ocean would be complete in about 350 million years. This marks the completion of the current supercontinent cycle, wherein the continents split apart and then rejoin each other about every 400–500 million years. Once the supercontinent is built, plate tectonics may enter a period of inactivity as the rate of subduction drops by an order of magnitude. This period of stability could cause an increase in the mantle temperature at the rate of every 100 million years, which is the minimum lifetime of past supercontinents. As a consequence, volcanic activity may increase. Supercontinent The formation of a supercontinent can dramatically affect the environment. The collision of plates will result in mountain building, thereby shifting weather patterns. Sea levels may fall because of increased glaciation. The rate of surface weathering can rise, increasing the rate at which organic material is buried. Supercontinents can cause a drop in global temperatures and an increase in atmospheric oxygen. This, in turn, can affect the climate, further lowering temperatures. All of these changes can result in more rapid biological evolution as new niches emerge. 
The formation of a supercontinent insulates the mantle. The flow of heat will be concentrated, resulting in volcanism and the flooding of large areas with basalt. Rifts will form and the supercontinent will split up once more. The planet may then experience a warming period as occurred during the Cretaceous period, which marked the split-up of the previous Pangaea supercontinent. Solidification of the outer core The iron-rich core region of the Earth is divided into a diameter solid inner core and a diameter liquid outer core. The rotation of the Earth creates convective eddies in the outer core region that cause it to function as a dynamo. This generates a magnetosphere about the Earth that deflects particles from the solar wind, which prevents significant erosion of the atmosphere from sputtering. As heat from the core is transferred outward toward the mantle, the net trend is for the inner boundary of the liquid outer core region to freeze, thereby releasing thermal energy and causing the solid inner core to grow. This iron crystallization process has been ongoing for about a billion years. In the modern era, the radius of the inner core is expanding at an average rate of roughly per year, at the expense of the outer core. Nearly all of the energy needed to power the dynamo is being supplied by this process of inner core formation. The inner core is expected to consume most or all of the outer core 3–4 billion years from now, resulting in an almost completely solidified core composed of iron and other heavy elements. The surviving liquid envelope will mainly consist of lighter elements that will undergo less mixing. Alternatively, if at some point plate tectonics cease, the interior will cool less efficiently, which would slow down or even stop the inner core's growth. In either case, this can result in the loss of the magnetic dynamo. Without a functioning dynamo, the magnetic field of the Earth will decay in a geologically short time period of roughly 10,000 years. The loss of the magnetosphere will cause an increase in erosion of light elements, particularly hydrogen, from the Earth's outer atmosphere into space, resulting in less favorable conditions for life. Solar evolution The energy generation of the Sun is based upon thermonuclear fusion of hydrogen into helium. This occurs in the core region of the star using the proton–proton chain reaction process. Because there is no convection in the solar core, the helium concentration builds up in that region without being distributed throughout the star. The temperature at the core of the Sun is too low for nuclear fusion of helium atoms through the triple-alpha process, so these atoms do not contribute to the net energy generation that is needed to maintain hydrostatic equilibrium of the Sun. At present, nearly half the hydrogen at the core has been consumed, with the remainder of the atoms consisting primarily of helium. As the number of hydrogen atoms per unit mass decreases, so too does their energy output provided through nuclear fusion. This results in a decrease in pressure support, which causes the core to contract until the increased density and temperature bring the core pressure into equilibrium with the layers above. The higher temperature causes the remaining hydrogen to undergo fusion at a more rapid rate, thereby generating the energy needed to maintain the equilibrium. The result of this process has been a steady increase in the energy output of the Sun. 
When the Sun first became a main sequence star, it radiated only 70% of the current luminosity. The luminosity has increased in a nearly linear fashion to the present, rising by 1% every 110 million years. In three billion years the Sun is expected to be 33% more luminous. The hydrogen fuel at the core will finally be exhausted in five billion years, when the Sun will be 67% more luminous than at present. Thereafter, the Sun will continue to burn hydrogen in a shell surrounding its core until the luminosity reaches 121% above the present value. This marks the end of the Sun's main-sequence lifetime, and thereafter it will pass through the subgiant stage and evolve into a red giant. By this time, the collision of the Milky Way and Andromeda galaxies should be underway. Although this could result in the Solar System being ejected from the newly combined galaxy, it is considered unlikely to have any adverse effect on the Sun or its planets. Climate impact The rate of weathering of silicate minerals will increase as rising temperatures speed up chemical processes. This, in turn, will decrease the level of carbon dioxide in the atmosphere, as reactions with silicate minerals convert carbon dioxide gas into solid carbonates. Within the next 600 million years, the concentration of carbon dioxide will fall below the critical threshold needed to sustain C3 photosynthesis: about 50 parts per million. At this point, trees and forests in their current forms will no longer be able to survive. This decline in plant life is likely to be a long-term decline rather than a sharp drop. Plant groups will likely die one by one well before the 50 parts per million level is reached. The first plants to disappear will be C3 herbaceous plants, followed by deciduous forests, evergreen broad-leaf forests and finally evergreen conifers. However, C4 carbon fixation can continue at much lower concentrations, down to just above 10 parts per million; thus, plants using C4 photosynthesis may be able to survive for at least 0.8 billion years and possibly as long as 1.2 billion years from now, after which rising temperatures will make the biosphere unsustainable. Researchers at Caltech have suggested that once C3 plants die off, the lack of biological production of oxygen and nitrogen will cause a reduction in Earth's atmospheric pressure, which will counteract the temperature rise, and allow enough carbon dioxide to persist for photosynthesis to continue. This would allow life to survive up to 2 billion years from now, at which point water would be the limiting factor. Currently, C4 plants represent about 5% of Earth's plant biomass and 1% of its known plant species. For example, about 50% of all grass species (Poaceae) use the C4 photosynthetic pathway, as do many species in the herbaceous family Amaranthaceae. When the carbon dioxide levels fall to the limit where photosynthesis is barely sustainable, the proportion of carbon dioxide in the atmosphere is expected to oscillate up and down. This will allow land vegetation to flourish each time the level of carbon dioxide rises due to tectonic activity and respiration from animal life; however, the long-term trend is for the plant life on land to die off altogether as most of the remaining carbon in the atmosphere becomes sequestered in the Earth. 
Plants—and, by extension, animals—could survive longer by evolving other strategies such as requiring less carbon dioxide for photosynthetic processes, becoming carnivorous, adapting to desiccation, or associating with fungi. These adaptations are likely to appear near the beginning of the moist greenhouse (see further). The loss of higher plant life will result in the eventual loss of oxygen as well as ozone due to the respiration of animals, chemical reactions in the atmosphere, and volcanic eruptions. Modeling of the decline in oxygenation predicts that it may drop to 1% of the current atmospheric levels by one billion years from now. This decline will result in less attenuation of DNA-damaging UV, as well as the death of animals; the first animals to disappear would be large mammals, followed by small mammals, birds, amphibians and large fish, reptiles and small fish, and finally invertebrates. Before this happens, it is expected that life would concentrate at refugia of lower temperatures such as high elevations where less land surface area is available, thus restricting population sizes. Smaller animals would survive better than larger ones because of lesser oxygen requirements, while birds would fare better than mammals thanks to their ability to travel large distances looking for cooler temperatures. Based on oxygen's half-life in the atmosphere, animal life would last at most 100 million years after the loss of higher plants. Some cyanobacteria and phytoplankton could outlive plants due to their tolerance for carbon dioxide levels as low as 1 ppm, and may survive for around the same time as animals before carbon dioxide becomes too depleted to support any form of photosynthesis. In their work The Life and Death of Planet Earth, authors Peter D. Ward and Donald Brownlee have argued that some form of animal life may continue even after most of the Earth's plant life has disappeared. Ward and Brownlee use fossil evidence from the Burgess Shale in British Columbia, Canada, to determine the climate of the Cambrian Explosion, and use it to predict the climate of the future when rising global temperatures caused by a warming Sun and declining oxygen levels result in the final extinction of animal life. Initially, they expect that some insects, lizards, birds, and small mammals may persist, along with sea life; however, without oxygen replenishment by plant life, they believe that animals would probably die off from asphyxiation within a few million years. Even if sufficient oxygen were to remain in the atmosphere through the persistence of some form of photosynthesis, the steady rise in global temperature would result in a gradual loss of biodiversity. As temperatures rise, the last of animal life will be driven toward the poles, possibly underground. They would become primarily active during the polar night, aestivating during the polar day due to the intense heat. Much of the surface would become a barren desert and life would primarily be found in the oceans. However, due to a decrease in the amount of organic matter entering the oceans from land as well as a decrease in dissolved oxygen, sea life would disappear too, following a similar path to that on Earth's surface. This process would start with the loss of freshwater species and conclude with invertebrates, particularly those that do not depend on living plants such as termites or those near hydrothermal vents such as worms of the genus Riftia. 
As a result of these processes, multicellular life forms may be extinct in about 800 million years, and eukaryotes in 1.3 billion years, leaving only the prokaryotes. Loss of oceans One billion years from now, about 27% of the modern ocean will have been subducted into the mantle. If this process were allowed to continue uninterrupted, it would reach an equilibrium state where 65% of the present day surface reservoir would remain at the surface. Once the solar luminosity is 10% higher than its current value, the average global surface temperature will rise to . The atmosphere will become a "moist greenhouse" leading to a runaway evaporation of the oceans. At this point, models of the Earth's future environment demonstrate that the stratosphere would contain increasing levels of water. These water molecules will be broken down through photodissociation by solar UV, allowing hydrogen to escape the atmosphere. The net result would be a loss of the world's seawater in about 1 to 1.5 billion years from the present, depending on the model. There will be one of two variations of this future warming feedback: the "moist greenhouse" where water vapor dominates the troposphere while water vapor starts to accumulate in the stratosphere (if the oceans evaporate very quickly), and the "runaway greenhouse" where water vapor becomes a dominant component of the atmosphere (if the oceans evaporate too slowly). In this ocean-free era, there would continue to be surface reservoirs as water is steadily released from the deep crust and mantle, which could contain an amount of water equivalent to several times that present in the Earth's oceans. Some water may be retained at the poles and there may be occasional rainstorms, but for the most part, the planet would be a desert with large dunefields covering its equator, and a few salt flats on what was once the ocean floor, similar to the ones in the Atacama Desert in Chile. With no water, plate tectonics would likely stop and the most visible signs of geological activity would be shield volcanoes located above mantle hotspots. In these arid conditions the planet may retain some microbial and possibly even multicellular life. Most of these microbes will be halophiles and life could find refuge in the atmosphere as has been proposed to have happened on Venus. However, the increasingly extreme conditions will likely lead to the extinction of the prokaryotes between 1.6 billion years and 2.8 billion years from now, with the last of them living in residual ponds of water at high latitudes and heights or in caverns with trapped ice. However, underground life could last longer. What proceeds after this depends on the level of tectonic activity. A steady release of carbon dioxide by volcanic eruption could cause the atmosphere to enter a "super-greenhouse" state like that of the planet Venus. But, as stated above, without surface water, plate tectonics would probably come to a halt and most of the carbonates would remain securely buried until the Sun becomes a red giant and its increased luminosity heats the rock to the point of releasing the carbon dioxide. However, as pointed out by Peter Ward and Donald Brownlee in their book The Life and Death of Planet Earth, according to NASA Ames scientist Kevin Zahnle, it is highly possible that plate tectonics may stop long before the loss of the oceans, due to the gradual cooling of the Earth's core, which could happen in just 500 million years. 
This could potentially turn the Earth back into a water world, and might even drown all remaining land life. The loss of the oceans could be delayed until 2 billion years in the future if the atmospheric pressure were to decline. A lower atmospheric pressure would reduce the greenhouse effect, thereby lowering the surface temperature. This could occur if natural processes were to remove the nitrogen from the atmosphere. Studies of organic sediments have shown that at least of nitrogen has been removed from the atmosphere over the past four billion years, which is enough to effectively double the current atmospheric pressure if it were to be released. This rate of removal would be sufficient to counter the effects of increasing solar luminosity for the next two billion years. By 2.8 billion years from now, the surface temperature of the Earth will have reached , even at the poles. At this point, any remaining life will be extinguished due to the extreme conditions. What happens beyond this depends on how much water is left on the surface. If all of the water on Earth has evaporated by this point (via the "moist greenhouse" at ~1 Gyr from now), the planet will remain in this state, with a steady increase in the surface temperature, until the Sun becomes a red giant. If instead some pockets of water remain and evaporate slowly, then in about 3–4 billion years, once the amount of water vapor in the lower atmosphere rises to 40% and the luminosity from the Sun reaches 35–40% more than its present-day value, a "runaway greenhouse" effect will ensue, causing the atmosphere to warm and raising the surface temperature to around . This is sufficient to melt the surface of the planet. However, most of the atmosphere is expected to be retained until the Sun has entered the red giant stage. With the extinction of life, 2.8 billion years from now, it is expected that Earth's biosignatures will disappear, to be replaced by signatures caused by non-biological processes. Red giant stage Once the Sun changes from burning hydrogen within its core to burning hydrogen in a shell around its core, the core will start to contract, and the outer envelope will expand. The total luminosity will steadily increase over the following billion years until it reaches 2,730 times its current luminosity at the age of 12.167 billion years. Most of Earth's atmosphere will be lost to space. Its surface will consist of a lava ocean with floating continents of metals and metal oxides and icebergs of refractory materials, with its surface temperature reaching more than . The Sun will experience more rapid mass loss, with about 33% of its total mass shed with the solar wind. The loss of mass will mean that the orbits of the planets will expand. The orbital distance of Earth will increase to at most 150% of its current value. The most rapid part of the Sun's expansion into a red giant will occur during the final stages, when the Sun will be about 12 billion years old. It is likely to expand to swallow both Mercury and Venus, reaching a maximum radius of . Earth will interact tidally with the Sun's outer atmosphere, which would decrease Earth's orbital radius. Drag from the chromosphere of the Sun would also reduce Earth's orbit. These effects will counterbalance the impact of mass loss by the Sun, and the Sun will likely engulf Earth in about 7.59 billion years from now. The drag from the solar atmosphere may cause the orbit of the Moon to decay. 
Once the orbit of the Moon closes to a distance of , it will cross Earth's Roche limit, meaning that tidal interaction with Earth would break apart the Moon, turning it into a ring system. Most of the orbiting rings will begin to decay, and the debris will impact Earth. Hence, even if the Sun does not swallow the Earth, the planet may be left moonless. The ablation and vaporization caused by Earth's fall on a decaying trajectory towards the Sun may remove Earth's mantle, leaving just the core, which will finally be destroyed after at most 200 years. Earth's sole legacy will be a very slight increase (0.01%) in the solar metallicity following this event. Beyond and ultimate fate After fusing helium in its core to carbon, the Sun will begin to collapse again, evolving into a compact white dwarf star after ejecting its outer atmosphere as a planetary nebula. The predicted final mass is 54% of the present value, most likely consisting primarily of carbon and oxygen. Currently, the Moon is moving away from Earth at a rate of per year. In 50 billion years, if the Earth and Moon are not engulfed by the Sun, they will become tidally locked into a larger, stable orbit, with each showing only one face to the other. Thereafter, the tidal action of the Sun will extract angular momentum from the system, causing the orbit of the Moon to decay and the Earth's rotation to accelerate. In about 65 billion years, it is estimated that the Moon may collide with the Earth, due to the remaining energy of the Earth–Moon system being sapped by the remnant Sun, causing the Moon to slowly move inwards toward the Earth. Beyond this point, the ultimate fate of the Earth (if it survives) depends on what happens over far longer time scales. On a time scale of 10^15 (1 quadrillion) years, the remaining planets in the Solar System will be ejected from the system by close encounters with other stellar remnants, and Earth will continue to orbit through the galaxy for around 10^19 (10 quintillion) years before it is ejected or falls into a supermassive black hole. If Earth is not ejected during a stellar encounter, then its orbit will decay via gravitational radiation until it collides with the Sun in 10^20 (100 quintillion) years. If proton decay can occur and Earth is ejected to intergalactic space, then it will last around 10^38 (100 undecillion) years before evaporating into radiation.
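As a rough cross-check of the impact-rate figures quoted earlier in this article (a mean interval of at least 100 million years between major impacts, and five or six mass extinctions over the roughly 540-million-year Phanerozoic Eon), the short Python sketch below treats major impacts as a Poisson process. The Poisson assumption and the function names are illustrative additions for this check, not part of the cited estimates.

import math

MEAN_INTERVAL_MYR = 100.0   # assumed mean time between major impacts, in millions of years
PHANEROZOIC_MYR = 540.0     # approximate length of the Phanerozoic Eon, in millions of years

expected_events = PHANEROZOIC_MYR / MEAN_INTERVAL_MYR   # about 5.4 expected major impacts

def poisson_pmf(k, lam):
    """Probability of exactly k events for a Poisson process with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

print(f"Expected major impacts over the Phanerozoic: {expected_events:.1f}")
for k in (4, 5, 6, 7):
    print(f"P(exactly {k} major impacts) = {poisson_pmf(k, expected_events):.2f}")

With these inputs the expected count is about 5.4, consistent with the five or six mass extinctions mentioned above.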
Physical sciences
Solar System
Astronomy
21208368
https://en.wikipedia.org/wiki/Pumpkin
Pumpkin
A pumpkin is a cultivated winter squash in the genus Cucurbita. The term is most commonly applied to round, orange-colored squash varieties, but does not possess a scientific definition. It may be used in reference to many different squashes of varied appearance and belonging to multiple species in the Cucurbita genus. The use of the word "pumpkin" is thought to have originated in New England in North America, derived from a word for melon, or a native word for round. The term is sometimes used interchangeably with "squash" or "winter squash", and is commonly used for some cultivars of Cucurbita argyrosperma, Cucurbita ficifolia, Cucurbita maxima, Cucurbita moschata, and Cucurbita pepo. C. pepo pumpkins are among the oldest known domesticated plants, with evidence of their cultivation dating to between 7000 BCE and 5500 BCE. Wild species of Cucurbita and the earliest domesticated species are native to North America (parts of present-day northeastern Mexico and the southern United States), but cultivars are now grown globally for culinary, decorative, and other culturally-specific purposes. The pumpkin's thick shell contains edible seeds and pulp. Pumpkin pie is a traditional part of Thanksgiving meals in Canada and the United States and pumpkins are frequently used as autumnal seasonal decorations and carved as jack-o'-lanterns for decoration around Halloween. Commercially canned pumpkin purée and pie fillings are usually made of different pumpkin varieties from those intended for decorative use. Etymology and terminology According to the Oxford English Dictionary, the English word pumpkin derives from the Ancient Greek word (romanized ), meaning 'melon'. Under this theory, the term transitioned through the Latin word and the Middle French word to the Early Modern English , which was changed to pumpkin by 17th-century English colonists, shortly after encountering pumpkins upon their arrival in what is now the northeastern United States. There is a proposed alternate derivation for pumpkin from the Massachusett word , meaning 'grows forth round'. This term could have been used by the Wampanoag people (who speak the dialect of Massachusett) when introducing pumpkins to English Pilgrims at Plymouth Colony, located in present-day Massachusetts. (The English word squash is derived from a Massachusett word, variously transcribed as , , or, in the closely related Narragansett language, .) Researchers have noted that the term pumpkin and related terms like ayote and calabaza are applied to a range of winter squash with varying size and shape. The term tropical pumpkin is sometimes used for pumpkin cultivars of the species Cucurbita moschata. Description Pumpkin fruits are a type of berry known as a pepo. Characteristics commonly used to define pumpkin include smooth and slightly ribbed skin and deep yellow to orange color, although white, green, and other pumpkin colors also exist. While Cucurbita pepo pumpkins generally weigh between , giant pumpkins can exceed a tonne in mass. Most are varieties of C. maxima that were developed through the efforts of botanical societies and enthusiast farmers. The largest cultivars frequently reach weights of over . In October 2023, the record for heaviest pumpkin was set at 1,246.9 kg (2,749 lbs.). History The oldest evidence of Cucurbita pepo are pumpkin fragments found in Mexico that are dated between 7,000 and 5,500 BC. 
Pumpkins and other squash species, alongside maize and beans, feature in the Three Sisters method of companion planting practiced by many North American indigenous societies. However, larger modern pumpkin cultivars are typically excluded, as their weight may damage the other crops. Within decades after Europeans began colonizing North America, illustrations of pumpkins similar to the modern cultivars Small Sugar pumpkin and Connecticut Field pumpkin were published in Europe. Cultivation Pumpkins are a warm-weather crop that is usually planted by early July in the Northern Hemisphere. Pumpkins require that soil temperatures deep are at least and that the soil holds water well. Pumpkin crops may suffer if there is a lack of water, because of temperatures below , or if grown in soils that become waterlogged. Within these conditions, pumpkins are considered hardy, and even if many leaves and portions of the vine are removed or damaged, the plant can quickly grow secondary vines to replace what was removed. Pumpkins produce both a male and female flower, with fertilization usually performed by bees. In America, pumpkins have historically been pollinated by the native squash bee, Peponapis pruinosa, but that bee has declined, probably partly due to pesticide (imidacloprid) sensitivity. Ground-based bees, such as squash bees and the eastern bumblebee, are better suited to manage the larger pollen particles that pumpkins create. One hive per acre (0.4 hectares, or five hives per 2 hectares) is recommended by the U.S. Department of Agriculture. If there are inadequate bees for pollination, gardeners may have to hand pollinate. Inadequately pollinated pumpkins usually start growing but fail to develop. Production In 2022, world production of pumpkins (including squash and gourds) was 23 million tonnes, with China accounting for 32% of the total. Ukraine, Russia, and the United States each produced about one million tonnes. In the United States As one of the most popular crops in the United States, in 2017 over of pumpkins were produced. The top pumpkin-producing states include Illinois, Indiana, Ohio, Pennsylvania, and California. Pumpkin is the state squash of Texas. According to the Illinois Department of Agriculture, 95 percent of the U.S. crop intended for processing is grown in Illinois. Indeed, 41 percent of the overall pumpkin crop for all uses originates in the state, more than five times that of the nearest competitor, California, whose pumpkin industry is centered in the San Joaquin Valley; and the majority of that comes from five counties in the central part of the state. Nestlé, operating under the brand name Libby's, produces 85 percent of the processed pumpkin in the United States at their plant in Morton, Illinois. In the fall of 2009, rain in Illinois devastated the Libby's pumpkin crop, which, combined with a relatively weak 2008 crop depleting that year's reserves, resulted in a shortage affecting the entire country during the Thanksgiving holiday season. Another shortage, somewhat less severe, affected the 2015 crop. The pumpkin crop in the western United States, which constitutes approximately three to four percent of the national crop, is grown primarily for the organic market. Terry County, Texas, has a substantial pumpkin industry, centered largely on miniature pumpkins. Illinois farmer Sarah Frey is called "the Pumpkin Queen of America" and sells around five million pumpkins annually, predominantly for use as Jack-o-lanterns. 
Nutrition In a amount, raw pumpkin provides of food energy and is an excellent source (20% or more of the Daily Value, DV) of provitamin A beta-carotene and vitamin A (47% DV) (table). Vitamin C is present in moderate content (10% DV), but no other micronutrients are in significant amounts (less than 10% DV, table). Pumpkin is 92% water, 6.5% carbohydrate, 0.1% fat and 1% protein (table). Uses Culinary Most parts of the pumpkin plant are edible, including the fleshy shell, the seeds, the leaves, and the flowers. When ripe, the pumpkin can be boiled, steamed, or roasted. Shell and flesh In North America, pumpkins are part of the traditional autumn harvest, eaten roasted, as mashed pumpkin, and in soups and pumpkin bread. Pumpkin pie is a traditional staple of the Canadian and American Thanksgiving holidays. Pumpkin purée is sometimes prepared and frozen for later use. Flowers In the southwestern United States and Mexico, pumpkin and squash flowers are a popular and widely available food item. They may be used to garnish dishes, or dredged in a batter and then fried in oil. Leaves Pumpkin leaves are also eaten in Zambia, where they are called and are boiled and cooked with groundnut paste as a side dish. Seeds Pumpkin seeds, also known as pepitas, are edible and nutrient-rich. They are about 1.5 cm (0.5 in) long, flat, asymmetrically oval, light green in color and usually covered by a white husk, although some pumpkin varieties produce seeds without them. Pumpkin seeds are a popular snack that can be found hulled or semi-hulled at grocery stores. In a one-ounce serving, pumpkin seeds are a good source of protein, magnesium, copper and zinc. Pumpkin seed oil Pumpkin seed oil is a thick oil pressed from roasted seeds that appears red or green in color. When used for cooking or as a salad dressing, pumpkin seed oil is generally mixed with other oils because of its robust flavor. Pumpkin seed oil contains fatty acids such as oleic acid and alpha-linolenic acid. Animal feed Pumpkin seed meal from Cucurbita maxima and Cucurbita moschata has been demonstrated to improve the nutrition of eggs for human consumption, and Cucurbita pepo seed has successfully been used in place of soybean in chicken feed. Culture Halloween In the United States, the carved pumpkin was first associated with the harvest season in general, long before it became an emblem of Halloween. The practice of carving produce for Halloween originated from an Irish myth about a man named "Stingy Jack". The practice of carving pumpkin jack-o'-lanterns for the Halloween season developed from a traditional practice in Ireland as well as Scotland and other parts of the United Kingdom of carving lanterns from the turnip, mangelwurzel, or swede (rutabaga). These vegetables continue to be popular choices today as carved lanterns in Scotland and Northern Ireland, although the British purchased a million pumpkins for Halloween in 2004, reflecting the spread of pumpkin carving in the United Kingdom. Immigrants to North America began using the native pumpkins for carving, which are both readily available and much larger – making them easier to carve than turnips. Not until 1837 does jack-o'-lantern appear as a term for a carved vegetable lantern, and the association of the carved pumpkin lantern with Halloween is recorded in 1866. The traditional American pumpkin used for jack-o'-lanterns is the Connecticut Field variety. Kentucky Field pumpkin is also among the pumpkin cultivars grown specifically for jack-o'-lantern carving. 
Chunking Pumpkin chunking is a competitive activity in which teams build various mechanical devices designed to throw a pumpkin as far as possible. Catapults, trebuchets, ballistas and air cannons are the most common mechanisms. Pumpkin festivals and competitions Growers of giant pumpkins often compete to grow the most massive pumpkins. Festivals may be dedicated to the pumpkin and these competitions. In the United States, the town of Half Moon Bay, California, holds an annual Art and Pumpkin Festival, including the World Champion Pumpkin Weigh-Off. The record for the world's heaviest pumpkin, , was most recently set in 2023. A festival called Pumpkin Weeks (Kurpitsaviikot) is held every October in Salo, Finland, at which thousands of different-sized pumpkins and carved jack-o'-lanterns are presented to tourists. Folk medicine Pumpkins have been used as folk medicine by Native Americans to treat intestinal worms and urinary ailments, and this Native American remedy was adopted by American doctors in the early nineteenth century as an anthelmintic for the expulsion of worms. In Germany and southeastern Europe, seeds of C. pepo were also used as folk remedies to treat irritable bladder and benign prostatic hyperplasia. In China, C. moschata seeds were also used in traditional Chinese medicine for the treatment of the parasitic disease schistosomiasis and for the expulsion of tape worms. Folklore and fiction There is a connection in folklore and popular culture between pumpkins and the supernatural, such as: The custom of carving jack-o-lanterns from pumpkins derives from folklore about a lost soul wandering the earth. In the fairy tale Cinderella, the fairy godmother turns a pumpkin into a carriage for the title character, but at midnight it reverts to a pumpkin. In some adaptations of Washington Irving's ghost story The Legend of Sleepy Hollow, the headless horseman is said to use a pumpkin as a substitute head. In most folklore the carved pumpkin is meant to scare away evil spirits on All Hallows' Eve (that is, Halloween), when the dead were purported to walk the earth. Cultivars The species and varieties include many economically important cultivars with a variety of different shapes, colors, and flavors that are grown for different purposes. Variety is used here interchangeably with cultivar, but not with species or taxonomic variety.
Biology and health sciences
Cucurbitales
null
25584215
https://en.wikipedia.org/wiki/One-dimensional%20space
One-dimensional space
A one-dimensional space (1D space) is a mathematical space in which location can be specified with a single coordinate. An example is the number line, each point of which is described by a single real number. Any straight line or smooth curve is a one-dimensional space, regardless of the dimension of the ambient space in which the line or curve is embedded. Examples include the circle on a plane, or a parametric space curve. In physical space, a 1D subspace is called a "linear dimension" (rectilinear or curvilinear), with units of length (e.g., metre). In algebraic geometry there are several structures that are one-dimensional spaces but are usually referred to by more specific terms. Any field is a one-dimensional vector space over itself. The projective line over a field K, denoted P¹(K), is a one-dimensional space over K. In particular, if the field K is the complex numbers ℂ, then the complex projective line P¹(ℂ) is one-dimensional with respect to ℂ (but it is sometimes called the Riemann sphere, as it is a model of the sphere, which is two-dimensional with respect to real-number coordinates). For every eigenvector of a linear transformation T on a vector space V, there is a one-dimensional space A ⊂ V generated by the eigenvector such that T(A) ⊆ A; that is, A is an invariant subspace under the action of T. In Lie theory, a one-dimensional subspace of a Lie algebra is mapped to a one-parameter group under the Lie group–Lie algebra correspondence. More generally, a ring is a length-one module over itself. Similarly, the projective line over a ring is a one-dimensional space over the ring. If the ring is an algebra over a field, these spaces are one-dimensional with respect to the algebra, even if the algebra is of higher dimensionality. Coordinate systems in one-dimensional space One-dimensional coordinate systems include the number line.
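To make the eigenvector statement above concrete, the following short derivation (standard linear algebra, with a made-up numerical example) shows why the span of an eigenvector is a one-dimensional invariant subspace.

\[
T(v) = \lambda v, \quad v \neq 0, \qquad A = \operatorname{span}\{v\} = \{\, c\,v : c \in F \,\}.
\]
\[
T(c\,v) = c\,T(v) = c\,\lambda\,v \in A \quad \Longrightarrow \quad T(A) \subseteq A.
\]

For example, for T(x, y) = (2x, 3y) on R^2 with v = (1, 0), the x-axis A = {(c, 0)} is a one-dimensional invariant subspace with eigenvalue λ = 2; when λ ≠ 0 the restriction of T to A is onto, so in that case T(A) = A.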
Mathematics
Geometry: General
null
147845
https://en.wikipedia.org/wiki/Application-specific%20integrated%20circuit
Application-specific integrated circuit
An application-specific integrated circuit (ASIC ) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use, such as a chip designed to run in a digital voice recorder or a high-efficiency video codec. Application-specific standard product chips are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series. ASIC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology, as MOS integrated circuit chips. As feature sizes have shrunk and chip design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs. Field-programmable gate arrays (FPGA) are the modern-day technology improvement on breadboards, meaning that they are not made to be application-specific as opposed to ASICs. Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and devices with low production volume and ASICs for very large production volumes where NRE costs can be amortized across many devices. History Early ASICs used gate array technology. By 1967, Ferranti and Interdesign were manufacturing early bipolar gate arrays. In 1967, Fairchild Semiconductor introduced the Micromatrix family of bipolar diode–transistor logic (DTL) and transistor–transistor logic (TTL) arrays. Complementary metal–oxide–semiconductor (CMOS) technology opened the door to the broad commercialization of gate arrays. The first CMOS gate arrays were developed by Robert Lipp, in 1974 for International Microcircuits, Inc. (IMI). Metal–oxide–semiconductor (MOS) standard-cell technology was introduced by Fairchild and Motorola, under the trade names Micromosaic and Polycell, in the 1970s. This technology was later successfully commercialized by VLSI Technology (founded 1979) and LSI Logic (1981). A successful commercial application of gate array circuitry was found in the low-end 8-bit ZX81 and ZX Spectrum personal computers, introduced in 1981 and 1982. These were used by Sinclair Research (UK) essentially as a low-cost I/O solution aimed at handling the computer's graphics. Customization occurred by varying a metal interconnect mask. Gate arrays had complexities of up to a few thousand gates; this is now called mid-scale integration. Later versions became more generalized, with different base dies customized by both metal and polysilicon layers. Some base dies also include random-access memory (RAM) elements. Standard-cell designs In the mid-1980s, a designer would choose an ASIC manufacturer and implement their design using the design tools available from the manufacturer. 
While third-party design tools were available, there was not an effective link from the third-party design tools to the layout and actual semiconductor process performance characteristics of the various ASIC manufacturers. Most designers used factory-specific tools to complete the implementation of their designs. A solution to this problem, which also yielded a much higher density device, was the implementation of standard cells. Every ASIC manufacturer could create functional blocks with known electrical characteristics, such as propagation delay, capacitance and inductance, that could also be represented in third-party tools. Standard-cell design is the utilization of these functional blocks to achieve very high gate density and good electrical performance. Standard-cell design is intermediate between gate-array and full-custom design in terms of its non-recurring engineering and recurring component costs as well as performance and speed of development (including time to market). By the late 1990s, logic synthesis tools became available. Such tools could compile HDL descriptions into a gate-level netlist. Standard-cell integrated circuits (ICs) are designed in the following conceptual stages, referred to as the electronics design flow, although these stages overlap significantly in practice: Requirements engineering: A team of design engineers starts with a non-formal understanding of the required functions for a new ASIC, usually derived from requirements analysis. Register-transfer level (RTL) design: The design team constructs a description of an ASIC to achieve these goals using a hardware description language. This process is similar to writing a computer program in a high-level language. Functional verification: Suitability for purpose is verified by functional verification. This may include such techniques as logic simulation through test benches, formal verification, emulation, or creating and evaluating an equivalent pure software model, as in Simics. Each verification technique has advantages and disadvantages, and most often several methods are used together for ASIC verification. Unlike most FPGAs, ASICs cannot be reprogrammed once fabricated and therefore ASIC designs that are not completely correct are much more costly, increasing the need for full test coverage. Logic synthesis: Logic synthesis transforms the RTL design into a large collection of lower-level constructs called standard cells. These constructs are taken from a standard-cell library consisting of pre-characterized collections of logic gates performing specific functions. The standard cells are typically specific to the planned manufacturer of the ASIC. The resulting collection of standard cells and the needed electrical connections between them is called a gate-level netlist. Placement: The gate-level netlist is next processed by a placement tool, which places the standard cells onto a region of an integrated circuit die representing the final ASIC. The placement tool attempts to find an optimized placement of the standard cells, subject to a variety of specified constraints. Routing: An electronics routing tool takes the physical placement of the standard cells and uses the netlist to create the electrical connections between them. Since the search space is large, this process will produce a "sufficient" rather than "globally optimal" solution. 
The output is a file which can be used to create a set of photomasks enabling a semiconductor fabrication facility, commonly called a 'fab' or 'foundry', to manufacture physical integrated circuits. Placement and routing are closely interrelated and are collectively called place and route in electronics design. Sign-off: Given the final layout, circuit extraction computes the parasitic resistances and capacitances. In the case of a digital circuit, this will then be further mapped into delay information from which the circuit performance can be estimated, usually by static timing analysis. This and other final tests, such as design rule checking and power analysis, collectively called signoff, are intended to ensure that the device will function correctly over all extremes of the process, voltage and temperature. When this testing is complete, the photomask information is released for chip fabrication. These steps, implemented with a level of skill common in the industry, almost always produce a final device that correctly implements the original design, unless flaws are later introduced by the physical fabrication process. The design steps, also called the design flow, are also common to standard product design. The significant difference is that standard-cell design uses the manufacturer's cell libraries that have been used in potentially hundreds of other design implementations and therefore are of much lower risk than a full-custom design. Standard cells produce a design density that is cost-effective, and they can also integrate IP cores and static random-access memory (SRAM) effectively, unlike gate arrays. Gate-array and semi-custom design Gate array design is a manufacturing method in which diffused layers, each consisting of transistors and other active devices, are predefined and electronics wafers containing such devices are "held in stock" or unconnected prior to the metallization stage of the fabrication process. The physical design process defines the interconnections of these layers for the final device. For most ASIC manufacturers, this consists of between two and nine metal layers, with each layer running perpendicular to the one below it. Non-recurring engineering costs are much lower than for full-custom designs, as photolithographic masks are required only for the metal layers. Production cycles are much shorter, as metallization is a comparatively quick process, thereby accelerating time to market. Gate-array ASICs are always a compromise between rapid design and performance, as mapping a given design onto what a manufacturer held as a stock wafer never gives 100% circuit utilization. Often difficulties in routing the interconnect require migration onto a larger array device, with a consequent increase in the piece-part price. These difficulties are often a result of the layout EDA software used to develop the interconnect. Pure, logic-only gate-array design is rarely implemented by circuit designers today, having been almost entirely replaced by field-programmable devices. The most prominent of such devices are field-programmable gate arrays (FPGAs), which can be programmed by the user and thus offer minimal tooling charges and non-recurring engineering costs, only marginally increased piece-part cost, and comparable performance. Today, gate arrays are evolving into structured ASICs that consist of a large IP core like a CPU, digital signal processor units, peripherals, standard interfaces, integrated memories, SRAM, and a block of reconfigurable, uncommitted logic. 
This shift is largely because ASIC devices are capable of integrating large blocks of system functionality, and systems on a chip (SoCs) require glue logic, communications subsystems (such as networks on chip), peripherals, and other components rather than only functional units and basic interconnection. In their frequent usages in the field, the terms "gate array" and "semi-custom" are synonymous when referring to ASICs. Process engineers more commonly use the term "semi-custom", while "gate-array" is more commonly used by logic (or gate-level) designers. Full-custom design By contrast, full-custom ASIC design defines all the photolithographic layers of the device. Full-custom design is used for both ASIC design and for standard product design. The benefits of full-custom design include reduced area (and therefore recurring component cost), performance improvements, and also the ability to integrate analog components and other pre-designed—and thus fully verified—components, such as microprocessor cores, that form a system on a chip. The disadvantages of full-custom design can include increased manufacturing and design time, increased non-recurring engineering costs, more complexity in the computer-aided design (CAD) and electronic design automation systems, and a much higher skill requirement on the part of the design team. For digital-only designs, however, "standard-cell" cell libraries, together with modern CAD systems, can offer considerable performance/cost benefits with low risk. Automated layout tools are quick and easy to use and also offer the possibility to "hand-tweak" or manually optimize any performance-limiting aspect of the design. This is designed by using basic logic gates, circuits or layout specially for a design. Structured design Structured ASIC design (also referred to as "platform ASIC design") is a relatively new trend in the semiconductor industry, resulting in some variation in its definition. However, the basic premise of a structured ASIC is that both manufacturing cycle time and design cycle time are reduced compared to cell-based ASIC, by virtue of there being pre-defined metal layers (thus reducing manufacturing time) and pre-characterization of what is on the silicon (thus reducing design cycle time). Definition from Foundations of Embedded Systems states that: This is effectively the same definition as a gate array. What distinguishes a structured ASIC from a gate array is that in a gate array, the predefined metal layers serve to make manufacturing turnaround faster. In a structured ASIC, the use of predefined metallization is primarily to reduce cost of the mask sets as well as making the design cycle time significantly shorter. For example, in a cell-based or gate-array design the user must often design power, clock, and test structures themselves. By contrast, these are predefined in most structured ASICs and therefore can save time and expense for the designer compared to gate-array based designs. Likewise, the design tools used for structured ASIC can be substantially lower cost and easier (faster) to use than cell-based tools, because they do not have to perform all the functions that cell-based tools do. In some cases, the structured ASIC vendor requires customized tools for their device (e.g., custom physical synthesis) be used, also allowing for the design to be brought into manufacturing more quickly. 
Cell libraries, IP-based design, hard and soft macros Cell libraries of logical primitives are usually provided by the device manufacturer as part of the service. Although they will incur no additional cost, their release will be covered by the terms of a non-disclosure agreement (NDA) and they will be regarded as intellectual property by the manufacturer. Usually, their physical design will be pre-defined so they could be termed "hard macros". What most engineers understand as "intellectual property" are IP cores, designs purchased from a third-party as sub-components of a larger ASIC. They may be provided in the form of a hardware description language (often termed a "soft macro"), or as a fully routed design that could be printed directly onto an ASIC's mask (often termed a "hard macro"). Many organizations now sell such pre-designed cores – CPUs, Ethernet, USB or telephone interfaces – and larger organizations may have an entire department or division to produce cores for the rest of the organization. The company ARM only sells IP cores, making it a fabless manufacturer. Indeed, the wide range of functions now available in structured ASIC design is a result of the phenomenal improvement in electronics in the late 1990s and early 2000s; as a core takes a lot of time and investment to create, its re-use and further development cuts product cycle times dramatically and creates better products. Additionally, open-source hardware organizations such as OpenCores are collecting free IP cores, paralleling the open-source software movement in hardware design. Soft macros are often process-independent (i.e. they can be fabricated on a wide range of manufacturing processes and different manufacturers). Hard macros are process-limited and usually further design effort must be invested to migrate (port) to a different process or manufacturer. Multi-project wafers Some manufacturers and IC design houses offer multi-project wafer service (MPW) as a method of obtaining low cost prototypes. Often called shuttles, these MPWs, containing several designs, run at regular, scheduled intervals on a "cut and go" basis, usually with limited liability on the part of the manufacturer. The contract involves delivery of bare dies or the assembly and packaging of a handful of devices. The service usually involves the supply of a physical design database (i.e. masking information or pattern generation (PG) tape). The manufacturer is often referred to as a "silicon foundry" due to the low involvement it has in the process. Application-specific standard product An application-specific standard product or ASSP is an integrated circuit that implements a specific function that appeals to a wide market. As opposed to ASICs that combine a collection of functions and are designed by or for one customer, ASSPs are available as off-the-shelf components. ASSPs are used in all industries, from automotive to communications. As a general rule, if you can find a design in a data book, then it is probably not an ASIC, but there are some exceptions. For example, two ICs that might or might not be considered ASICs are a controller chip for a PC and a chip for a modem. Both of these examples are specific to an application (which is typical of an ASIC) but are sold to many different system vendors (which is typical of standard parts). ASICs such as these are sometimes called application-specific standard products (ASSPs). Examples of ASSPs are encoding/decoding chip, Ethernet network interface controller chip, etc.
Technology
Semiconductors
null
147853
https://en.wikipedia.org/wiki/Speed%20of%20sound
Speed of sound
The speed of sound is the distance travelled per unit of time by a sound wave as it propagates through an elastic medium. More simply, the speed of sound is how fast vibrations travel. At 20 °C (68 °F), the speed of sound in air is about 343 m/s (1,125 ft/s), or one kilometre in about 2.9 s, or one mile in about 4.7 s. It depends strongly on temperature as well as the medium through which a sound wave is propagating. At 0 °C (32 °F), the speed of sound in dry air (sea level 14.7 psi) is about 331 m/s (1,086 ft/s). The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in dry air, deviating slightly from ideal behavior. In colloquial speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: typically, sound travels most slowly in gases, faster in liquids, and fastest in solids. For example, while sound travels at 343 m/s in air, it travels at about 1,480 m/s in water (almost 4.3 times as fast) and at about 5,120 m/s in iron (almost 15 times as fast). In an exceptionally stiff material such as diamond, sound travels at 12,000 m/s (39,370 ft/s), about 35 times its speed in air and about the fastest it can travel under normal conditions. In theory, the speed of sound is actually the speed of vibrations. Sound waves in solids are composed of compression waves (just as in gases and liquids) and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds than compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus, and density. The speed of shear waves is determined only by the solid material's shear modulus and density. In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound (in the same medium) is called the object's Mach number. Objects moving at speeds greater than the speed of sound (Mach 1) are said to be traveling at supersonic speeds.
Earth
In Earth's atmosphere, the speed of sound varies greatly, from about 295 m/s at high altitudes to about 355 m/s at high temperatures.
History
Sir Isaac Newton's 1687 Principia includes a computation of the speed of sound in air as 979 feet per second (298 m/s). This is too low by about 15%. The discrepancy is due primarily to neglecting the (then unknown) effect of rapidly fluctuating temperature in a sound wave (in modern terms, sound wave compression and expansion of air is an adiabatic process, not an isothermal process). This error was later rectified by Laplace. During the 17th century there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second). In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. (The Parisian foot was about 325 mm. This is longer than the standard "international foot" in common use today, which was officially defined in 1959 as 304.8 mm, making the speed of sound at 20 °C about 1,055 Parisian feet per second.) Derham used a telescope from the tower of the church of St. Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum.
Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled was calculated. Basic concepts The transmission of sound can be illustrated by using a model consisting of an array of spherical objects interconnected by springs. In real material terms, the spheres represent the material's molecules and the springs represent the bonds between them. Sound passes through the system by compressing and expanding the springs, transmitting the acoustic energy to neighboring spheres. This helps transmit the energy in-turn to the neighboring sphere's springs (bonds), and so on. The speed of sound through the model depends on the stiffness/rigidity of the springs, and the mass of the spheres. As long as the spacing of the spheres remains constant, stiffer springs/bonds transmit energy more quickly, while more massive spheres transmit energy more slowly. In a real material, the stiffness of the springs is known as the "elastic modulus", and the mass corresponds to the material density. Sound will travel more slowly in spongy materials and faster in stiffer ones. Effects like dispersion and reflection can also be understood using this model. Some textbooks mistakenly state that the speed of sound increases with density. This notion is illustrated by presenting data for three materials, such as air, water, and steel and noting that the speed of sound is higher in the denser materials. But the example fails to take into account that the materials have vastly different compressibility, which more than makes up for the differences in density, which would slow wave speeds in the denser materials. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than air, despite enormous differences in compressibility of the two media. The reason is that the greater density of water, which works to slow sound in water relative to the air, nearly makes up for the compressibility differences in the two media. For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because the solids are more difficult to compress than liquids, while liquids, in turn, are more difficult to compress than gases. A practical example can be observed in Edinburgh when the "One o'Clock Gun" is fired at the eastern end of Edinburgh Castle. Standing at the base of the western end of the Castle Rock, the sound of the Gun can be heard through the rock, slightly before it arrives by the air route, partly delayed by the slightly longer route. It is particularly effective if a multi-gun salute such as for "The Queen's Birthday" is being fired. Compression and shear waves In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. 
An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear-deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations. These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first and rocking transverse waves seconds later. The speed of a compression wave in a fluid is determined by the medium's compressibility and density. In solids, the compression waves are analogous to those in fluids, depending on compressibility and density, but with the additional factor of shear modulus which affects compression waves due to off-axis elastic energies which are able to influence effective tension and relaxation in a compression. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density.
Equations
The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas meaning "swiftness". For fluids in general, the speed of sound c is given by the Newton–Laplace equation c = √(Ks/ρ), where Ks is a coefficient of stiffness, the isentropic bulk modulus (or the modulus of bulk elasticity for gases), and ρ is the density. The bulk modulus can be written Ks = ρ(∂p/∂ρ)s, where p is the pressure and the derivative is taken isentropically, that is, at constant entropy s. This is because a sound wave travels so fast that its propagation can be approximated as an adiabatic process, meaning that there isn't enough time, during a pressure cycle of the sound, for significant heat conduction and radiation to occur. Thus, the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material and decreases with an increase in density. For ideal gases, the bulk modulus K is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature. For general equations of state, if classical mechanics is used, the speed of sound c can be derived as follows: Consider the sound wave propagating at speed v through a pipe aligned with the x axis and with a cross-sectional area of A. In time interval dt it moves length dx = v dt. In steady state, the mass flow rate ṁ = ρvA must be the same at the two ends of the tube, therefore the mass flux j = ρv is constant and v dρ = −ρ dv. Per Newton's second law, the pressure-gradient force provides the acceleration: dv/dt = −(1/ρ)(dP/dx). And therefore: dP = −ρv dv = v² dρ, so v² = dP/dρ and the speed of sound is c = √(dP/dρ). If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations. In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2 which is a dispersive medium, and causes dispersion to air at ultrasonic frequencies. In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation.
Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description. Dependence on the properties of the medium The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus), and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors with the addition of a dependence on compressibility. In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect. In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of temperature and molecular structure important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it). Sound propagates faster in low molecular weight gases such as helium than it does in heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas. For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature. At a constant temperature, the gas pressure has no effect on the speed of sound, since the density will increase, and since pressure and density (also proportional to pressure) have equal but opposite effects on the speed of sound, and the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density—just as in liquids—but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio which can be independently derived from temperature and molecular composition (see derivations below). Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas. In non-ideal gas behavior regimen, for which the Van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure. Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%–0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect. Altitude variation and implications for atmospheric acoustics In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent solely upon temperature; see below. 
In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature. Since temperature (and thus the speed of sound) decreases with increasing altitude up to about 11 km, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The decrease of the speed of sound with height is referred to as a negative sound speed gradient. However, there are variations in this trend above about 11 km. In particular, in the stratosphere above about 20 km, the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive speed of sound gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the thermosphere above about 90 km.
Details
Speed of sound in ideal gases and air
For an ideal gas, K (the bulk modulus in equations above, equivalent to C, the coefficient of stiffness in solids) is given by K = γp. Thus, from the Newton–Laplace equation above, the speed of sound in an ideal gas is given by c = √(γp/ρ), where γ is the adiabatic index, also known as the isentropic expansion factor. It is the ratio of the specific heat of a gas at constant pressure to that of a gas at constant volume (γ = Cp/Cv) and arises because a classical sound wave induces an adiabatic compression, in which the heat of the compression does not have enough time to escape the pressure pulse, and thus contributes to the pressure induced by the compression; p is the pressure; ρ is the density. Using the ideal gas law to replace p with nRT/V, and replacing ρ with nM/V, the equation for an ideal gas becomes cideal = √(γRT/M) = √(γkT/m), where cideal is the speed of sound in an ideal gas; R is the molar gas constant; k is the Boltzmann constant; γ (gamma) is the adiabatic index. At room temperature, where thermal energy is fully partitioned into rotation (rotations are fully excited) but quantum effects prevent excitation of vibrational modes, the value is γ = 7/5 = 1.400 for diatomic gases (such as oxygen and nitrogen), according to kinetic theory. Gamma is actually experimentally measured, for air, over a range from 1.3991 to 1.403. Gamma is exactly 5/3 ≈ 1.667 for monatomic gases (such as argon) and it is 4/3 ≈ 1.333 for triatomic molecule gases that, like H2O, are not co-linear (a co-linear triatomic gas such as CO2 is equivalent to a diatomic gas for our purposes here); T is the absolute temperature; M is the molar mass of the gas. The mean molar mass for dry air is about 0.0289645 kg/mol; n is the number of moles; m is the mass of a single molecule. This equation applies only when the sound wave is a small perturbation on the ambient condition and certain other conditions are fulfilled, as noted below. Calculated values for cair have been found to vary slightly from experimentally determined values. Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of γ but was otherwise correct. Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions).
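A short numerical check of the ideal-gas relation above. The constants used (γ = 1.4 and M = 0.0289645 kg/mol for dry air) are standard textbook values, and the linear formula in the loop is only a commonly quoted approximation near 0 °C, not an exact law.

```python
import math

R = 8.314462618        # molar gas constant, J/(mol·K)
GAMMA_AIR = 1.4        # adiabatic index for diatomic gases such as N2 and O2
M_AIR = 0.0289645      # mean molar mass of dry air, kg/mol

def speed_of_sound_ideal_gas(T_kelvin, gamma=GAMMA_AIR, molar_mass=M_AIR):
    """c = sqrt(gamma * R * T / M) for an ideal gas."""
    return math.sqrt(gamma * R * T_kelvin / molar_mass)

for celsius in (0.0, 20.0, 25.0):
    exact = speed_of_sound_ideal_gas(273.15 + celsius)
    # Widely quoted linear approximation for dry air near 0 °C (an assumption
    # for illustration): c ≈ 331.3 m/s + 0.606 (m/s per °C) · θ
    approx = 331.3 + 0.606 * celsius
    print(f"{celsius:5.1f} °C   exact ≈ {exact:6.1f} m/s   linear ≈ {approx:6.1f} m/s")
```

Both expressions agree to within a fraction of a metre per second over ordinary outdoor temperatures, which is why the linear rule of thumb is so common.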
Also, for diatomic gases the use of requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir); but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy-mode have energies that are too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also temperatures considerably below room temperature (see tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon. For air, we introduce the shorthand In addition, we switch to the Celsius temperature , which is useful to calculate air speed in the region near (). Then, for dry air, Substituting numerical values and using the ideal diatomic gas value of , we have Finally, Taylor expansion of the remaining square root in yields A graph comparing results of the two equations is to the right, using the slightly more accurate value of for the speed of sound at . Effects due to wind shear The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. Wind shear of 4 m/(s · km) can produce refraction equal to a typical temperature lapse rate of . Higher values of wind gradient will refract sound downward toward the surface in the downwind direction, eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the fact that sound is carried along by the wind is not important. For sound propagation, the exponential variation of wind speed with height can be defined as follows: where U(h) is the speed of the wind at height h; ζ is the exponential coefficient based on ground surface roughness, typically between 0.08 and 0.52; dU/dH(h) is the expected wind gradient at height h. In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only (six miles) downwind. Tables In the standard atmosphere: T0 is (= = ), giving a theoretical value of (= = = = ). Values ranging from 331.3 to may be found in reference literature, however; T20 is (= = ), giving a value of (= = = = ); T25 is (= = ), giving a value of (= = = = ). In fact, assuming an ideal gas, the speed of sound c depends on temperature and composition only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere—actual conditions may vary. Given normal atmospheric conditions, the temperature, and thus speed of sound, varies with altitude: Effect of frequency and gas composition General physical considerations The medium in which a sound wave is travelling does not always respond adiabatically, and as a result, the speed of sound can vary with frequency. 
The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the sound wave is considerably longer than the mean free path of molecules in a gas. The molecular composition of the gas contributes both as the mass (M) of the molecules, and their heat capacities, and so both have an influence on speed of sound. In general, at the same molecular mass, monatomic gases have slightly higher speed of sound (over 9% higher) because they have a higher γ (...) than diatomics do (). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than deuterium because adiabatic compression heats helium more since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases). In this example we have assumed that temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity). However, vibrational modes simply cause gammas which decrease toward 1, since vibration modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics. Practical application to air By far, the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases. The speed of sound is raised by humidity. The difference between 0% and 100% humidity is about at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature. The dependence on frequency and pressure are normally insignificant in practical applications. In dry air, the speed of sound increases by about as the frequency rises from to . For audible frequencies above it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path. As shown above, the approximate value 1000/3 = 333.33... 
m/s is exact a little below and is a good approximation for all "usual" outside temperatures (in temperate climates, at least), hence the usual rule of thumb to determine how far lightning has struck: count the seconds from the start of the lightning flash to the start of the corresponding roll of thunder and divide by 3: the result is the distance in kilometers to the nearest point of the lightning bolt. Or divide the number of seconds by 5 for an approximate distance in miles. Mach number Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for reasons explained, Mach number is a function of temperature. Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed. Experimental methods A range of different methods exist for the measurement of the speed of sound in air. The earliest reasonably accurate estimate of the speed of sound in air was made by William Derham and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant who would fire a shotgun at a pre-determined time from a conspicuous point some miles away, across the countryside. This could be confirmed by telescope. He then measured the interval between seeing gunsmoke and arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided velocity. Lastly, by making many observations, using a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200–400 metres, and not needing something as loud as a shotgun. Single-shot timing methods The simplest concept is the measurement made using two microphones and a fast recording device such as a digital storage scope. This method uses the following idea. If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured: The distance between the microphones (), called microphone basis. The time of arrival between the signals (delay) reaching the different microphones (). Then . Other methods In these methods, the time measurement has been replaced by a measurement of the inverse of time (frequency). Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup. A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system it is the case that the pipe can be brought to resonance if the length of the air column in the pipe is equal to where n is an integer. 
As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe, it is best to find two or more points of resonance and then measure half a wavelength between these. Here it is the case that v = fλ.
High-precision measurements in air
The effect of impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will, in turn, contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were corrected for temperature in order to report them at 0 °C. The result was 331.45 m/s for dry air at STP, over a range of frequencies.
Non-gaseous media
Speed of sound in solids
Three-dimensional solids
In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities dependent on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by csolid,p = √((K + 4G/3)/ρ) and csolid,s = √(G/ρ), where K is the bulk modulus of the elastic material; G is the shear modulus of the elastic material; E is Young's modulus; ρ is the density; ν is Poisson's ratio. The last quantity is not an independent one, as E = 3K(1 − 2ν) = 2G(1 + ν). The speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only. Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before arrival of waves that produce a side-to-side motion. For example, for a typical steel alloy, K = 170 GPa, G = 80 GPa and ρ = 7,700 kg/m3, yielding a compressional speed csolid,p of about 6,000 m/s. This is in reasonable agreement with csolid,p measured experimentally at about 5,930 m/s for a (possibly different) type of steel. The shear speed csolid,s is estimated at about 3,200 m/s using the same numbers. Speed of sound in semiconductor solids can be very sensitive to the amount of electronic dopant in them.
One-dimensional solids
The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure. In rods whose diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by csolid = √(E/ρ), where E is Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus.
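A brief numerical illustration of these relations; the bulk modulus, shear modulus and density figures below are round, illustrative values of the same order as those commonly quoted for water and steel, not authoritative material data.

```python
import math

def fluid_sound_speed(K, rho):
    """Speed of sound in a fluid: c = sqrt(K / rho), K = bulk modulus."""
    return math.sqrt(K / rho)

def solid_pressure_wave_speed(K, G, rho):
    """Longitudinal (P-wave) speed in a homogeneous 3-D solid: sqrt((K + 4G/3)/rho)."""
    return math.sqrt((K + 4.0 * G / 3.0) / rho)

def solid_shear_wave_speed(G, rho):
    """Shear (S-wave) speed in a solid: sqrt(G / rho)."""
    return math.sqrt(G / rho)

def thin_rod_speed(E, rho):
    """Pressure-wave speed in a long thin rod: sqrt(E / rho), E = Young's modulus."""
    return math.sqrt(E / rho)

# Illustrative constants (assumptions for this sketch, not measured data):
water_K, water_rho = 2.2e9, 1000.0                 # Pa, kg/m^3
steel_K, steel_G, steel_rho = 170e9, 80e9, 7700.0  # Pa, Pa, kg/m^3
# Young's modulus from the isotropic elasticity identity E = 9KG / (3K + G)
steel_E = 9.0 * steel_K * steel_G / (3.0 * steel_K + steel_G)

print(f"water (fluid)   ~{fluid_sound_speed(water_K, water_rho):.0f} m/s")
print(f"steel P-wave    ~{solid_pressure_wave_speed(steel_K, steel_G, steel_rho):.0f} m/s")
print(f"steel S-wave    ~{solid_shear_wave_speed(steel_G, steel_rho):.0f} m/s")
print(f"steel thin rod  ~{thin_rod_speed(steel_E, steel_rho):.0f} m/s")
```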
This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material. Speed of sound in liquids In a fluid, the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces). Hence the speed of sound in a fluid is given by where is the bulk modulus of the fluid. Water In fresh water, sound travels at about at (see the
Physical sciences
Waves
null
147912
https://en.wikipedia.org/wiki/Power%20rule
Power rule
In calculus, the power rule is used to differentiate functions of the form f(x) = x^r, whenever r is a real number. Since differentiation is a linear operation on the space of differentiable functions, polynomials can also be differentiated using this rule. The power rule underlies the Taylor series as it relates a power series with a function's derivatives.
Statement of the power rule
Let f be a function satisfying f(x) = x^r for all x, where r is a real number. Then f'(x) = r x^(r−1). The power rule for integration states that ∫ x^r dx = x^(r+1)/(r+1) + C for any real number r ≠ −1. It can be derived by inverting the power rule for differentiation. In this equation C is any constant.
Proofs
Proof for real exponents
To start, we should choose a working definition of the value of x^r where r is any real number. Although it is feasible to define the value as a limit of rational powers, it is more convenient to work with the exponential function: if x > 0, then x^r = e^(r ln x), where ln is the natural logarithm function. Therefore, applying the chain rule to f(x) = e^(r ln x), we see that f'(x) = (r/x) e^(r ln x) = (r/x) x^r, which simplifies to r x^(r−1), as was required. When x < 0 we may use the same definition with x^r = ((−1)(−x))^r = (−1)^r (−x)^r, where we now have −x > 0. This necessarily leads to the same result. Note that because (−1)^r does not have a conventional definition when r is not a rational number, irrational power functions are not well defined for negative bases. In addition, as rational powers of −1 with even denominators (in lowest terms) are not real numbers, these expressions are only real valued for rational powers with odd denominators (in lowest terms). Finally, whenever the function x^r is differentiable at x = 0, the defining limit for the derivative is lim(h→0) (h^r − 0^r)/h, which yields 0 only when r is a rational number with odd denominator (in lowest terms) and r > 1, and 1 when r = 1. For all other values of r the expression is not well-defined for h < 0, as was covered above, or is not a real number, so the limit does not exist as a real-valued derivative. For the two cases that do exist, the values agree with the value of the existing power rule at 0, so no exception need be made. The exclusion of the expression 0^0 (the case x = 0) from our scheme of exponentiation is due to the fact that the function f(x, y) = x^y has no limit at (0,0), since x^0 approaches 1 as x approaches 0, while 0^y approaches 0 as y approaches 0. Thus, it would be problematic to ascribe any particular value to it, as the value would contradict one of the two cases, dependent on the application. It is traditionally left undefined.
Proofs for integer exponents
Proof by induction (natural numbers)
Let n be a natural number. It is required to prove that d/dx x^n = n x^(n−1). The base case may be when n = 0 or n = 1, depending on how the set of natural numbers is defined. When n = 0, d/dx x^0 = d/dx 1 = 0 = 0·x^(0−1). When n = 1, d/dx x^1 = 1 = 1·x^(1−1). Therefore, the base case holds either way. Suppose the statement holds for some natural number k, i.e. d/dx x^k = k x^(k−1). When n = k + 1, d/dx x^(k+1) = d/dx (x·x^k) = x^k + x·k x^(k−1) = (k + 1) x^k. By the principle of mathematical induction, the statement is true for all natural numbers n.
Proof by binomial theorem (natural number)
Let y = x^n, where n is a natural number. Then dy/dx = lim(h→0) ((x + h)^n − x^n)/h = lim(h→0) (n x^(n−1) + C(n,2) x^(n−2) h + … + h^(n−1)) = n x^(n−1). Since n choose 1 is equal to n, and the rest of the terms all contain h, which goes to 0, the rest of the terms cancel. This proof only works for natural numbers as the binomial theorem only works for natural numbers.
Generalization to negative integer exponents
For a negative integer n, let n = −m so that m is a positive integer. Using the reciprocal rule, d/dx x^n = d/dx (1/x^m) = −(m x^(m−1))/(x^m)² = −m x^(−m−1) = n x^(n−1). In conclusion, for any integer n, d/dx x^n = n x^(n−1).
Generalization to rational exponents
Upon proving that the power rule holds for integer exponents, the rule can be extended to rational exponents.
Proof by chain rule
This proof is composed of two steps that involve the use of the chain rule for differentiation. Let y = x^(1/n), where n is a nonzero natural number. Then y^n = x. By the chain rule, n y^(n−1) (dy/dx) = 1. Solving for dy/dx, dy/dx = 1/(n y^(n−1)) = (1/n) x^(1/n − 1). Thus, the power rule applies for rational exponents of the form 1/n, where n is a nonzero natural number.
This can be generalized to rational exponents of the form p/q by applying the power rule for integer exponents using the chain rule, as shown in the next step. Let y = x^(p/q), where p is an integer and q is a nonzero natural number, so that y = (x^(1/q))^p. By the chain rule, dy/dx = p (x^(1/q))^(p−1) · (1/q) x^(1/q − 1) = (p/q) x^(p/q − 1). From the above results, we can conclude that when r is a rational number, d/dx x^r = r x^(r−1).
Proof by implicit differentiation
A more straightforward generalization of the power rule to rational exponents makes use of implicit differentiation. Let y = x^(p/q), where p is an integer and q is a nonzero natural number, so that y^q = x^p. Then, differentiating both sides of the equation with respect to x, q y^(q−1) (dy/dx) = p x^(p−1). Solving for dy/dx, dy/dx = (p x^(p−1))/(q y^(q−1)). Since y = x^(p/q), applying laws of exponents, dy/dx = (p/q) x^(p−1) x^(−p(q−1)/q) = (p/q) x^(p/q − 1). Thus, letting r = p/q, we can conclude that d/dx x^r = r x^(r−1) when r is a rational number.
History
The power rule for integrals was first demonstrated in a geometric form by Italian mathematician Bonaventura Cavalieri in the early 17th century for all positive integer values of the exponent, and during the mid 17th century for all rational powers by the mathematicians Pierre de Fermat, Evangelista Torricelli, Gilles de Roberval, John Wallis, and Blaise Pascal, each working independently. At the time, they were treatises on determining the area between the graph of a rational power function and the horizontal axis. With hindsight, however, it is considered the first general theorem of calculus to be discovered. The power rule for differentiation was derived by Isaac Newton and Gottfried Wilhelm Leibniz, each independently, for rational power functions in the mid 17th century, who both then used it to derive the power rule for integrals as the inverse operation. This mirrors the conventional way the related theorems are presented in modern basic calculus textbooks, where differentiation rules usually precede integration rules. Although both men stated that their rules, demonstrated only for rational quantities, worked for all real powers, neither sought a proof of such, as at the time the applications of the theory were not concerned with such exotic power functions, and questions of convergence of infinite series were still ambiguous. The unique case of r = −1 was resolved by Flemish Jesuit and mathematician Grégoire de Saint-Vincent and his student Alphonse Antonio de Sarasa in the mid 17th century, who demonstrated that the associated definite integral, representing the area between the rectangular hyperbola and the x-axis, was a logarithmic function, whose base was eventually discovered to be the transcendental number e. The modern notation for the value of this definite integral is ln(x), the natural logarithm.
Generalizations
Complex power functions
If we consider functions of the form f(z) = z^c where c is any complex number and z is a complex number in a slit complex plane that excludes the branch point of 0 and any branch cut connected to it, and we use the conventional multivalued definition z^c = exp(c ln z), then it is straightforward to show that, on each branch of the complex logarithm, the same argument used above yields a similar result: f'(z) = c z^(c−1). In addition, if c is a positive integer, then there is no need for a branch cut: one may define f(0) = 0, or define positive integral complex powers through complex multiplication, and show that f'(z) = c z^(c−1) for all complex z, from the definition of the derivative and the binomial theorem. However, due to the multivalued nature of complex power functions for non-integer exponents, one must be careful to specify the branch of the complex logarithm being used. In addition, no matter which branch is used, if c is not a positive integer, then the function is not differentiable at 0.
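As a simple illustration of the rule in practice, the sketch below compares the power-rule derivative with a numerical finite-difference estimate; the sample exponents, evaluation point and step size are arbitrary choices made for this example.

```python
def power_rule_derivative(x, r):
    """Result given by the power rule: d/dx x^r = r * x^(r - 1)."""
    return r * x ** (r - 1)

def numerical_derivative(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Check a few exponents (positive integer, negative integer, rational, irrational) at x > 0.
for r in (3, -2, 0.5, 3.14159):
    f = lambda x, r=r: x ** r
    x = 2.0
    exact = power_rule_derivative(x, r)
    approx = numerical_derivative(f, x)
    print(f"r = {r:>8}:  power rule {exact:.6f}   finite difference {approx:.6f}")
```

The two columns agree to several decimal places, as expected for a smooth function away from x = 0.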
Mathematics
Differential calculus
null
147918
https://en.wikipedia.org/wiki/Industrial%20robot
Industrial robot
An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes. Typical applications of robots include welding, painting, assembly, disassembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing; all accomplished with high endurance, speed, and precision. They can assist in material handling. In the year 2023, an estimated 4,281,585 industrial robots were in operation worldwide according to International Federation of Robotics (IFR). Types and features There are six types of industrial robots. Articulated robots Articulated robots are the most common industrial robots. They look like a human arm, which is why they are also called robotic arm or manipulator arm. Their articulations with several degrees of freedom allow the articulated arms a wide range of movements. Autonomous robot An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots environment were known as Elmer and Elsie, which were constructed in the late 1940s by W. Grey Walter. They were the first robots in history that were programmed to "think" the way biological brains do and meant to have free will. Elmer and Elsie were often labeled as tortoises because of how they were shaped and the manner in which they moved. They were capable of phototaxis which is the movement that occurs in response to light stimulus. Cartesian coordinate robots Cartesian robots, also called rectilinear, gantry robots, and x-y-z robots have three prismatic joints for the movement of the tool and three rotary joints for its orientation in space. To be able to move and orient the effector organ in all directions, such a robot needs 6 axes (or degrees of freedom). In a 2-dimensional environment, three axes are sufficient, two for displacement and one for orientation. Cylindrical coordinate robots The cylindrical coordinate robots are characterized by their rotary joint at the base and at least one prismatic joint connecting its links. They can move vertically and horizontally by sliding. The compact effector design allows the robot to reach tight work-spaces without any loss of speed. Spherical coordinate robots Spherical coordinate robots only have rotary joints. They are one of the first robots to have been used in industrial applications. They are commonly used for machine tending in die-casting, plastic injection and extrusion, and for welding. SCARA robots SCARA is an acronym for Selective Compliance Assembly Robot Arm. SCARA robots are recognized by their two parallel joints which provide movement in the X-Y plane. Rotating shafts are positioned vertically at the effector. SCARA robots are used for jobs that require precise lateral movements. They are ideal for assembly applications. Delta robots Delta robots are also referred to as parallel link robots. They consist of parallel links connected to a common base. Delta robots are particularly useful for direct control tasks and high maneuvering operations (such as quick pick-and-place tasks). Delta robots take advantage of four bar or parallelogram linkage systems. Furthermore, industrial robots can have a serial or parallel architecture. Serial manipulators Serial architectures a.k.a. serial manipulators are very common industrial robots; they are designed as a series of links connected by motor-actuated joints that extend from a base to an end-effector. 
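The relationship between joint angles and end-effector position for such a serial arm can be illustrated with a minimal forward-kinematics sketch; the two-link planar geometry, link lengths and joint angles below are hypothetical example values, far simpler than a real six-axis industrial arm.

```python
import math

def forward_kinematics_2link(theta1, theta2, l1=0.4, l2=0.3):
    """End-effector (x, y) of a planar two-link serial arm.

    theta1, theta2: joint angles in radians (theta2 measured relative to link 1)
    l1, l2: link lengths in metres (illustrative values only)
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Sweep the elbow joint and watch the reachable position change.
for deg in (0, 45, 90, 135):
    x, y = forward_kinematics_2link(math.radians(30), math.radians(deg))
    print(f"theta2 = {deg:3d} deg  ->  end effector at ({x:.3f} m, {y:.3f} m)")
```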
SCARA, Stanford manipulators are typical examples of this category. Parallel architecture A parallel manipulator is designed so that each chain is usually short, simple and can thus be rigid against unwanted movement, compared to a serial manipulator. Errors in one chain's positioning are averaged in conjunction with the others, rather than being cumulative. Each actuator must still move within its own degree of freedom, as for a serial robot; however in the parallel robot the off-axis flexibility of a joint is also constrained by the effect of the other chains. It is this closed-loop stiffness that makes the overall parallel manipulator stiff relative to its components, unlike the serial chain that becomes progressively less rigid with more components. Lower mobility parallel manipulators and concomitant motion A full parallel manipulator can move an object with up to 6 degrees of freedom (DoF), determined by 3 translation 3T and 3 rotation 3R coordinates for full 3T3R mobility. However, when a manipulation task requires less than 6 DoF, the use of lower mobility manipulators, with fewer than 6 DoF, may bring advantages in terms of simpler architecture, easier control, faster motion and lower cost. For example, the 3 DoF Delta robot has lower 3T mobility and has proven to be very successful for rapid pick-and-place translational positioning applications. The workspace of lower mobility manipulators may be decomposed into 'motion' and 'constraint' subspaces. For example, 3 position coordinates constitute the motion subspace of the 3 DoF Delta robot and the 3 orientation coordinates are in the constraint subspace. The motion subspace of lower mobility manipulators may be further decomposed into independent (desired) and dependent (concomitant) subspaces: consisting of 'concomitant' or 'parasitic' motion which is undesired motion of the manipulator. The debilitating effects of concomitant motion should be mitigated or eliminated in the successful design of lower mobility manipulators. For example, the Delta robot does not have parasitic motion since its end effector does not rotate. Autonomy Robots exhibit varying degrees of autonomy. Some robots are programmed to faithfully carry out specific actions over and over again (repetitive actions) without variation and with a high degree of accuracy. These actions are determined by programmed routines that specify the direction, acceleration, velocity, deceleration, and distance of a series of coordinated motions Other robots are much more flexible as to the orientation of the object on which they are operating or even the task that has to be performed on the object itself, which the robot may even need to identify. For example, for more precise guidance, robots often contain machine vision sub-systems acting as their visual sensors, linked to powerful computers or controllers. Artificial intelligence is becoming an increasingly important factor in the modern industrial robot. History The earliest known industrial robot, conforming to the ISO definition was completed by "Bill" Griffith P. Taylor in 1937 and published in Meccano Magazine, March 1938. The crane-like device was built almost entirely using Meccano parts, and powered by a single electric motor. Five axes of movement were possible, including grab and grab rotation. Automation was achieved using punched paper tape to energise solenoids, which would facilitate the movement of the crane's control levers. The robot could stack wooden blocks in pre-programmed patterns. 
The number of motor revolutions required for each desired movement was first plotted on graph paper. This information was then transferred to the paper tape, which was also driven by the robot's single motor. Chris Shute built a complete replica of the robot in 1997. George Devol applied for the first robotics patents in 1954 (granted in 1961). The first company to produce a robot was Unimation, founded by Devol and Joseph F. Engelberger in 1956. Unimation robots were also called programmable transfer machines since their main use at first was to transfer objects from one point to another, less than a dozen feet or so apart. They used hydraulic actuators and were programmed in joint coordinates, i.e. the angles of the various joints were stored during a teaching phase and replayed in operation. They were accurate to within 1/10,000 of an inch (note: although accuracy is not an appropriate measure for robots, usually evaluated in terms of repeatability - see later). Unimation later licensed their technology to Kawasaki Heavy Industries and GKN, manufacturing Unimates in Japan and England respectively. For some time, Unimation's only competitor was Cincinnati Milacron Inc. of Ohio. This changed radically in the late 1970s when several big Japanese conglomerates began producing similar industrial robots. In 1969 Victor Scheinman at Stanford University invented the Stanford arm, an all-electric, 6-axis articulated robot designed to permit an arm solution. This allowed it accurately to follow arbitrary paths in space and widened the potential use of the robot to more sophisticated applications such as assembly and welding. Scheinman then designed a second arm for the MIT AI Lab, called the "MIT arm." Scheinman, after receiving a fellowship from Unimation to develop his designs, sold those designs to Unimation who further developed them with support from General Motors and later marketed it as the Programmable Universal Machine for Assembly (PUMA). Industrial robotics took off quite quickly in Europe, with both ABB Robotics and KUKA Robotics bringing robots to the market in 1973. ABB Robotics (formerly ASEA) introduced IRB 6, among the world's first commercially available all electric micro-processor controlled robot. The first two IRB 6 robots were sold to Magnusson in Sweden for grinding and polishing pipe bends and were installed in production in January 1974. Also in 1973 KUKA Robotics built its first robot, known as FAMULUS, also one of the first articulated robots to have six electromechanically driven axes. Interest in robotics increased in the late 1970s and many US companies entered the field, including large firms like General Electric, and General Motors (which formed joint venture FANUC Robotics with FANUC LTD of Japan). U.S. startup companies included Automatix and Adept Technology, Inc. At the height of the robot boom in 1984, Unimation was acquired by Westinghouse Electric Corporation for 107 million U.S. dollars. Westinghouse sold Unimation to Stäubli Faverges SCA of France in 1988, which is still making articulated robots for general industrial and cleanroom applications and even bought the robotic division of Bosch in late 2004. Only a few non-Japanese companies ultimately managed to survive in this market, the major ones being: Adept Technology, Stäubli, the Swedish-Swiss company ABB Asea Brown Boveri, the German company KUKA Robotics and the Italian company Comau. 
Technical description Defining parameters Number of axes – two axes are required to reach any point in a plane; three axes are required to reach any point in space. To fully control the orientation of the end of the arm(i.e. the wrist) three more axes (yaw, pitch, and roll) are required. Some designs (e.g. the SCARA robot) trade limitations in motion possibilities for cost, speed, and accuracy. Degrees of freedom – this is usually the same as the number of axes. Working envelope – the region of space a robot can reach. Kinematics – the actual arrangement of rigid members and joints in the robot, which determines the robot's possible motions. Classes of robot kinematics include articulated, cartesian, parallel and SCARA. Carrying capacity or payload – how much weight a robot can lift. Speed – how fast the robot can position the end of its arm. This may be defined in terms of the angular or linear speed of each axis or as a compound speed i.e. the speed of the end of the arm when all axes are moving. Acceleration – how quickly an axis can accelerate. Since this is a limiting factor a robot may not be able to reach its specified maximum speed for movements over a short distance or a complex path requiring frequent changes of direction. Accuracy – how closely a robot can reach a commanded position. When the absolute position of the robot is measured and compared to the commanded position the error is a measure of accuracy. Accuracy can be improved with external sensing for example a vision system or Infra-Red. See robot calibration. Accuracy can vary with speed and position within the working envelope and with payload (see compliance). Repeatability – how well the robot will return to a programmed position. This is not the same as accuracy. It may be that when told to go to a certain X-Y-Z position that it gets only to within 1 mm of that position. This would be its accuracy which may be improved by calibration. But if that position is taught into controller memory and each time it is sent there it returns to within 0.1mm of the taught position then the repeatability will be within 0.1mm. Accuracy and repeatability are different measures. Repeatability is usually the most important criterion for a robot and is similar to the concept of 'precision' in measurement—see accuracy and precision. ISO 9283 sets out a method whereby both accuracy and repeatability can be measured. Typically a robot is sent to a taught position a number of times and the error is measured at each return to the position after visiting 4 other positions. Repeatability is then quantified using the standard deviation of those samples in all three dimensions. A typical robot can, of course make a positional error exceeding that and that could be a problem for the process. Moreover, the repeatability is different in different parts of the working envelope and also changes with speed and payload. ISO 9283 specifies that accuracy and repeatability should be measured at maximum speed and at maximum payload. But this results in pessimistic values whereas the robot could be much more accurate and repeatable at light loads and speeds. Repeatability in an industrial process is also subject to the accuracy of the end effector, for example a gripper, and even to the design of the 'fingers' that match the gripper to the object being grasped. For example, if a robot picks a screw by its head, the screw could be at a random angle. A subsequent attempt to insert the screw into a hole could easily fail. 
These and similar scenarios can be improved with 'lead-ins' e.g. by making the entrance to the hole tapered. Motion control – for some applications, such as simple pick-and-place assembly, the robot need merely return repeatably to a limited number of pre-taught positions. For more sophisticated applications, such as welding and finishing (spray painting), motion must be continuously controlled to follow a path in space, with controlled orientation and velocity. Power source – some robots use electric motors, others use hydraulic actuators. The former are faster, the latter are stronger and advantageous in applications such as spray painting, where a spark could set off an explosion; however, low internal air-pressurisation of the arm can prevent ingress of flammable vapours as well as other contaminants. Nowadays, it is highly unlikely to see any hydraulic robots in the market. Additional sealings, brushless electric motors and spark-proof protection eased the construction of units that are able to work in the environment with an explosive atmosphere. Drive – some robots connect electric motors to the joints via gears; others connect the motor to the joint directly (direct drive). Using gears results in measurable 'backlash' which is free movement in an axis. Smaller robot arms frequently employ high speed, low torque DC motors, which generally require high gearing ratios; this has the disadvantage of backlash. In such cases the harmonic drive is often used. Compliance - this is a measure of the amount in angle or distance that a robot axis will move when a force is applied to it. Because of compliance when a robot goes to a position carrying its maximum payload it will be at a position slightly lower than when it is carrying no payload. Compliance can also be responsible for overshoot when carrying high payloads in which case acceleration would need to be reduced. Robot programming and interfaces The setup or programming of motions and sequences for an industrial robot is typically taught by linking the robot controller to a laptop, desktop computer or (internal or Internet) network. A robot and a collection of machines or peripherals is referred to as a workcell, or cell. A typical cell might contain a parts feeder, a molding machine and a robot. The various machines are 'integrated' and controlled by a single computer or PLC. How the robot interacts with other machines in the cell must be programmed, both with regard to their positions in the cell and synchronizing with them. Software: The computer is installed with corresponding interface software. The use of a computer greatly simplifies the programming process. Specialized robot software is run either in the robot controller or in the computer or both depending on the system design. There are two basic entities that need to be taught (or programmed): positional data and procedure. For example, in a task to move a screw from a feeder to a hole the positions of the feeder and the hole must first be taught or programmed. Secondly the procedure to get the screw from the feeder to the hole must be programmed along with any I/O involved, for example a signal to indicate when the screw is in the feeder ready to be picked up. The purpose of the robot software is to facilitate both these programming tasks. Teaching the robot positions may be achieved a number of ways: Positional commands The robot can be directed to the required position using a GUI or text based commands in which the required X-Y-Z position may be specified and edited. 
Teach pendant: Robot positions can be taught via a teach pendant. This is a handheld control and programming unit. The common features of such units are the ability to manually send the robot to a desired position, or "inch" or "jog" to adjust a position. They also have a means to change the speed, since a low speed is usually required for careful positioning or while test-running through a new or modified routine. A large emergency stop button is usually included as well. Typically, once the robot has been programmed, there is no more use for the teach pendant. All teach pendants are equipped with a 3-position deadman switch. In the manual mode, it allows the robot to move only when it is in the middle position (partially pressed). If it is fully pressed in or completely released, the robot stops. This principle of operation allows natural reflexes to be used to increase safety. Lead-by-the-nose: this is a technique offered by many robot manufacturers. In this method, one user holds the robot's manipulator, while another person enters a command which de-energizes the robot, causing it to go limp. The user then moves the robot by hand to the required positions and/or along a required path while the software logs these positions into memory. The program can later run the robot to these positions or along the taught path. This technique is popular for tasks such as paint spraying. Offline programming is where the entire cell, the robot and all the machines or instruments in the workspace are mapped graphically. The robot can then be moved on screen and the process simulated. A robotics simulator is used to create embedded applications for a robot, without depending on the physical operation of the robot arm and end effector. The advantage of robotics simulation is that it saves time in the design of robotics applications. It can also increase the level of safety associated with robotic equipment, since various "what if" scenarios can be tried and tested before the system is activated.[8] Robot simulation software provides a platform to teach, test, run, and debug programs that have been written in a variety of programming languages. Robot simulation tools allow robotics programs to be conveniently written and debugged off-line, with the final version of the program tested on an actual robot. The ability to preview the behavior of a robotic system in a virtual world allows a variety of mechanisms, devices, configurations and controllers to be tried and tested before being applied to a "real world" system. Robotics simulators can provide real-time computing of the simulated motion of an industrial robot using both geometric modeling and kinematics modeling. Manufacturer-independent robot programming tools are a relatively new but flexible way to program robot applications. Using a visual programming language, the programming is done via drag and drop of predefined templates/building blocks. They often combine simulation runs to evaluate feasibility with offline programming. If the system is able to compile and upload native robot code to the robot controller, the user no longer has to learn each manufacturer's proprietary language. Therefore, this approach can be an important step toward standardizing programming methods. Others: In addition, machine operators often use user interface devices, typically touchscreen units, which serve as the operator control panel.
The operator can switch from program to program, make adjustments within a program and also operate a host of peripheral devices that may be integrated within the same robotic system. These include end effectors, feeders that supply components to the robot, conveyor belts, emergency stop controls, machine vision systems, safety interlock systems, barcode printers and an almost infinite array of other industrial devices which are accessed and controlled via the operator control panel. The teach pendant or PC is usually disconnected after programming and the robot then runs on the program that has been installed in its controller. However a computer is often used to 'supervise' the robot and any peripherals, or to provide additional storage for access to numerous complex paths and routines. End-of-arm tooling The most essential robot peripheral is the end effector, or end-of-arm-tooling (EOAT). Common examples of end effectors include welding devices (such as MIG-welding guns, spot-welders, etc.), spray guns and also grinding and deburring devices (such as pneumatic disk or belt grinders, burrs, etc.), and grippers (devices that can grasp an object, usually electromechanical or pneumatic). Other common means of picking up objects is by vacuum or magnets. End effectors are frequently highly complex, made to match the handled product and often capable of picking up an array of products at one time. They may utilize various sensors to aid the robot system in locating, handling, and positioning products. Controlling movement For a given robot the only parameters necessary to completely locate the end effector (gripper, welding torch, etc.) of the robot are the angles of each of the joints or displacements of the linear axes (or combinations of the two for robot formats such as SCARA). However, there are many different ways to define the points. The most common and most convenient way of defining a point is to specify a Cartesian coordinate for it, i.e. the position of the 'end effector' in mm in the X, Y and Z directions relative to the robot's origin. In addition, depending on the types of joints a particular robot may have, the orientation of the end effector in yaw, pitch, and roll and the location of the tool point relative to the robot's faceplate must also be specified. For a jointed arm these coordinates must be converted to joint angles by the robot controller and such conversions are known as Cartesian Transformations which may need to be performed iteratively or recursively for a multiple axis robot. The mathematics of the relationship between joint angles and actual spatial coordinates is called kinematics. See robot control Positioning by Cartesian coordinates may be done by entering the coordinates into the system or by using a teach pendant which moves the robot in X-Y-Z directions. It is much easier for a human operator to visualize motions up/down, left/right, etc. than to move each joint one at a time. When the desired position is reached it is then defined in some way particular to the robot software in use, e.g. P1 - P5 below. Typical programming Most articulated robots perform by storing a series of positions in memory, and moving to them at various times in their programming sequence. 
For example, a robot which is moving items from one place (bin A) to another (bin B) might have a simple 'pick and place' program similar to the following: Define points P1–P5: Safely above workpiece (defined as P1) 10 cm Above bin A (defined as P2) At position to take part from bin A (defined as P3) 10 cm Above bin B (defined as P4) At position to take part from bin B. (defined as P5) Define program: Move to P1 Move to P2 Move to P3 Close gripper Move to P2 Move to P4 Move to P5 Open gripper Move to P4 Move to P1 and finish For examples of how this would look in popular robot languages see industrial robot programming. Singularities The American National Standard for Industrial Robots and Robot Systems — Safety Requirements (ANSI/RIA R15.06-1999) defines a singularity as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities." It is most common in robot arms that utilize a "triple-roll wrist". This is a wrist about which the three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point. An example of a wrist singularity is when the path through which the robot is traveling causes the first and third axes of the robot's wrist (i.e. robot's axes 4 and 6) to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. Another common term for this singularity is a "wrist flip". The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process. Some industrial robot manufacturers have attempted to side-step the situation by slightly altering the robot's path to prevent this condition. Another method is to slow the robot's travel speed, thus reducing the speed required for the wrist to make the transition. The ANSI/RIA has mandated that robot manufacturers shall make the user aware of singularities if they occur while the system is being manually manipulated. A second type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist center lies on a cylinder that is centered about axis 1 and with radius equal to the distance between axes 1 and 4. This is called a shoulder singularity. Some robot manufacturers also mention alignment singularities, where axes 1 and 6 become coincident. This is simply a sub-case of shoulder singularities. When the robot passes close to a shoulder singularity, joint 1 spins very fast. The third and last type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist's center lies in the same plane as axes 2 and 3. Singularities are closely related to the phenomena of gimbal lock, which has a similar root cause of axes becoming lined up. Market structure According to the International Federation of Robotics (IFR) study World Robotics 2024, there were about 4,281,585 operational industrial robots by the end of 2023. For the year 2018 the IFR estimates the worldwide sales of industrial robots with US$16.5 billion. Including the cost of software, peripherals and systems engineering, the annual turnover for robot systems is estimated to be US$48.0 billion in 2018. China is the largest industrial robot market with 154,032 units sold in 2018. China had the largest operational stock of industrial robots, with 649,447 at the end of 2018. The United States industrial robot-makers shipped 35,880 robot to factories in the US in 2018 and this was 7% more than in 2017. 
The biggest customer of industrial robots is the automotive industry, with a 30% market share, followed by the electrical/electronics industry with 25%, the metal and machinery industry with 10%, the rubber and plastics industry with 5%, and the food industry with 5%. In the textiles, apparel and leather industry, 1,580 units are operational. Estimated worldwide annual supply of industrial robots (in units): Health and safety The International Federation of Robotics has predicted a worldwide increase in adoption of industrial robots, estimating 1.7 million new robot installations in factories worldwide by 2020 [IFR 2017]. Rapid advances in automation technologies (e.g. fixed robots, collaborative and mobile robots, and exoskeletons) have the potential to improve work conditions but also to introduce workplace hazards in manufacturing workplaces. Despite the lack of occupational surveillance data on injuries associated specifically with robots, researchers from the US National Institute for Occupational Safety and Health (NIOSH) identified 61 robot-related deaths between 1992 and 2015 using keyword searches of the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries research database (see info from the Center for Occupational Robotics Research). Using data from the Bureau of Labor Statistics, NIOSH and its state partners have investigated 4 robot-related fatalities under the Fatality Assessment and Control Evaluation Program. In addition, the Occupational Safety and Health Administration (OSHA) has investigated dozens of robot-related deaths and injuries, which can be reviewed on the OSHA Accident Search page. Injuries and fatalities could increase over time as growing numbers of collaborative and co-existing robots, powered exoskeletons, and autonomous vehicles are introduced into the work environment. Safety standards are being developed by the Robotic Industries Association (RIA) in conjunction with the American National Standards Institute (ANSI). On October 5, 2017, OSHA, NIOSH and RIA signed an alliance to work together to enhance technical expertise, identify and help address potential workplace hazards associated with traditional industrial robots and the emerging technology of human-robot collaboration installations and systems, and help identify needed research to reduce workplace hazards. On October 16, NIOSH launched the Center for Occupational Robotics Research to "provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and wellbeing." So far, the research needs identified by NIOSH and its partners include: tracking and preventing injuries and fatalities, intervention and dissemination strategies to promote safe machine control and maintenance procedures, and translating effective evidence-based interventions into workplace practice.
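The "Controlling movement" discussion above notes that Cartesian targets such as P1–P5 must be converted into joint angles by the controller. The following is a minimal sketch of that idea for a hypothetical two-link planar arm; the link lengths, function names and elbow convention are illustrative assumptions, and a real six-axis arm requires the full Cartesian transformation, including wrist orientation and tool offset, described in the text.

import math

def inverse_kinematics_2link(x, y, l1=0.4, l2=0.3, elbow_up=True):
    """Joint angles (radians) placing the tip of a two-link planar arm at (x, y).

    l1 and l2 are illustrative link lengths in metres; this is only a sketch of
    the Cartesian-to-joint conversion, not an industrial controller algorithm.
    """
    r2 = x * x + y * y
    # Law of cosines for the elbow joint.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target point is outside the working envelope")
    theta2 = math.acos(c2) if elbow_up else -math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward_kinematics_2link(theta1, theta2, l1=0.4, l2=0.3):
    """Tip position for the same arm; useful as a round-trip check."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Round-trip check: a commanded Cartesian point maps to joint angles and back.
t1, t2 = inverse_kinematics_2link(0.5, 0.2)
print(forward_kinematics_2link(t1, t2))   # approximately (0.5, 0.2)

Industrial controllers perform an equivalent, and for six axes usually iterative, conversion each time a Cartesian position is commanded, which is why points taught in X-Y-Z coordinates can be replayed regardless of how the individual joints must move.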
Technology
Basics_6
null
147923
https://en.wikipedia.org/wiki/Syringe
Syringe
A syringe is a simple reciprocating pump consisting of a plunger (though in modern syringes, it is actually a piston) that fits tightly within a cylindrical tube called a barrel. The plunger can be linearly pulled and pushed along the inside of the tube, allowing the syringe to take in and expel liquid or gas through a discharge orifice at the front (open) end of the tube. The open end of the syringe may be fitted with a hypodermic needle, a nozzle or tubing to direct the flow into and out of the barrel. Syringes are frequently used in clinical medicine to administer injections, infuse intravenous therapy into the bloodstream, apply compounds such as glue or lubricant, and draw/measure liquids. There are also prefilled syringes (disposable syringes marketed with liquid inside). The word "syringe" is derived from the Greek σῦριγξ (syrinx, meaning "Pan flute", "tube"). Medical syringes Medical syringes include disposable and safety syringes, injection pens, needleless injectors, insulin pumps, and specialty needles. Hypodermic syringes are used with hypodermic needles to inject liquid or gases into body tissues, or to remove from the body. Injecting of air into a blood vessel is hazardous, as it may cause an air embolism; preventing embolisms by removing air from the syringe is one of the reasons for the familiar image of holding a hypodermic syringe pointing upward, tapping it, and expelling a small amount of liquid before an injection into the bloodstream. The barrel of a syringe is made of plastic or glass, usually has graduated marks indicating the volume of fluid in the syringe, and is nearly always transparent. Glass syringes may be sterilized in an autoclave. Plastic syringes can be constructed as either two-part or three-part designs. A three-part syringe contains a plastic plunger/piston with a rubber tip to create a seal between the piston and the barrel, where a two-part syringe is manufactured to create a perfect fit between the plastic plunger and the barrel to create the seal without the need for a separate synthetic rubber piston. Two-part syringes have been traditionally used in European countries to prevent introduction of additional materials such as silicone oil needed for lubricating three-part plungers. Most modern medical syringes are plastic because they are cheap enough to dispose of after being used only once, reducing the risk of spreading blood-borne diseases. Reuse of needles and syringes has caused spread of diseases, especially HIV and hepatitis, among intravenous drug users. Syringes are also commonly reused by diabetics, as they can go through several in a day with multiple daily insulin injections, which becomes an affordability issue for many. Even though the syringe and needle are only used by a single person, this practice is still unsafe as it can introduce bacteria from the skin into the bloodstream and cause serious and sometimes lethal infections. In medical settings, single-use needles and syringes effectively reduce the risk of cross-contamination. Medical syringes are sometimes used without a needle for orally administering liquid medicines to young children or animals, or milk to small young animals, because the dose can be measured accurately and it is easier to squirt the medicine into the subject's mouth instead of coaxing the subject to drink out of a measuring spoon. Tip designs Syringes come with a number of designs for the area in which the blade locks to the syringe body. 
Perhaps the most well known of these is the Luer lock, which simply twists the two together. Bodies featuring a small, plain connection are known as slip tips and are useful for when the syringe is being connected to something not featuring a screw lock mechanism. Similar to this is the catheter tip, which is essentially a slip tip but longer and tapered, making it good for pushing into things where there the plastic taper can form a tight seal. These can also be used for rinsing out wounds or large abscesses in veterinary use. There is also an eccentric tip, where the nozzle at the end of the syringe is not in the centre of the syringe but at the side. This causes the blade attached to the syringe to lie almost in line with the walls of the syringe itself and they are used when the blade needs to get very close to parallel with the skin (when injecting into a surface vein or artery for example). Standard U-100 insulin syringes Syringes for insulin users are designed for standard U-100 insulin. The dilution of insulin is such that 1 mL of insulin fluid has 100 standard "units" of insulin. A typical insulin vial may contain 10 mL, for 1000 units. Insulin syringes are made specifically for a patient to inject themselves, and have features to assist this purpose when compared to a syringe for use by a healthcare professional: shorter needles, as insulin injections are subcutaneous (under the skin) rather than intramuscular, finer gauge needles, for less pain, markings in insulin units to simplify drawing a measured dose of insulin, and low dead space to reduce complications caused by improper drawing order of different insulin strengths. Multishot needle syringes There are needle syringes designed to reload from a built-in tank (container) after each injection, so they can make several or many injections on a filling. These are not used much in human medicine because of the risk of cross-infection via the needle. An exception is the personal insulin autoinjector used by diabetic patients and in dual-chambered syringe designs intended to deliver a prefilled saline flush solution after the medication. Venom extraction syringes Venom extraction syringes are different from standard syringes, because they usually do not puncture the wound. The most common types have a plastic nozzle which is placed over the affected area, and then the syringe piston is pulled back, creating a vacuum that allegedly sucks out the venom. Attempts to treat snakebites in this way are specifically advised against, as they are ineffective and can cause additional injury. Syringes of this type are sometimes used for extracting human botfly larvae from the skin. Oral An oral syringe is a measuring instrument used to accurately measure doses of liquid medication, expressed in millilitres (mL). They do not have threaded tips, because no needle or other device needs to be screwed onto them. The contents are simply squirted or sucked from the syringe directly into the mouth of the person or animal. Oral syringes are available in various sizes, from 1–10 mL and larger. An oral syringe is typically purple in colour to distinguish it from a standard injection syringe with a luer tip. The sizes most commonly used are 1 mL, 2.5 mL, 3 mL, 5 mL and 10 mL. Dental syringes A dental syringe is used by dentists for the injection of an anesthetic. It consists of a breech-loading syringe fitted with a sealed cartridge containing an anesthetic solution. 
In 1928, Bayer Dental developed, coined and produced a sealed cartridge system under the registered trademark Carpule®. The current trademark owner is Kulzer Dental GmbH. The carpules have long been reserved for anesthetic products for dental use. It is practically a bottomless flask. The latter is replaced by an elastomer plug that can slide in the body of the cartridge. This plug will be pushed by the plunger of the syringe. The neck is closed with a rubber cap. The dentist places the cartridge directly into a stainless steel syringe, with a double-pointed (single-use) needle. The tip placed on the cartridge side punctures the capsule and the piston will push the product. There is therefore no contact between the product and the ambient air during use. The ancillary tool (generally part of a dental engine) used to supply water, compressed air or mist (formed by combination of water and compressed air) to the oral cavity for the purpose of irrigation (cleaning debris away from the area the dentist is working on), is also referred to as a dental syringe or a dental irrigation nozzle. A 3-way syringe/nozzle has separate internal channels supplying air, water or a mist created by combining the pressurized air with the waterflow. The syringe tip can be separated from the main body and replaced when necessary. In the UK and Ireland, manually operated hand syringes are used to inject lidocaine into patients' gums. Dose-sparing syringes A dose-sparing syringe is one which minimises the amount of liquid remaining in the barrel after the plunger has been depressed. These syringes feature a combined needle and syringe, and a protrusion on the face of the plunger to expel liquid from the needle hub. Such syringes were particularly popular during the COVID-19 pandemic as vaccines were in short supply. Regulation In some jurisdictions, the sale or possession of hypodermic syringes may be controlled or prohibited without a prescription, due to its potential use with illegal intravenous drugs. Non-medical uses The syringe has many non-medical applications. Laboratory applications Medical-grade disposable hypodermic syringes are often used in research laboratories for convenience and low cost. Another application is to use the needle tip to add liquids to very confined spaces, such as washing out some scientific apparatus. They are often used for measuring and transferring solvents and reagents where a high precision is not required. Alternatively, microliter syringes can be used to measure and dose chemicals very precisely by using a small diameter capillary as the syringe barrel. The polyethylene construction of these disposable syringes usually makes them rather chemically resistant. There is, however, a risk of the contents of the syringes leaching plasticizers from the syringe material. Non-disposable glass syringes may be preferred where this is a problem. Glass syringes may also be preferred where a very high degree of precision is important (i.e. quantitative chemical analysis), because their engineering tolerances are lower and the plungers move more smoothly. In these applications, the transfer of pathogens is usually not an issue. Used with a long needle or cannula, syringes are also useful for transferring fluids through rubber septa when atmospheric oxygen or moisture are being excluded. Examples include the transfer of air-sensitive or pyrophoric reagents such as phenylmagnesium bromide and n-butyllithium respectively. 
Glass syringes are also used to inject small samples for gas chromatography (1 μl) and mass spectrometry (10 μl). Syringe drivers may be used with the syringe as well. Cooking Some culinary uses of syringes are injecting liquids (such as gravy) into other foods, or for the manufacture of some candies. Syringes may also be used when cooking meat to enhance flavor and texture by injecting juices inside the meat, and in baking to inject filling inside a pastry. It is common for these syringes to be made of stainless steel components, including the barrel. Such facilitates easy disassembly and cleaning. Others Syringes are used to refill ink cartridges with ink in fountain pens. Common workshop applications include injecting glue into tight spots to repair joints where disassembly is impractical or impossible; and injecting lubricants onto working surfaces without spilling. Sometimes a large hypodermic syringe is used without a needle for very small baby mammals to suckle from in artificial rearing. Historically, large pumps that use reciprocating motion to pump water were referred to as syringes. Pumps of this type were used as early firefighting equipment. There are fountain syringes where the liquid is in a bag or can and goes to the nozzle via a pipe. In earlier times, clyster syringes were used for that purpose. Loose snus is often applied using modified syringes. The nozzle is removed so the opening is the width of the chamber. The snus can be packed tightly into the chamber and plunged into the upper lip. Syringes, called portioners, are also manufactured for this particular purpose. Historical timeline Piston syringes were used in ancient times. During the 1st century AD Aulus Cornelius Celsus mentioned the use of them to treat medical complications in his De Medicina. 9th century: The Iraqi/Egyptian surgeon Ammar ibn 'Ali al-Mawsili' described a syringe in the 9th century using a hollow glass tube, and suction to remove cataracts from patients' eyes, a practice that remained in use until at least the 13th century. Pre-Columbian Native Americans created early hypodermic needles and syringes using "hollow bird bones and small animal bladders". 1650: Blaise Pascal invented a syringe (not necessarily hypodermic) as an application of what is now called Pascal's law. 1844: Irish physician Francis Rynd invented the hollow needle and used it to make the first recorded subcutaneous injections, specifically a sedative to treat neuralgia. 1853: Charles Pravaz and Alexander Wood independently developed medical syringes with a needle fine enough to pierce the skin. Pravaz's syringe was made of silver and used a screw mechanism to dispense fluids. Wood's syringe was made of glass, enabling its contents to be seen and measured, and used a plunger to inject them. It is effectively the syringe that is used today. 1865: Charles Hunter coined the term "hypodermic", and developed an improvement to the syringe that locked the needle into place so that it would not be ejected from the end of the syringe when the plunger was depressed, and published research indicating that injections of pain relief could be given anywhere in the body, not just in the area of pain, and still be effective. 1867: The Medical and Chirurgical Society of London investigated whether injected narcotics had a general effect (as argued by Hunter) or whether they only worked locally (as argued by Wood). After conducting animal tests and soliciting opinions from the wider medical community, they firmly sided with Hunter. 
1899: Letitia Mumford Geer patented a syringe which could be operated with one hand and which could be used for self-administered rectal injections. 1946: Chance Brothers in Smethwick, West Midlands, England, produced the first all-glass syringe with interchangeable barrel and plunger, thereby allowing mass-sterilisation of components without the need for matching them. 1949: Australian inventor Charles Rothauser created the world's first plastic, disposable hypodermic syringe at his Adelaide factory. 1951: Rothauser produced the first injection-moulded syringes made of polypropylene, a plastic that can be heat-sterilised. Millions were made for Australian and export markets. 1956: New Zealand pharmacist and inventor Colin Murdoch was granted New Zealand and Australian patents for a disposable plastic syringe.
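As a small illustration of the U-100 arithmetic described in the insulin syringe section above (100 units per millilitre, so a 10 mL vial holds 1,000 units), the sketch below converts between units and plunger volume. The function names and example doses are illustrative assumptions only, not dosing guidance.

UNITS_PER_ML_U100 = 100  # by definition of U-100 insulin

def dose_volume_ml(units, units_per_ml=UNITS_PER_ML_U100):
    """Volume in mL to draw for a given number of insulin units."""
    return units / units_per_ml

def doses_per_vial(vial_ml, units_per_dose, units_per_ml=UNITS_PER_ML_U100):
    """Number of full doses a vial holds, e.g. a 10 mL vial at 20 units per dose."""
    total_units = vial_ml * units_per_ml
    return int(total_units // units_per_dose)

print(dose_volume_ml(30))        # 0.3 mL for a 30-unit dose
print(doses_per_vial(10, 20))    # 50 doses from a 10 mL (1000-unit) vial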
Technology
Equipment
null
147939
https://en.wikipedia.org/wiki/Constant%20of%20integration
Constant of integration
In calculus, the constant of integration, often denoted by C (or c), is a constant term added to an antiderivative of a function f(x) to indicate that the indefinite integral of f(x) (i.e., the set of all antiderivatives of f(x)), on a connected domain, is only defined up to an additive constant. This constant expresses an ambiguity inherent in the construction of antiderivatives. More specifically, if a function f(x) is defined on an interval, and F(x) is an antiderivative of f(x), then the set of all antiderivatives of f(x) is given by the functions F(x) + C, where C is an arbitrary constant (meaning that any value of C would make F(x) + C a valid antiderivative). For that reason, the indefinite integral is often written as ∫ f(x) dx = F(x) + C, although the constant of integration is sometimes omitted in lists of integrals for simplicity. Origin The derivative of any constant function is zero. Once one has found one antiderivative F(x) for a function f(x), adding or subtracting any constant C will give us another antiderivative, because (F(x) + C)′ = F′(x) + 0 = F′(x) = f(x). The constant is a way of expressing that every function with at least one antiderivative will have an infinite number of them. Let F and G be two everywhere differentiable functions. Suppose that F′(x) = G′(x) for every real number x. Then there exists a real number C such that G(x) − F(x) = C for every real number x. To prove this, notice that [G(x) − F(x)]′ = G′(x) − F′(x) = 0. So G can be replaced by G − F, and F by the constant function 0, making the goal to prove that an everywhere differentiable function whose derivative is always zero must be constant: Choose a real number a, and let C = F(a). For any x, the fundamental theorem of calculus, together with the assumption that the derivative of F vanishes, implies that F(x) − F(a) = ∫ₐˣ F′(t) dt = 0, thereby showing that F is a constant function. Two facts are crucial in this proof. First, the real line is connected. If the real line were not connected, one would not always be able to integrate from our fixed a to any given x. For example, if one were to ask for functions defined on the union of intervals [0,1] and [2,3], and if a were 0, then it would not be possible to integrate from 0 to 3, because the function is not defined between 1 and 2. Here, there will be two constants, one for each connected component of the domain. In general, by replacing constants with locally constant functions, one can extend this theorem to disconnected domains. For example, there are two constants of integration for ∫ dx/x, and infinitely many for ∫ tan(x) dx, so for example, the general form for the integral of 1/x is: ∫ dx/x = ln(−x) + C₁ for x < 0, and ∫ dx/x = ln(x) + C₂ for x > 0. Second, F and G were assumed to be everywhere differentiable. If F and G are not differentiable at even one point, then the theorem might fail. As an example, let F be the Heaviside step function, which is zero for negative values of x and one for non-negative values of x, and let G(x) = 0. Then the derivative of F is zero where it is defined, and the derivative of G is always zero. Yet it is clear that F and G do not differ by a constant. Even if it is assumed that F and G are everywhere continuous and almost everywhere differentiable, the theorem still fails. As an example, take F to be the Cantor function and again let G = 0. It turns out that adding and subtracting constants is the only flexibility available in finding different antiderivatives of the same function. That is, all antiderivatives are the same up to a constant. To express this fact for cos(x), one can write: ∫ cos(x) dx = sin(x) + C, where C is the constant of integration. It is easily determined that all of the following functions are antiderivatives of cos(x): sin(x), sin(x) + 1, and sin(x) − π, since each differs from the others only by a constant. Significance The inclusion of the constant of integration is necessitated in some, but not all circumstances.
For instance, when evaluating definite integrals using the fundamental theorem of calculus, the constant of integration can be ignored as it will always cancel with itself. However, different methods of computation of indefinite integrals can result in multiple resulting antiderivatives, each implicitly containing different constants of integration, and no particular option may be considered simplest. For example, 2 sin(x) cos(x) can be integrated in at least three different ways. Additionally, omission of the constant, or setting it to zero, may make it prohibitive to deal with a number of problems, such as those with initial value conditions. A general solution containing the arbitrary constant is often necessary to identify the correct particular solution. For example, to obtain the antiderivative of cos(x) that has the value 400 at x = π, only one value of C will work (in this case C = 400). The constant of integration also implicitly or explicitly appears in the language of differential equations. Almost all differential equations will have many solutions, and each constant represents the unique solution of a well-posed initial value problem. An additional justification comes from abstract algebra. The space of all (suitable) real-valued functions on the real numbers is a vector space, and the differential operator d/dx is a linear operator. The operator d/dx maps a function to zero if and only if that function is constant. Consequently, the kernel of d/dx is the space of all constant functions. The process of indefinite integration amounts to finding a pre-image of a given function. There is no canonical pre-image for a given function, but the set of all such pre-images forms a coset. Choosing a constant is the same as choosing an element of the coset. In this context, solving an initial value problem is interpreted as selecting the element of the coset lying in the hyperplane given by the initial conditions.
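Restating the two examples above in explicit notation (a short sketch using the functions named in the text):

\[
\int 2\sin(x)\cos(x)\,dx \;=\; \sin^{2}(x) + C_{1} \;=\; -\cos^{2}(x) + C_{2} \;=\; -\tfrac{1}{2}\cos(2x) + C_{3},
\]

and the three results differ only by constants, since \(\sin^{2}(x) = -\cos^{2}(x) + 1 = -\tfrac{1}{2}\cos(2x) + \tfrac{1}{2}\). For the initial-value example, the general antiderivative of \(\cos(x)\) is \(F(x) = \sin(x) + C\); imposing \(F(\pi) = 400\) gives \(\sin(\pi) + C = 400\), and since \(\sin(\pi) = 0\), the only value that works is \(C = 400\).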
Mathematics
Integral calculus
null
147952
https://en.wikipedia.org/wiki/Videotape
Videotape
Videotape is magnetic tape used for storing video and usually sound in addition. Information stored can be in the form of either an analog or digital signal. Videotape is used in both video tape recorders (VTRs) and, more commonly, videocassette recorders (VCRs) and camcorders. Videotapes have also been used for storing scientific or medical data, such as the data produced by an electrocardiogram. Because video signals have a very high bandwidth, and stationary heads would require extremely high tape speeds, in most cases, a helical-scan video head rotates against the moving tape to record the data in two dimensions. Tape is a linear method of storing information and thus imposes delays to access a portion of the tape that is not already against the heads. The early 2000s saw the introduction and rise to prominence of high-quality random-access video recording media such as hard disks and flash memory. Since then, videotape has been increasingly relegated to archival and similar uses. Early formats The electronics division of entertainer Bing Crosby's production company, Bing Crosby Enterprises (BCE), gave the world's first demonstration of a videotape recording in Los Angeles on November 11, 1951. In development by John T. Mullin and Wayne R. Johnson since 1950, the device gave what were described as "blurred and indistinct" images using a modified Ampex 200 tape recorder and standard quarter-inch (0.635 cm) audio tape moving at per second. A year later, an improved version using one-inch (2.54 cm) magnetic tape was shown to the press, who reportedly expressed amazement at the quality of the images although they had a "persistent grainy quality that looked like a worn motion picture." Overall the picture quality was still considered inferior to the best kinescope recordings on film. Bing Crosby Enterprises hoped to have a commercial version available in 1954 but none came forth. The BBC experimented from 1952 to 1958 with a high-speed linear videotape system called Vision Electronic Recording Apparatus (VERA), but this was ultimately dropped in favor of quadruplex videotape. VERA used half-inch metallized (1.27 cm) tape on 20-inch reels traveling at . RCA demonstrated the magnetic tape recording of both black-and-white and color television programs at its Princeton laboratories on December 1, 1953. The high-speed longitudinal tape system, called Simplex, in development since 1951, could record and play back only a few minutes of a television program. The color system used half-inch (1.27 cm) tape on 10½ inch reels to record five tracks, one each for red, blue, green, synchronization, and audio. The black-and-white system used quarter-inch (0.635 cm) tape also on 10½ inch reels with two tracks, one for video and one for audio. Both systems ran at with per reel yielding an 83-second capacity. RCA-owned NBC first used it on The Jonathan Winters Show on October 23, 1956, when a prerecorded song sequence by Dorothy Collins in color was included in the otherwise live television program. In 1953, Norikazu Sawazaki developed a prototype helical scan video tape recorder. BCE demonstrated a color system in February 1955 using a longitudinal recording on half-inch (1.27 cm) tape. CBS, RCA's competitor, was about to order BCE machines when Ampex introduced the superior Quadruplex system. BCE was acquired by 3M Company in 1956. In 1959, Toshiba released the first commercial helical scan video tape recorder. 
Broadcast video Quad The first commercial professional broadcast quality videotape machines capable of replacing kinescopes were the two-inch quadruplex videotape (Quad) machines introduced by Ampex on April 14, 1956, at the National Association of Broadcasters convention in Chicago. Quad employed a transverse (scanning the tape across its width) four-head system on a two-inch (5.08 cm) tape and stationary heads for the soundtrack. CBS Television first used the Ampex VRX-1000 Mark IV at its Television City studios in Hollywood on November 30, 1956, to play a delayed broadcast of Douglas Edwards and the News from New York City to the Pacific Time Zone. On January 22, 1957, the NBC Television game show Truth or Consequences, produced in Hollywood, became the first program to be broadcast in all time zones from a prerecorded videotape. Ampex introduced a color videotape recorder in 1958 in a cross-licensing agreement with RCA, whose engineers had developed it from an Ampex black-and-white recorder. NBC's special, An Evening With Fred Astaire (1958), is the oldest surviving television network color videotape, and has been restored by the UCLA Film and Television Archive. On December 7, 1963, instant replay, originally a videotape-based system, was used for the first time during the live transmission of the Army–Navy Game by its inventor, director Tony Verna. Although Quad became the industry standard for approximately thirty years, it has drawbacks such as an inability to freeze pictures, and no picture search. Also, in early machines, a tape could reliably be played back using only the same set of hand-made tape heads, which wore out very quickly. Despite these problems, Quad is capable of producing excellent images. Subsequent videotape systems have used helical scan, where the video heads record diagonal tracks (of complete fields) onto the tape. Many early videotape recordings were not preserved. While much less expensive (if repeatedly recycled) and more convenient than kinescope, the high cost of 3M Scotch 179 and other early videotapes ($300 per one-hour reel) meant that most broadcasters erased and reused them, and (in the United States) regarded videotape as simply a better and more cost-effective means of time-delaying broadcasts than kinescopes. It was the four time zones of the continental United States which had made the system very desirable in the first place. Some early broadcast videotapes have survived, including The Edsel Show, broadcast live on October 13, 1957 and An Evening With Fred Astaire which aired on October 18, 1958 and was the oldest color videotape of an entertainment program known to exist until the discovery of the October 8, 1958 episode of the Kraft Music Hall hosted by Milton Berle. The oldest color videotape known to survive is the May 1958 dedication of the WRC-TV studios in Washington, D.C.). In 1976, NBC's 50th-anniversary special included an excerpt from a 1957 color special starring Donald O'Connor; despite some obvious technical problems, the color tape was remarkably good. Some classic television programs recorded on studio videotape have been made available on DVD – among them NBC's Peter Pan (first telecast in 1960) with Mary Martin as Peter, several episodes of The Dinah Shore Chevy Show (late 1950s/early 60s), the final Howdy Doody Show (1960), the television version of Hal Holbrook's one-man show Mark Twain Tonight (first telecast in 1967), and Mikhail Baryshnikov's classic production of the ballet The Nutcracker (first telecast in 1977). 
Types C and B The next format to gain widespread usage was 1 inch (2.54 cm) Type C videotape introduced in 1976. This format introduced features such as shuttling, various-speed playback (including slow-motion), and still framing. Although 1" Type C's quality was still quite high, the sound and picture reproduction attainable on the format were of slightly lower quality than Quad. However, compared to Quad, 1" Type C machines required much less maintenance, took up less space, and consumed much less electrical power. In Europe, a similar tape format was developed, called 1 inch Type B videotape. Type B machines use the same 1" tape as Type C but they lacked C's shuttle and slow-motion options. The picture quality is slightly better, though. Type B was the broadcast norm in continental Europe for most of the 1980s. Professional cassette formats A videocassette is a cartridge containing videotape. In 1969, Sony introduced a prototype for the first widespread video cassette, the ¾ʺ (1.905 cm) composite U-matic system, which Sony introduced commercially in September 1971 after working out industry standards with other manufacturers. Sony later refined it to Broadcast Video U-matic (BVU). Sony continued its hold on the professional market with its ever-expanding ½ʺ (1.27 cm) component video Betacam family introduced in 1982. This tape form factor would go on to be used for leading professional digital video formats. Panasonic had some limited success with its MII system, but never could compare to Betacam in terms of market share. The next step was the digital revolution. Sony's D-1 was introduced in 1986 and featured uncompressed digital component recording. Because D-1 was extremely expensive, the composite D-2 (Sony, 1988) and D-3 (Panasonic, 1991) were introduced soon after. Ampex introduced the first compressed component recording with its DCT series in 1992. Panasonic's D-5 format was introduced in 1994. Like D-1, it is uncompressed, but much more affordable. The DV standard, which debuted in 1995, and was widely used both in its native form as MiniDV and in more robust professional variants. In digital camcorders, Sony adapted the Betacam system with its Digital Betacam format in 1993, and in 1996 following it up with the cheaper Betacam SX and the 2000 MPEG IMX format, The semiprofessional DV-based DVCAM system was introduced in 1996. Panasonic used its DV variant DVCPRO for all professional cameras, with the higher-end format DVCPRO50 being a direct descendant. JVC developed the competing D9/Digital-S format, which compresses video data in a way similar to DVCPRO but uses a cassette similar to S-VHS media. Many helical scan cassette formats such as VHS and Betacam use a head drum with heads that use azimuth recording, in which the heads in the head drum have a gap that is tilted at an angle, and opposing heads have their gaps tilted so as to oppose each other. High definition The introduction of HDTV video production necessitated a medium for storing high-definition video. In 1997, Sony supplemented its Betacam family with the HD-capable HDCAM standard and its higher-end cousin HDCAM SR in 2003. Panasonic's competing HD format for its camcorders was based on DVCPRO and called DVCPRO HD. For VTR and archive use, Panasonic expanded the D-5 specification to store compressed HD streams and called it D-5 HD. Home video Videocassette recorders The first consumer videocassette recorders (VCRs) used Sony U-matic technology and were launched in 1971. 
Philips entered the domestic market the following year with the N1500. Sony's Betamax (1975) and JVC's VHS (1976) created a mass-market for VCRs and the two competing systems battled the videotape format war, which VHS ultimately won. In Europe, Philips had developed the Video 2000 format, which did not find favor with the TV rental companies in the UK and lost out to VHS. At first VCRs and videocassettes were very expensive, but by the late 1980s the price had come down enough to make them affordable to a mainstream audience. Videocassettes finally made it possible for consumers to buy or rent a complete film and watch it at home whenever they wished, rather than going to a movie theater or having to wait until it was telecast. It gave birth to video rental stores, Blockbuster the largest chain, which lasted from 1985 to 2005. It also made it possible for a VCR owner to begin time shifting their viewing of films and other television programs. This caused an enormous change in viewing practices, as one no longer had to wait for a repeat of a program that had been missed. The shift to home viewing also changed the movie industry's revenue streams, because home renting created an additional window of time in which a film could make money. In some cases, films that did only modestly in their theater releases went on to have strong performances in the rental market (e.g., cult films). VHS became the leading consumer tape format for home movies after the videotape format war, though its follow-ups S-VHS, W-VHS and D-VHS never caught up in popularity. In the early 2000s in the prerecorded video market, VHS began to be displaced by DVD. The DVD format has several advantages over VHS tape. A DVD is much better able to take repeated viewings than VHS tape. Whereas a VHS tape can be erased though degaussing, DVDs and other optical discs are not affected by magnetic fields. DVDs can still be damaged by scratches. DVDs are smaller and take less space to store. DVDs can support both standard 4x3 and widescreen 16x9 screen aspect ratios and DVDs can provide twice the video resolution of VHS. DVD supports random access while a VHS tape is restricted to sequential access and must be rewound. DVDs can have interactive menus, multiple language tracks, audio commentaries, closed captioning and subtitling (with the option of turning the subtitles on or off, or selecting subtitles in several languages). Moreover, a DVD can be played on a computer. Due to these advantages, by the mid-2000s, DVDs were the dominant form of prerecorded video movies. Through the late 1990s and early 2000s consumers continued to use VCRs to record over-the-air TV shows, because consumers could not make home recordings onto DVDs. This last barrier to DVD domination was broken in the late 2000s with the advent of inexpensive DVD recorders and digital video recorders (DVRs). In July 2016, the last known manufacturer of VCRs, Funai, announced that it was ceasing VCR production. Consumer and prosumer camcorders Early consumer camcorders used full-size VHS or Betamax cassettes. Later models switched to more compact formats, designed explicitly for smaller camcorder use, like VHS-C and Video8. VHS-C is a downsized version of VHS, using the same recording method and the same tape, but in a smaller cassette. It is possible to play VHS-C tapes in a regular VHS tape recorder by using an adapter. After the introduction of S-VHS, a corresponding compact version, S-VHS-C, was released as well. 
Video8 is an indirect descendant of Betamax, using narrower tape and a smaller cassette. Because of its narrower tape and other technical differences, it is not possible to develop an adapter from Video8 to Betamax. Video8 was later developed into Hi8, which provides better resolution similar to S-VHS. The first consumer-level and lower-end professional (prosumer) digital video recording format, introduced in 1995, used a smaller Digital Video Cassette (DVC). The format was later renamed MiniDV to reflect the DV encoding scheme, but the tapes are still marked DVC. Some later formats like DVC Pro from Panasonic reflect the original name. The DVC or MiniDV format provides broadcast-quality video and sophisticated nonlinear editing capability on consumer and some professional equipment and has been used on feature films, including Danny Boyle's 28 Days Later (2002, shot on a Canon XL1) and David Lynch's Inland Empire (2006, shot on a Sony DSR-PD150). In 1999 Sony backported the DV recording scheme to 8-mm systems, creating Digital8. By using the same cassettes as Hi8, many Digital8 camcorders were able to play analog Video8 or Hi8 recordings, preserving compatibility with already recorded analog video tapes. Sony introduced another camcorder cassette format called MicroMV in 2001. Sony was the only electronics manufacturer to sell MicroMV cameras. In 2006, Sony stopped offering new MicroMV camcorder models. In November 2015, Sony announced that shipment of MicroMV cassettes would be discontinued in March 2016. In the late 2000s, MiniDV and its high-definition cousin, HDV, were the two most popular consumer or prosumer tape-based formats. The formats use different encoding methods, but the same cassette type. Future of tape With advances in technology, videotape has moved past its original uses (original recording, editing, and broadcast playback) and is now primarily an archival medium. The death of tape for video recording was predicted as early as 1995 when the Avid nonlinear editing system was demonstrated storing video clips on hard disks. Yet videotape was still used extensively, especially by consumers, up until about 2004, when DVD-based camcorders became affordable and domestic computers had large enough hard drives to store an acceptable amount of video. Consumer camcorders have switched from being tape-based to tapeless machines that record video as computer files. Small hard disks and writable optical discs have been used, with solid-state memory such as SD cards being the current market leader. There are two primary advantages: First, copying a tape recording onto a computer or other video machine occurs in real time (e.g. a ten-minute video would take ten minutes to copy); since tapeless camcorders record video as computer-ready data files, the files can copied onto a computer significantly faster than real time. Second, tapeless camcorders, and those using solid-state memory in particular, are far simpler mechanically and so are more reliable. Despite these conveniences, tape is still used extensively with filmmakers and television networks because of its longevity, low cost, and reliability. Master copies of visual content are often stored on tape for these reasons, particularly by users who cannot afford to move to tapeless machines. During the mid- to late 2000s, professional users such as broadcast television were still using tape heavily but tapeless formats like P2, XDCAM and AVCHD were gaining broader acceptance. 
While live recording has migrated to solid state, optical disc (Sony's XDCAM) and hard disks, the high cost of solid state and the limited shelf life of hard-disk drives make them less desirable for archival use, for which tape is still used.
Technology
Non-volatile memory
null
147957
https://en.wikipedia.org/wiki/Filename%20extension
Filename extension
A filename extension, file name extension or file extension is a suffix to the name of a computer file (for example, .txt, .mp3, .exe). The extension indicates a characteristic of the file contents or its intended use. A filename extension is typically delimited from the rest of the filename with a full stop (period), but in some systems it is separated with spaces. Some file systems implement filename extensions as a feature of the file system itself and may limit the length and format of the extension, while others treat filename extensions as part of the filename without special distinction. Operating system and file system support The Multics file system stores the file name as a single string, not split into base name and extension components, allowing the "." to be just another character allowed in file names. It allows for variable-length filenames, permitting more than one dot, and hence multiple suffixes, as well as no dot, and hence no suffix. Some components of Multics, and applications running on it, use suffixes to indicate file types, but not all files are required to have a suffix — for example, executables and ordinary text files usually have no suffixes in their names. File systems for UNIX-like operating systems also store the file name as a single string, with "." as just another character in the file name. A file with more than one suffix is sometimes said to have more than one extension, although terminology varies in this regard, and most authors define extension in a way that does not allow more than one in the same file name. More than one extension usually represents nested transformations, such as files.tar.gz (the .tar indicates that the file is a tar archive of one or more files, and the .gz indicates that the tar archive file is compressed with gzip). Programs transforming or creating files may add the appropriate extension to names inferred from input file names (unless explicitly given an output file name), but programs reading files usually ignore the information; it is mostly intended for the human user. It is more common, especially in binary files, for the file to contain internal or external metadata describing its contents. This model generally requires the full filename to be provided in commands, whereas the metadata approach often allows the extension to be omitted. In DOS and 16-bit Windows, file names have a maximum of 8 characters, a period, and an extension of up to three letters. The FAT file system for DOS and Windows stores file names as an 8-character name and a three-character extension. The period character is not stored. The High Performance File System (HPFS), used in Microsoft and IBM's OS/2 stores the file name as a single string, with the "." character as just another character in the file name. The convention of using suffixes continued, even though HPFS supports extended attributes for files, allowing a file's type to be stored in the file as an extended attribute. Microsoft's Windows NT's native file system, NTFS, and the later ReFS, also store the file name as a single string; again, the convention of using suffixes to simulate extensions continued, for compatibility with existing versions of Windows. In Windows NT 3.5, a variant of the FAT file system, called VFAT appeared; it supports longer file names, with the file name being treated as a single string. Windows 95, with VFAT, introduced support for long file names, and removed the 8.3 name/extension split in file names from non-NT Windows. 
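As a simplified illustration of the 8.3 storage layout described above (an 8-character name field and a 3-character extension field, with the period itself not stored), the sketch below splits a short DOS-style name into those two fields. The function name is illustrative, and the real VFAT rules for deriving short names from long ones (tilde-numbered aliases, character substitution) are deliberately not modelled.

def fat_8_3_fields(filename):
    """Split a short DOS-style name into the space-padded 8-character name and
    3-character extension fields of a FAT directory entry.  Simplified sketch:
    it assumes the name already fits 8.3 and ignores case folding, invalid
    characters and the separate long-file-name entries used by VFAT.
    """
    base, sep, ext = filename.upper().rpartition(".")
    if not sep:                      # no dot at all: empty extension
        base, ext = filename.upper(), ""
    if len(base) > 8 or len(ext) > 3:
        raise ValueError("name does not fit the 8.3 format")
    return base.ljust(8), ext.ljust(3)

print(fat_8_3_fields("readme.txt"))    # ('README  ', 'TXT')
print(fat_8_3_fields("autoexec.bat"))  # ('AUTOEXEC', 'BAT')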
The classic Mac OS disposed of filename-based extension metadata entirely; it used, instead, a distinct file type code to identify the file format. Additionally, a creator code was specified to determine which application would be launched when the file's icon was double-clicked. macOS, however, uses filename suffixes as a consequence of being derived from the UNIX-like NeXTSTEP operating system, in addition to using type and creator codes. In Commodore systems, files can only have one of four extensions: PRG, SEQ, USR, REL. However, these are used to separate data types used by a program and are irrelevant for identifying their contents. With the advent of graphical user interfaces, the issue of file management and interface behavior arose. Microsoft Windows allowed multiple applications to be associated with a given extension, and different actions were available for selecting the required application, such as a context menu offering a choice between viewing, editing or printing the file. The assumption was still that any extension represented a single file type; there was an unambiguous mapping between extension and icon. When the Internet age first arrived, those using Windows systems that were still restricted to 8.3 filename formats had to create web pages with names ending in .HTM, while those using Macintosh or UNIX computers could use the recommended .html filename extension. This also became a problem for programmers experimenting with the Java programming language, since it requires the four-letter suffix .java for source code files and the five-letter suffix .class for Java compiler object code output files. Content type Filename extensions may be considered a type of metadata. They are commonly used to imply information about the way data might be stored in the file. The exact definition, giving the criteria for deciding what part of the file name is its extension, belongs to the rules of the specific file system used; usually the extension is the substring which follows the last occurrence, if any, of the dot character (example: txt is the extension of the filename readme.txt, and html the extension of index.html). On file systems of some mainframe systems such as CMS in VM, VMS, and of PC systems such as CP/M and derivative systems such as MS-DOS, the extension is a separate namespace from the filename. Under Microsoft's DOS and Windows, extensions such as EXE, COM or BAT indicate that a file is a program executable. In OS/360 and successors, the part of the dataset name following the last period, called the low level qualifier, is treated as an extension by some software, e.g., TSO EDIT, but it has no special significance to the operating system itself; the same applies to Unix files in MVS. The filename extension was originally used to determine the file's generic type. The need to condense a file's type into three characters frequently led to abbreviated extensions. Examples include using .GFX for graphics files, .TXT for plain text, and .MUS for music. However, because many different software programs have been made that all handle these data types (and others) in a variety of ways, filename extensions started to become closely associated with certain products, and even with specific product versions. For example, early WordStar files used .WS or .WSn, where n was the program's version number. Also, conflicting uses of some filename extensions developed. One example is .rpm, used for both RPM Package Manager packages and RealPlayer Media files.
Others are .qif, shared by DESQview fonts, Quicken financial ledgers, and QuickTime pictures; .gba, shared by GrabIt scripts and Game Boy Advance ROM images; .sb, used for SmallBasic and Scratch; and .dts, being used for Dynamix Three Space and DTS. Compared to MIME type In many Internet protocols, such as HTTP and MIME email, the type of a bitstream is stated as the media type, or MIME type, of the stream, rather than a filename extension. This is given in a line of text preceding the stream, such as Content-type: text/plain. There is no standard mapping between filename extensions and media types, resulting in possible mismatches in interpretation between authors, web servers, and client software when transferring files over the Internet. For instance, a content author may specify the extension svgz for a compressed Scalable Vector Graphics file, but a web server that does not recognize this extension may not send the proper content type application/svg+xml and its required compression header, leaving web browsers unable to correctly interpret and display the image. BeOS, whose BFS file system supports extended attributes, would tag a file with its media type as an extended attribute. Some desktop environments, such as KDE Plasma and GNOME, associate a media type with a file by examining both the filename suffix and the contents of the file, in the fashion of the file command, as a heuristic. They choose the application to launch when a file is opened based on that media type, reducing the dependency on filename extensions. macOS uses both filename extensions and media types, as well as file type codes, to select a Uniform Type Identifier by which to identify the file type internally. Executable programs The use of a filename extension in a command name appears occasionally, usually as a side effect of the command having been implemented as a script, e.g., for the Bourne shell or for Python, and the interpreter name being suffixed to the command name, a practice common on systems that rely on associations between filename extension and interpreter, but sharply deprecated in Unix-like systems, such as Linux, Oracle Solaris, BSD-based systems, and Apple's macOS, where the interpreter is normally specified as a header in the script ("shebang"). On association-based systems, the filename extension is generally mapped to a single, system-wide selection of interpreter for that extension (such as ".py" meaning to use Python), and the command itself is runnable from the command line even if the extension is omitted (assuming appropriate setup is done). If the implementation language is changed, the command name extension is changed as well, and the OS provides a consistent API by allowing the same extensionless version of the command to be used in both cases. This method suffers somewhat from the essentially global nature of the association mapping, as well as from developers' incomplete avoidance of extensions when calling programs, and that developers can not force that avoidance. Windows is the only remaining widespread employer of this mechanism. On systems with interpreter directives, including virtually all versions of Unix, command name extensions have no special significance, and are by standard practice not used, since the primary method to set interpreters for scripts is to start them with a single line specifying the interpreter to use. 
In these environments, including the extension in a command name unnecessarily exposes an implementation detail which puts all references to the commands from other programs at future risk if the implementation changes. For example, it would be perfectly normal for a shell script to be reimplemented in Python or Ruby, and later in C or C++, all of which would change the name of the command were extensions used. Without extensions, a program always has the same extension-less name, with only the interpreter directive and/or magic number changing, and references to the program from other programs remain valid. Security issues The default behavior of File Explorer, the file browser provided with Microsoft Windows, is for filename extensions to not be displayed. Malicious users have tried to spread computer viruses and computer worms by using file names formed like LOVE-LETTER-FOR-YOU.TXT.vbs. The idea is that this will appear as LOVE-LETTER-FOR-YOU.TXT, a harmless text file, without alerting the user to the fact that it is a harmful computer program, in this case, written in VBScript. The default behavior for ReactOS is to display filename extensions in ReactOS Explorer. Later Windows versions (starting with Windows XP Service Pack 2 and Windows Server 2003) included customizable lists of filename extensions that should be considered "dangerous" in certain "zones" of operation, such as when downloaded from the web or received as an e-mail attachment. Modern antivirus software systems also help to defend users against such attempted attacks where possible. Some viruses take advantage of the similarity between the ".com" top-level domain and the ".COM" filename extension by emailing malicious, executable command-file attachments under names superficially similar to URLs (e.g., "myparty.yahoo.com"), with the effect that unaware users click on email-embedded links that they think lead to websites but actually download and execute the malicious attachments. There have been instances of malware crafted to exploit vulnerabilities in some Windows applications which could cause a stack-based buffer overflow when opening a file with an overly long, unhandled filename extension. The filename extension is just a marker and the content of the file does not have to match it. This can be used to disguise malicious content. When trying to identify a file for security reasons, it is therefore considered dangerous to rely on the extension alone and a proper analysis of the content of the file is preferred. For example, on UNIX-like systems, it is not uncommon to find files with no extensions at all, as commands such as file are meant to be used instead, and will read the file's header to determine its content.
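As an illustration of the extraction rule described above (the extension is whatever follows the last dot, if any) and of why a disguised name such as LOVE-LETTER-FOR-YOU.TXT.vbs is really a .vbs file, the following minimal C sketch pulls out the final suffix of a file name. The helper name file_extension is invented for this example, and the value it returns remains only a hint about the file's contents, not a guarantee. Example in C:

#include <stdio.h>
#include <string.h>

/* Return a pointer to the extension of name (the text after the last '.'),
 * or NULL if there is none.  A leading dot, as in Unix hidden files such as
 * ".profile", is not treated as the start of an extension. */
static const char *file_extension(const char *name)
{
    const char *dot = strrchr(name, '.');
    if (dot == NULL || dot == name || dot[1] == '\0')
        return NULL;
    return dot + 1;
}

int main(void)
{
    const char *samples[] = {
        "readme.txt", "index.html", "files.tar.gz",
        "LOVE-LETTER-FOR-YOU.TXT.vbs", "Makefile", ".profile"
    };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        const char *ext = file_extension(samples[i]);
        printf("%-28s -> %s\n", samples[i], ext ? ext : "(none)");
    }
    return 0;
}

For files.tar.gz this reports only gz, matching the "last occurrence of the dot" convention; nested suffixes, and content-based identification of the kind done by the Unix file command, have to be handled separately.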
Technology
Data storage and memory
null
148069
https://en.wikipedia.org/wiki/Drosera
Drosera
Drosera, which is commonly known as the sundews, is one of the largest genera of carnivorous plants, with at least 194 species. These members of the family Droseraceae lure, capture, and digest insects using stalked mucilaginous glands covering their leaf surfaces. The insects are used to supplement the poor mineral nutrition of the soil in which the plants grow. Various species, which vary greatly in size and form, are native to every continent except Antarctica. Charles Darwin performed much of the early research into Drosera, engaging in a long series of experiments with Drosera rotundifolia which were the first to confirm carnivory in plants. In an 1860 letter, Darwin wrote, “…at the present moment, I care more about Drosera than the origin of all the species in the world.” Taxonomy The botanical name from the Greek drosos "dew, dewdrops" refer to the glistening drops of mucilage at the tip of the glandular trichomes that resemble drops of morning dew. The English common name sundew also describes this, derived from Latin ros solis meaning "dew of the sun". The Principia Botanica, published in 1787, states “Sun-dew (Drosera) derives its name from small drops of a liquor-like dew, hanging on its fringed leaves, and continuing in the hottest part of the day, exposed to the sun.” Phylogenetics The unrooted cladogram to the right shows the relationship between various subgenera and classes as defined by the analysis of Rivadavia et al. The monotypic section Meristocaulis was not included in the study, so its place in this system is unclear. More recent studies have placed this group near section Bryastrum, so it is placed there below. Also of note, the placement of the section Regiae in relation to Aldrovanda and Dionaea is uncertain. Since the section Drosera is polyphyletic, it shows up multiple times in the cladogram (*). This phylogenetic study has made the need for a revision of the genus even clearer. Description Sundews are perennial (or rarely annual) herbaceous plants, forming prostrate or upright rosettes between in height, depending on the species. Climbing species form scrambling stems which can reach much longer lengths, up to in the case of D. erythrogyne. Sundews have been shown to be able to achieve a lifespan of 50 years. The genus is specialized for nutrient uptake through its carnivorous behavior, for example the pygmy sundew is missing the enzymes (nitrate reductase, in particular) that plants normally use for the uptake of earth-bound nitrates. Growth Form The genus can be divided into several habits, or growth forms: Temperate sundews: These species form a tight cluster of unfurled leaves called a hibernaculum in a winter dormancy period (= Hemicryptophyte). All of the North American and European species belong to this group. Drosera arcturi from Australia (including Tasmania) and New Zealand is another temperate species that dies back to a horn-shaped hibernaculum. Subtropical sundews: These species maintain vegetative growth year-round under uniform or nearly uniform climatic conditions. Pygmy sundews: A group of roughly 40 Australian species, they are distinguished by miniature growth, the formation of gemmae for asexual reproduction, and dense formation of hairs in the crown center. These hairs serve to protect the plants from Australia's intense summer sun. Pygmy sundews form the subgenus Bryastrum. Tuberous sundews: These nearly 50 Australian species form an underground tuber to survive the extremely dry summers of their habitat, re-emerging in the autumn. 
These so-called tuberous sundews can be further divided into two groups, those that form rosettes and those that form climbing or scrambling stems. Tuberous sundews comprise the subgenus Ergaleium. Petiolaris complex: A group of tropical Australian species, they live in constantly warm but sometimes wet conditions. Several of the 14 species that comprise this group have developed special strategies to cope with the alternately drier conditions. Many species, for example, have petioles densely covered in trichomes, which maintain a sufficiently humid environment and serve as an increased condensation surface for morning dew. The Petiolaris complex comprises the subgenus Lasiocephala. Although they do not form a single strictly defined growth form, a number of species are often put together in a further group: Queensland sundews: A small group of three species (D. adelae, D. schizandra and D. prolifera), all are native to highly humid habitats in the dim understories of the Australian rainforest. Leaves and Entomophagy Sundews are characterised by the glandular tentacles, topped with sticky secretions, that cover their leaves. The trapping and digestion mechanism usually employs two types of glands: stalked glands that secrete sweet mucilage to attract and ensnare insects and enzymes to digest them, and sessile glands that absorb the resulting nutrient soup (the latter glands are missing in some species, such as D. erythrorhiza). Small prey, mainly consisting of insects, are attracted by the sweet secretions of the peduncular glands. Upon touching these, the prey become entrapped by sticky mucilage which prevents their progress or escape. Eventually, the prey either succumb to death through exhaustion or through asphyxiation as the mucilage envelops them and clogs their spiracles. Death usually occurs within 15 minutes. The plant meanwhile secretes esterase, peroxidase, phosphatase and protease enzymes. These enzymes dissolve the insect and free the nutrients contained within it. This nutrient mixture is then absorbed through the leaf surfaces to be used by the rest of the plant. All species of sundew are able to move their tentacles in response to contact with edible prey. The tentacles are extremely sensitive and will bend toward the center of the leaf to bring the insect into contact with as many stalked glands as possible. According to Charles Darwin, the contact of the legs of a small gnat with a single tentacle is enough to induce this response. This response to touch is known as thigmonasty, and is quite rapid in some species. The outer tentacles (recently coined as "snap-tentacles") of D. burmanni and D. sessilifolia can bend inwards toward prey in a matter of seconds after contact, while D. glanduligera is known to bend these tentacles in toward prey in tenths of a second. In addition to tentacle movement, some species are able to bend their leaves to various degrees to maximize contact with the prey. Of these, D. capensis exhibits what is probably the most dramatic movement, curling its leaf completely around prey in 30 minutes. Some species, such as D. filiformis, are unable to bend their leaves in response to prey. A further type of (mostly strong red and yellow) leaf coloration has recently been discovered in a few Australian species (D. hartmeyerorum, D. indica). Their function is not known yet, although they may help in attracting prey. The leaf morphology of the species within the genus is extremely varied, ranging from the sessile ovate leaves of D. 
erythrorhiza to the bipinnately divided acicular leaves of D. binata. While the exact physiological mechanism of the sundew's carnivorous response is not yet known, some studies have begun to shed light on how the plant is able to move in response to mechanical and chemical stimulation to envelop and digest prey. Individual tentacles, when mechanically stimulated, fire action potentials that terminate near the base of the tentacle, resulting in rapid movement of the tentacle towards the center of the leaf. This response is more prominent when marginal tentacles further away from the leaf center are stimulated. The tentacle movement response is achieved through auxin-mediated acid growth. When action potentials reach their target cells, the plant hormone auxin causes protons (H+ ions) to be pumped out of the plasma membrane into the cell wall, thereby reducing the pH and making the cell wall more acidic. The resulting reduction in pH causes the relaxation of the cell wall protein, expansin, and allows for an increase in cell volume via osmosis and turgor. As a result of differential cell growth rates, the sundew tentacles are able to achieve movement towards prey and the leaf center through the bending caused by expanding cells. Among some drosera species, a second bending response occurs in which non-local, distant tentacles bend towards prey as well as the bending of the entire leaf blade to maximize contact with prey. While mechanical stimulation is sufficient to achieve a localized tentacle bend response, both mechanical and chemical stimuli are required for the secondary bending response to occur. Flowers and fruit The flowers of sundews, as with nearly all carnivorous plants, are held far above the leaves by a long stem. This physical isolation of the flower from the traps is commonly thought to be an adaptation meant to avoid trapping potential pollinators. The mostly unforked inflorescences are spikes, whose flowers open one at a time and usually only remain open for a short period. Flowers open in response to light intensity (often opening only in direct sunlight), and the entire inflorescence is also heliotropic, moving in response to the sun's position in the sky. The radially symmetrical (actinomorphic) flowers are always perfect and have five parts (the exceptions to this rule are the four-petaled D. pygmaea and the eight to 12-petaled D. heterophylla). Most of the species have small flowers (<1.5 cm or 0.6 in). A few species, however, such as D. regia and D. cistiflora, have flowers or more in diameter. In general, the flowers are white or pink. Australian species display a wider range of colors, including orange (D. callistos), red (D. adelae), yellow (D. zigzagia) or metallic violet (D. microphylla). The ovary is superior and develops into a dehiscent seed capsule bearing numerous tiny seeds. The pollen grain type is compound, which means four microspores (pollen grains) are stuck together with a protein called callose. Roots The root systems of most Drosera are often only weakly developed or have lost their original functions. They are relatively useless for nutrient uptake, and they serve mainly to absorb water and to anchor the plant to the ground; they have long hairs. A few South African species use their roots for water and food storage. Some species have wiry root systems that remain during frosts if the stem dies. Some species, such as D. adelae and D. hamiltonii, use their roots for asexual propagation, by sprouting plantlets along their length. 
Some Australian species form underground corms for this purpose, which also serve to allow the plants to survive dry summers. The roots of pygmy sundews are often extremely long in proportion to their size, with a 1-cm (0.4-in) plant extending roots over beneath the soil surface. Some pygmy sundews, such as D. lasiantha and D. scorpioides, also form adventitious roots as supports. D. intermedia and D. rotundifolia have been reported to form arbuscular mycorrhizae, which penetrate the plant's tissues, they also host fungi like endophytes to collect nutrients when they grow in poor soil and form symbiotic relationships. Reproduction Many species of sundews are self-fertile; their flowers will often self-pollinate upon closing. Often, numerous seeds are produced. The tiny black seeds germinate in response to moisture and light, while seeds of temperate species also require cold, damp, stratification to germinate. Seeds of the tuberous species require a hot, dry summer period followed by a cool, moist winter to germinate. Vegetative reproduction occurs naturally in some species that produce stolons or when roots come close to the surface of the soil. Older leaves that touch the ground may sprout plantlets. Pygmy sundews reproduce asexually using specialized scale-like leaves called gemmae. Tuberous sundews can produce offsets from their corms. In culture, sundews can often be propagated through leaf, crown, or root cuttings, as well as through seeds. Distribution The range of the sundew genus stretches from Alaska in the north to New Zealand in the south. The centers of diversity are Australia, with roughly 50% of all known species, and South America and southern Africa, each with more than 20 species. A few species are also found in large parts of Eurasia and North America. These areas, however, can be considered to form the outskirts of the generic range, as the ranges of sundews do not typically approach temperate or Arctic areas. Contrary to previous supposition, the evolutionary speciation of this genus is no longer thought to have occurred with the breakup of Gondwana through continental drift. Rather, speciation is now thought to have occurred as a result of a subsequent wide dispersal of its range. The origins of the genus are thought to have been in Africa or Australia. Europe is home to only three species: D. intermedia, D. anglica, and D. rotundifolia. Where the ranges of the two latter species overlap, they sometimes hybridize to form the sterile D. × obovata. In addition to the three species and the hybrid native to Europe, North America is also home to four additional species; D. brevifolia is a small annual native to coastal states from Texas to Virginia, while D. capillaris, a slightly larger plant with a similar range, is also found in areas of the Caribbean. The third species, D. linearis, is native to the northern United States and southern Canada. D. filiformis has two subspecies native to the East Coast of North America, the Gulf Coast, and the Florida panhandle. This genus is often described as cosmopolitan, meaning it has worldwide distribution. The botanist Ludwig Diels, author of the only monograph of the family to date, called this description an "arrant misjudgment of this genus' highly unusual distributional circumstances (arge Verkennung ihrer höchst eigentümlichen Verbreitungsverhältnisse)", while admitting sundew species do "occupy a significant part of the Earth's surface (einen beträchtlichen Teil der Erdoberfläche besetzt)". 
He particularly pointed to the absence of Drosera species from almost all arid climate zones, countless rainforests, the American Pacific Coast, Polynesia, the Mediterranean region, and North Africa, as well as the scarcity of species diversity in temperate zones, such as Europe and North America. Habitat Sundews generally grow in seasonally moist or more rarely constantly wet habitats with acidic soils and high levels of sunlight. Common habitats include bogs, fens, swamps, marshes, the tepuis of Venezuela, the wallums of coastal Australia, the fynbos of South Africa, and moist streambanks. Many species grow in association with sphagnum moss, which absorbs much of the soil's nutrient supply and also acidifies the soil, making nutrients less available to plant life. This allows sundews, which do not rely on soil-bound nutrients, to flourish where more dominating vegetation would usually outcompete them. The genus, though, is very variable in terms of habitat. Individual sundew species have adapted to a wide variety of environments, including atypical habitats, such as rainforests, deserts (D. burmanni and D. indica), and even highly shaded environments (Queensland sundews). The temperate species, which form hibernacula in the winter, are examples of such adaptation to habitats; in general, sundews tend to inhabit warm climates, and are only moderately frost-resistant. Conservation status Protection of the genus varies between countries. None of the Drosera species in the United States are federally protected. Some are listed as threatened or endangered at the state level, but this gives little protection to lands under private ownership. Many of the remaining native populations are located on protected land, such as national parks or wildlife preserves. Drosera species are protected by law in many European countries, such as Germany, Austria, Switzerland, the Czech Republic, Finland, Hungary, France, and Bulgaria. In Australia, they are listed as "threatened". In South America and the Caribbean, Drosera species in a number of areas are considered critical, endangered or vulnerable, while other areas have not been surveyed. At the same time that species are at risk in South Africa, new species continue to be discovered in the Western Cape and Madagascar. Worldwide, Drosera are at risk of extinction due to the destruction of natural habitat through urban and agricultural development. They are also threatened by the illegal collection of wild plants for the horticultural trade. An additional risk is environmental change, because species are often specifically adapted to a precise location and set of conditions. Currently, the largest threat in Europe and North America is loss of wetland habitat. Causes include urban development and the draining of bogs for agricultural uses and peat harvesting. Such threats have led to the extirpation of some species from parts of their former range. Reintroduction of plants into such habitats is usually difficult or impossible, as the ecological needs of certain populations are closely tied to their geographical location. Increased legal protection of bogs and moors, and a concentrated effort to renaturalize such habitats, are possible ways to combat threats to Drosera plants' survival. As part of the landscape, sundews are often overlooked or not recognized at all. In South Africa and Australia, two of the three centers of species diversity, the natural habitats of these plants are undergoing a high degree of pressure from human activities. 
The African sundews D. insolita and D. katangensis are listed as critically endangered by the International Union for Conservation of Nature (IUCN), while D. bequaertii is listed as vulnerable. Expanding population centers such as Queensland, Perth, and Cape Town, and the draining of moist areas for agriculture and forestry in rural areas threaten many such habitats. The droughts that have been sweeping Australia in the 21st century pose a threat to many species by drying up previously moist areas. Those species endemic to a very limited area are often most threatened by the collection of plants from the wild. D. madagascariensis is considered endangered in Madagascar because of the large-scale removal of plants from the wild for exportation; 10 - 200 million plants are harvested for commercial medicinal use annually. Gallery of prey Uses Traditional medicine The Zafimaniry people in central Madagascar have been using Drosera madagascariensis as a remedy for dysentery and fever. In Western medicine, sundews were used as medicinal herbs as early as the 12th century, when an Italian doctor from the School of Salerno, Matthaeus Platearius, described the plant as an herbal remedy for coughs under the name herba sole. Culbreth's 1927 Materia Medica listed D. rotundifolia, D. anglica and D. linearis as being used as stimulants and expectorants, and "of doubtful efficacy" for treating bronchitis, whooping cough, and tuberculosis. Sundew tea was recommended by herbalists for dry coughs, bronchitis, whooping cough, asthma and "bronchial cramps". The French Pharmacopoeia of 1965 listed sundew for the treatment of inflammatory diseases such as asthma, chronic bronchitis and whooping cough. Drosera has been used commonly in cough preparations in Germany and elsewhere in Europe. In traditional medicine practices, Drosera is used to treat ailments such as asthma, coughs, lung infections, and stomach ulcers. Herbal preparations are primarily made using the roots, flowers, and fruit-like capsules. Since all native sundews species are protected in many parts of Europe and North America, extracts are usually prepared using cultivated fast-growing sundews (specifically D. rotundifolia, D. intermedia, D. anglica, D. ramentacea and D. madagascariensis) or from plants collected and imported from Madagascar, Spain, France, Finland and the Baltics. Sundews are historically mentioned as an aphrodisiac (hence the common name lustwort). They are mentioned as a folk remedy for treatment of warts, corns, and freckles. As ornamental plants Because of their carnivorous nature and the beauty of their glistening traps, sundews have become favorite ornamental plants; however, the environmental requirements of most species are relatively stringent and can be difficult to meet in cultivation. As a result, most species are unavailable commercially. A few of the hardiest varieties, however, have made their way into the mainstream nursery business and can often be found for sale next to Venus flytraps. These most often include D. capensis, D. aliciae, and D. spatulata. Cultivation requirements vary greatly by species. In general, though, sundews require high environmental moisture content, usually in the form of a constantly moist or wet soil substrate. Most species also require this water to be pure, as nutrients, salts, or minerals in their soil can stunt their growth or even kill them. 
Commonly, plants are grown in a soil substrate containing some combination of dead or live sphagnum moss, sphagnum peat moss, sand, and/or perlite, and are watered with distilled, reverse osmosis, or rain water. Nano-biotechnology The mucilage produced by Drosera has remarkable elastic properties and has made this genus a very attractive subject in biomaterials research. In one recent study, the adhesive mucilages of three species (D. binata, D. capensis, and D. spatulata) were analyzed for nanofiber and nanoparticle content. Using atomic force microscopy, transmission electron microscopy, and energy-dispersive X-ray spectroscopy, researchers were able to observe networks of nanofibers and nanoparticles of various sizes within the mucilage residues. In addition, calcium, magnesium, and chlorine – key components of biological salts - were identified. These nanoparticles are theorized to increase the viscosity and stickiness of the mucilage, in turn increasing the effectiveness of the trap. More importantly for biomaterials research, however, is the fact that, when dried, the mucin provides a suitable substrate for the attachment of living cells. This has important implications for tissue engineering, especially because of the elastic qualities of the adhesive. Essentially, a coating of Drosera mucilage on a surgical implant, such as a replacement hip or an organ transplant, could drastically improve the rate of recovery and decrease the potential for rejection, because living tissue can effectively attach and grow on it. The authors also suggest a wide variety of applications for Drosera mucin, including wound treatment, regenerative medicine, or enhancing synthetic adhesives. Because this mucilage can stretch to nearly a million times its original size and is readily available for use, it can be an extremely cost-efficient source of biomaterial. Other uses The corms of the tuberous sundews native to Australia are considered a delicacy by the Indigenous Australians. Some of these corms were also used to dye textiles, while another purple or yellow dye was traditionally prepared in the Scottish Highlands using D. rotundifolia. A sundew liqueur is also still produced using a recipe from the 14th century. It is made using fresh leaves from mainly D. capensis, D. spatulata, and D. rotundifolia. Chemical constituents Several chemical compounds with potential biological activities are found in sundews, including flavonoids (kaempferol, myricetin, quercetin and hyperoside), quinones (plumbagin, hydroplumbagin glucoside and rossoliside (7–methyl–hydrojuglone–4–glucoside)), and other constituents such as carotenoids, plant acids (e.g. butyric acid, citric acid, formic acid, gallic acid, malic acid, propionic acid), resin, tannins and ascorbic acid (vitamin C).
Biology and health sciences
Caryophyllales
Plants
148091
https://en.wikipedia.org/wiki/Somali%20cat
Somali cat
The Somali cat is genetically similar to the Abyssinian cat. Because they inherit two copies of the recessive gene for long hair, Somalis have a characteristically luscious coat, unlike their cousin the Abyssinian. History In the 1940s, a British breeder named Janet Robertson exported some Abyssinian kittens to Australia, New Zealand and North America. Descendants of these cats occasionally produced kittens with long or fuzzy coats. In 1963, Mary Mailing, a breeder from Canada, entered one into a local pet show. Ken McGill, the show's judge, asked for one for breeding purposes. The first known long-haired Abyssinian, named 'Raby Chuffa of Selene', appeared in North America in 1953. Breeders assume that the long-haired gene was passed down through his ancestry. Most breeders were appalled by the sudden difference in appearance in their litters and refused to mention them. However, some breeders were intrigued and continued to breed the long-haired Abyssinian. At first, other Abyssinian breeders looked down upon the new development of the Somali and refused to associate them with the Abyssinian. They worked hard to keep the long-haired gene out of their own cats. An American Abyssinian breeder, Evelyn Mague, also received longhairs from her cats, which she named "Somalis". Mague put out a call for other cats to breed with her own long-haired Abyssinians and found that many other breeders internationally had already been breeding long-haired Abyssinians for several years. Don Richings, another Canadian breeder, used kittens from McGill, and began to work with Mague. The first Somali recognized as such by a fancier organization was Mayling Tutsuta, one of McGill's cats. In 1979, the breed was recognized by the CFA in North America. The new breed was accepted in Europe in 1982. By 1991, the breed was broadly (though not universally) accepted internationally. The name "Somali" is a reference to the African nation of Somalia. Somalia borders Abyssinia, which is modern-day Ethiopia. The name of the breed is a unique interpretation of the Ethiopian-Somali conflict; Mague charitably assumed that since the land borders were a human creation, so were the genetic borders between the Abyssinian cat and the long-haired Abyssinian. Mague also founded the Somali Cat Club of America, which included members from Canada as well. The SCCA worked to have the breed granted championship status by the CFA, which occurred in 1979. In 1975, the CFA founded the International Somali Cat Club. Appearance Description Somalis are recognised for their energetic and social nature. Their appearance, with sleek bodies, long tails, and large pointed ears, has earned them the nickname of "Fox Cat". Their ticked coats, with between four and twenty colours on each hair, are very fine in texture, making them softer to the touch than the coats of other cat breeds. The cat itself is medium-large in size. Within the GCCF, short-haired Somalis are recognised separately from Abyssinian cats. Colours and patterns All Somali cats have a ticked tabby pattern. The usual or ruddy coloured Somali has a golden brown ground colour ticked with black, so the official genetic term is black ticked tabby. The coat colour names in Somalis refer to the ticking colour. There are 28 colours of Somali in total, although certain organisations accept only some of these colours. All organisations that register Somalis permit usual (genetically black, a.k.a. ruddy or tawny in Somalis), blue, sorrel (genetically cinnamon, a.k.a. red in Somalis), and fawn.
Most clubs also recognise usual/ruddy silver, blue silver, sorrel/red silver, and fawn silver. Other colours that may be accepted by some registries include chocolate, lilac, red, cream, usual-tortie, blue-tortie, sorrel-tortie, fawn-tortie, chocolate-tortie, lilac-tortie, and silver variants of these (e.g. blue-tortie silver). Health The Somali is one of the breeds more commonly affected by pyruvate kinase deficiency. An autosomal recessive mutation of the PKLR gene is responsible for the condition in the breed. Somalis may also have hereditary retinal degeneration due to a mutation in the rdAc allele. This mutation is also seen in Abyssinians, Siamese cats, and other related breeds. Coat colour overview
Biology and health sciences
Cats
Animals
148285
https://en.wikipedia.org/wiki/64-bit%20computing
64-bit computing
In computer architecture, 64-bit integers, memory addresses, or other data units are those that are 64 bits wide. Also, 64-bit central processing units (CPU) and arithmetic logic units (ALU) are those that are based on processor registers, address buses, or data buses of that size. A computer that uses such a processor is a 64-bit computer. From the software perspective, 64-bit computing means the use of machine code with 64-bit virtual memory addresses. However, not all 64-bit instruction sets support full 64-bit virtual memory addresses; x86-64 and AArch64, for example, support only 48 bits of virtual address, with the remaining 16 bits of the virtual address required to be all zeros (000...) or all ones (111...), and several 64-bit instruction sets support fewer than 64 bits of physical memory address. The term 64-bit also describes a generation of computers in which 64-bit processors are the norm. 64 bits is a word size that defines certain classes of computer architecture, buses, memory, and CPUs and, by extension, the software that runs on them. 64-bit CPUs have been used in supercomputers since the 1970s (Cray-1, 1975) and in reduced instruction set computer (RISC) based workstations and servers since the early 1990s. In 2003, 64-bit CPUs were introduced to the mainstream PC market in the form of x86-64 processors and the PowerPC G5. A 64-bit register can hold any of 2⁶⁴ (over 18 quintillion, or 1.8×10¹⁹) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used. With the two most common representations, the range is 0 through 18,446,744,073,709,551,615 (equal to 2⁶⁴ − 1) for representation as an (unsigned) binary number, and −9,223,372,036,854,775,808 (−2⁶³) through 9,223,372,036,854,775,807 (2⁶³ − 1) for representation as two's complement. Hence, a processor with 64-bit memory addresses can directly access 2⁶⁴ bytes (16 exabytes or EB) of byte-addressable memory. With no further qualification, a 64-bit computer architecture generally has integer and addressing registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses. However, a CPU might have external data buses or address buses with different sizes from the registers, even larger (the 32-bit Pentium had a 64-bit data bus, for instance). Architectural implications Processor registers are typically divided into several groups: integer, floating-point, single instruction, multiple data (SIMD), control, and often special registers for address arithmetic which may have various uses and names such as address, index, or base registers. However, in modern designs, these functions are often performed by more general-purpose integer registers. In most processors, only integer or address registers can be used to address data in memory; the other types of registers cannot. The size of these registers therefore normally limits the amount of directly addressable memory, even if there are registers, such as floating-point registers, that are wider. Most high-performance 32-bit and 64-bit processors (some notable exceptions are older or embedded ARM architecture (ARM) and 32-bit MIPS architecture (MIPS) CPUs) have integrated floating-point hardware, which is often, but not always, based on 64-bit units of data.
For example, although the x86/x87 architecture has instructions able to load and store 64-bit (and 32-bit) floating-point values in memory, the internal floating-point data and register format is 80 bits wide, while the general-purpose registers are 32 bits wide. In contrast, the 64-bit Alpha family uses a 64-bit floating-point data and register format, and 64-bit integer registers. History Many computer instruction sets are designed so that a single integer register can store the memory address to any location in the computer's physical or virtual memory. Therefore, the total number of addresses to memory is often determined by the width of these registers. The IBM System/360 of the 1960s was an early 32-bit computer; it had 32-bit integer registers, although it only used the low-order 24 bits of a word for addresses, resulting in a 16 MiB address space. 32-bit superminicomputers, such as the DEC VAX, became common in the 1970s, and 32-bit microprocessors, such as the Motorola 68000 family and the 32-bit members of the x86 family starting with the Intel 80386, appeared in the mid-1980s, making 32 bits something of a de facto consensus as a convenient register size. A 32-bit address register meant that 2³² addresses, or 4 GB of random-access memory (RAM), could be referenced. When these architectures were devised, 4 GB of memory was so far beyond the typical amounts (4 MiB) in installations that this was considered to be enough headroom for addressing. 4.29 billion addresses were considered an appropriate size to work with for another important reason: 4.29 billion integers are enough to assign unique references to most entities in applications like databases. Some supercomputer architectures of the 1970s and 1980s, such as the Cray-1, used registers up to 64 bits wide, and supported 64-bit integer arithmetic, although they did not support 64-bit addressing. In the mid-1980s, Intel i860 development began, culminating in a 1989 release; the i860 had 32-bit integer registers and 32-bit addressing, so it was not a fully 64-bit processor, although its graphics unit supported 64-bit integer arithmetic. However, 32 bits remained the norm until the early 1990s, when the continual reductions in the cost of memory led to installations with amounts of RAM approaching 4 GB, and the use of virtual memory spaces exceeding the 4 GB ceiling became desirable for handling certain types of problems. In response, MIPS and DEC developed 64-bit microprocessor architectures, initially for high-end workstation and server machines. By the mid-1990s, HAL Computer Systems, Sun Microsystems, IBM, Silicon Graphics, and Hewlett-Packard had developed 64-bit architectures for their workstation and server systems. A notable exception to this trend was the mainframes from IBM, which then used 32-bit data and 31-bit address sizes; the IBM mainframes did not include 64-bit processors until 2000. During the 1990s, several low-cost 64-bit microprocessors were used in consumer electronics and embedded applications. Notably, the Nintendo 64 and the PlayStation 2 had 64-bit microprocessors before their introduction in personal computers. High-end printers, network equipment, and industrial computers also used 64-bit microprocessors, such as the Quantum Effect Devices R5000.
64-bit computing started to trickle down to the personal computer desktop from 2003 onward, when some models in Apple's Macintosh lines switched to PowerPC 970 processors (termed G5 by Apple), and Advanced Micro Devices (AMD) released its first 64-bit x86-64 processor. Physical memory eventually caught up with 32-bit limits: by 2023, laptop computers were commonly equipped with 16 GB of memory, and servers with 64 GB or more, greatly exceeding the 4 GB address capacity of 32 bits. 64-bit data timeline 1961 IBM delivers the IBM 7030 Stretch supercomputer, which uses 64-bit data words and 32- or 64-bit instruction words. 1974 Control Data Corporation launches the CDC Star-100 vector supercomputer, which uses a 64-bit word architecture (prior CDC systems were based on a 60-bit architecture). International Computers Limited launches the ICL 2900 Series with 32-bit, 64-bit, and 128-bit two's complement integers; 64-bit and 128-bit floating point; 32-bit, 64-bit, and 128-bit packed decimal; and a 128-bit accumulator register. The architecture has survived through a succession of ICL and Fujitsu machines. The latest is the Fujitsu Supernova, which emulates the original environment on 64-bit Intel processors. 1976 Cray Research delivers the first Cray-1 supercomputer, which is based on a 64-bit word architecture and will form the basis for later Cray vector supercomputers. 1983 Elxsi launches the Elxsi 6400 parallel minisupercomputer. The Elxsi architecture has 64-bit data registers but a 32-bit address space. 1989 Intel introduces the Intel i860 reduced instruction set computer (RISC) processor. Marketed as a "64-Bit Microprocessor", it had essentially a 32-bit architecture, enhanced with a 3D graphics unit capable of 64-bit integer operations. 1993 Atari introduces the Atari Jaguar video game console, which includes some 64-bit wide data paths in its architecture. 64-bit address timeline 1991 MIPS Computer Systems produces the first 64-bit microprocessor, the R4000, which implements the MIPS III architecture, the third revision of its MIPS architecture. The CPU is used in SGI graphics workstations starting with the IRIS Crimson. Kendall Square Research delivers its first KSR1 supercomputer, based on a proprietary 64-bit RISC processor architecture running OSF/1. 1992 Digital Equipment Corporation (DEC) introduces the pure 64-bit Alpha architecture, which was born from the PRISM project. 1994 Intel announces plans for the 64-bit IA-64 architecture (jointly developed with Hewlett-Packard) as a successor to its 32-bit IA-32 processors. A 1998 to 1999 launch date was targeted. 1995 Sun launches a 64-bit SPARC processor, the UltraSPARC. Fujitsu-owned HAL Computer Systems launches workstations based on a 64-bit CPU, HAL's independently designed first-generation SPARC64. IBM releases the A10 and A30 microprocessors, the first 64-bit PowerPC AS processors. IBM also releases a 64-bit AS/400 system upgrade, which can convert the operating system, database and applications. 1996 Nintendo introduces the Nintendo 64 video game console, built around a low-cost variant of the MIPS R4000. HP releases the first implementation of its 64-bit PA-RISC 2.0 architecture, the PA-8000. 1998 IBM releases the POWER3 line of full-64-bit PowerPC/POWER processors. 1999 Intel releases the instruction set for the IA-64 architecture. AMD publicly discloses its set of 64-bit extensions to IA-32, called x86-64 (later branded AMD64). 2000 IBM ships its first 64-bit z/Architecture mainframe, the zSeries z900.
z/Architecture is a 64-bit version of the 32-bit ESA/390 architecture, a descendant of the 32-bit System/360 architecture. 2001 Intel ships its IA-64 processor line, after repeated delays in getting to market. Now branded Itanium and targeting high-end servers, the line's sales fail to meet expectations. 2003 AMD introduces its Opteron and Athlon 64 processor lines, based on its AMD64 architecture, the first x86-based 64-bit processor architecture. Apple also ships the 64-bit "G5" PowerPC 970 CPU produced by IBM. Intel maintains that its Itanium chips will remain its only 64-bit processors. 2004 Intel, reacting to the market success of AMD, admits it has been developing a clone of the AMD64 extensions named IA-32e (later renamed EM64T, then yet again renamed to Intel 64). Intel ships updated versions of its Xeon and Pentium 4 processor families supporting the new 64-bit instruction set. VIA Technologies announces the Isaiah 64-bit processor. 2006 Sony, IBM, and Toshiba begin manufacturing the 64-bit Cell processor for use in the PlayStation 3, servers, workstations, and other appliances. Intel releases the Core 2 Duo, the first mainstream x86-64 processor for its mobile, desktop, and workstation lines. Earlier 64-bit extension processor lines had not been widely available in the consumer retail market (most 64-bit Pentium 4 and Pentium D chips were sold to OEMs); the 64-bit Pentium 4, Pentium D, and Celeron did not enter mass production until late 2006 because of poor yields (most good-yield wafers were reserved for server and mainframe parts, while the mainstream remained a 130 nm 32-bit processor line until 2006), and they quickly became low-end parts after the Core 2 debuted. AMD releases its first 64-bit mobile processor, manufactured in a 90 nm process. 2011 ARM Holdings announces ARMv8-A, the first 64-bit version of the ARM architecture family. 2012 ARM Holdings announces its Cortex-A53 and Cortex-A57 cores, its first cores based on its 64-bit architecture, on 30 October 2012. 2013 Apple announces the iPhone 5S, the first smartphone with a 64-bit processor, its A7 ARMv8-A-based system-on-a-chip, alongside the iPad Air and iPad Mini 2, the first tablets with 64-bit processors. 2014 Google announces the Nexus 9 tablet, the first Android device to run on the 64-bit Tegra K1 chip. 2015 Apple announces the iPod Touch (6th generation), the first iPod Touch to use the 64-bit A8 ARMv8-A-based system-on-a-chip, alongside the Apple TV (4th generation), the first Apple TV with a 64-bit processor. 2018 Apple announces the Apple Watch Series 4, the first Apple Watch to use the 64-bit S4 ARMv8-A-based system-on-a-chip. 2020 Synopsys announces the ARCv3 ISA, the first 64-bit version of the ARC ISA. 64-bit operating system timeline 1985 Cray releases UNICOS, the first 64-bit implementation of the Unix operating system. 1993 DEC releases the 64-bit DEC OSF/1 AXP Unix-like operating system (later renamed Tru64 UNIX) for its systems based on the Alpha architecture. 1994 Support for the R8000 processor is added by Silicon Graphics to the IRIX operating system in release 6.0. 1995 DEC releases OpenVMS 7.0, the first full 64-bit version of OpenVMS for Alpha. The first 64-bit Linux distribution for the Alpha architecture is released. 1996 Support for the R4x00 processors in 64-bit mode is added by Silicon Graphics to the IRIX operating system in release 6.2. 1998 Sun releases Solaris 7, with full 64-bit UltraSPARC support.
2000 IBM releases z/OS, a 64-bit operating system descended from MVS, for the new zSeries 64-bit mainframes; 64-bit Linux on z Systems follows the CPU release almost immediately. 2001 Linux becomes the first OS kernel to fully support x86-64 (on a simulator, as no x86-64 processors had been released yet). 2001 Microsoft releases Windows XP 64-Bit Edition for the Itanium's IA-64 architecture; it could run 32-bit applications through an execution layer. 2003 Apple releases its Mac OS X 10.3 "Panther" operating system which adds support for native 64-bit integer arithmetic on PowerPC 970 processors. Several Linux distributions release with support for AMD64. FreeBSD releases with support for AMD64. 2005 On January 4, Microsoft discontinues Windows XP 64-Bit Edition, as no PCs with IA-64 processors had been available since the previous September, and announces that it is developing x86-64 versions of Windows to replace it. On January 31, Sun releases Solaris 10 with support for AMD64 and EM64T processors. On April 29, Apple releases Mac OS X 10.4 "Tiger" which provides limited support for 64-bit command-line applications on machines with PowerPC 970 processors; later versions for Intel-based Macs supported 64-bit command-line applications on Macs with EM64T processors. On April 30, Microsoft releases Windows XP Professional x64 Edition and Windows Server 2003 x64 Edition for AMD64 and EM64T processors. 2006 Microsoft releases Windows Vista, including a 64-bit version for AMD64/EM64T processors that retains 32-bit compatibility. In the 64-bit version, all Windows applications and components are 64-bit, although many also have their 32-bit versions included for compatibility with plug-ins. 2007 Apple releases Mac OS X 10.5 "Leopard", which fully supports 64-bit applications on machines with PowerPC 970 or EM64T processors. 2009 Microsoft releases Windows 7, which, like Windows Vista, includes a full 64-bit version for AMD64/Intel 64 processors; most new computers are loaded by default with a 64-bit version. Microsoft also releases Windows Server 2008 R2, which is the first 64-bit only server operating system. Apple releases Mac OS X 10.6, "Snow Leopard", which ships with a 64-bit kernel for AMD64/Intel64 processors, although only certain recent models of Apple computers will run the 64-bit kernel by default. Most applications bundled with Mac OS X 10.6 are now also 64-bit. 2011 Apple releases Mac OS X 10.7, "Lion", which runs the 64-bit kernel by default on supported machines. Older machines that are unable to run the 64-bit kernel run the 32-bit kernel, but, as with earlier releases, can still run 64-bit applications; Lion does not support machines with 32-bit processors. Nearly all applications bundled with Mac OS X 10.7 are now also 64-bit, including iTunes. 2012 Microsoft releases Windows 8 which supports UEFI Class 3 (UEFI without CSM) and Secure Boot. 2013 Apple releases iOS 7, which, on machines with AArch64 processors, has a 64-bit kernel that supports 64-bit applications. 2014 Google releases Android Lollipop, the first version of the Android operating system with support for 64-bit processors. 2017 Apple releases iOS 11, supporting only machines with AArch64 processors. It has a 64-bit kernel that only supports 64-bit applications. 32-bit applications are no longer compatible. 2018 Apple releases watchOS 5, the first watchOS version to bring the 64-bit support. 2019 Apple releases macOS 10.15 "Catalina", dropping support for 32-bit Intel applications. 
2021 Microsoft releases Windows 11 on October 5, which only supports 64-bit systems, dropping support for IA-32 and AArch32 systems. 2022 Google releases the Pixel 7, which drops support for 32-bit applications. Apple releases watchOS 9, the first watchOS version to run exclusively on the Apple Watch models with 64-bit processors (including Apple Watch Series 4 or newer, Apple Watch SE (1st generation) or newer and the newly introduced Apple Watch Ultra), dropping support for the Apple Watch Series 3 as the final Apple Watch model with a 32-bit processor. 2024 Microsoft releases the Windows 11 2024 Update, whose ARM versions drop support for 32-bit ARM applications. Limits of processors In principle, a 64-bit microprocessor can address 16 EB of memory. However, not all instruction sets, and not all processors implementing those instruction sets, support a full 64-bit virtual or physical address space. The x86-64 architecture allows 48 bits for virtual memory and, for any given processor, up to 52 bits for physical memory. These limits allow memory sizes of 256 TB and 4 PB, respectively. A PC cannot currently contain 4 petabytes of memory (due to the physical size of the memory chips), but AMD envisioned large servers, shared memory clusters, and other uses of physical address space that might approach this in the foreseeable future. Thus the 52-bit physical address provides ample room for expansion while not incurring the cost of implementing full 64-bit physical addresses. Similarly, the 48-bit virtual address space was designed to provide 65,536 (2¹⁶) times the 32-bit limit of 4 GB, allowing room for later expansion and incurring no overhead of translating full 64-bit addresses. The Power ISA v3.0 allows 64 bits for an effective address, mapped to a segmented address with between 65 and 78 bits allowed, for virtual memory, and, for any given processor, up to 60 bits for physical memory. The Oracle SPARC Architecture 2015 allows 64 bits for virtual memory and, for any given processor, between 40 and 56 bits for physical memory. The ARM AArch64 Virtual Memory System Architecture allows from 48 to 56 bits for virtual memory and, for any given processor, from 32 to 56 bits for physical memory. The DEC Alpha specification requires a minimum of 43 bits of virtual memory address space (8 TB) to be supported, and the hardware needs to check and trap if the remaining unsupported bits are zero (to support compatibility on future processors). The Alpha 21064 supported 43 bits of virtual memory address space (8 TB) and 34 bits of physical memory address space (16 GB). The Alpha 21164 supported 43 bits of virtual memory address space (8 TB) and 40 bits of physical memory address space (1 TB). The Alpha 21264 supported user-configurable 43 or 48 bits of virtual memory address space (8 TB or 256 TB) and 44 bits of physical memory address space (16 TB). 64-bit applications 32-bit vs 64-bit A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture, because that software has to manage the actual memory addressing hardware.
Other software must also be ported to use the new abilities; older 32-bit software may be supported either by virtue of the 64-bit instruction set being a superset of the 32-bit instruction set, so that processors that support the 64-bit instruction set can also run code for the 32-bit instruction set, or through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor, as with some Itanium processors from Intel, which included an IA-32 processor core to run 32-bit x86 applications. The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications. One significant exception to this is the IBM AS/400, software for which is compiled into a virtual instruction set architecture (ISA) called Technology Independent Machine Interface (TIMI); TIMI code is then translated to native machine code by low-level software before being executed. The translation software is all that must be rewritten to move the full OS and all software to a new platform, as when IBM transitioned the native instruction set for AS/400 from the older 32/48-bit IMPI to the newer 64-bit PowerPC-AS, codenamed Amazon. The IMPI instruction set was quite different from even 32-bit PowerPC, so this transition was even bigger than moving a given instruction set from 32 to 64 bits. On 64-bit hardware with x86-64 architecture (AMD64), most 32-bit operating systems and applications can run with no compatibility issues. While the larger address space of 64-bit architectures makes working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate on whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks. A compiled Java program can run on a 32- or 64-bit Java virtual machine with no modification. The lengths and precision of all the built-in types, such as char, short, int, long, float, and double, and the types that can be used as array indices, are specified by the standard and are not dependent on the underlying architecture. Java programs that run on a 64-bit Java virtual machine have access to a larger address space. Speed is not the only factor to consider in comparing 32-bit and 64-bit processors. Applications such as multi-tasking, stress testing, and clustering – for high-performance computing (HPC) – may be more suited to a 64-bit architecture when deployed appropriately. For this reason, 64-bit clusters have been widely deployed in large organizations, such as IBM, HP, and Microsoft. Summary: A 64-bit processor performs best with 64-bit software. A 64-bit processor may have backward compatibility, allowing it to run 32-bit application software for the 32-bit version of its instruction set, and may also support running 32-bit operating systems for the 32-bit version of its instruction set. A 32-bit processor is incompatible with 64-bit software. Pros and cons A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GB of random-access memory. This is not entirely true: Some operating systems and certain hardware configurations limit the physical memory space to 3 GB on IA-32 systems, due to much of the 3–4 GB region being reserved for hardware addressing; see 3 GB barrier; 64-bit architectures can address far more than 4 GB. 
However, IA-32 processors from the Pentium Pro onward allow a 36-bit physical memory address space, using Physical Address Extension (PAE), which gives a 64 GB physical address range, of which up to 62 GB may be used by main memory; operating systems that support PAE may not be limited to 4 GB of physical memory, even on IA-32 processors. However, drivers and other kernel mode software, particularly older versions, may be incompatible with PAE; this has been cited as the reason for 32-bit versions of Microsoft Windows being limited to 4 GB of physical RAM (although the validity of this explanation has been disputed). Some operating systems reserve portions of process address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, 32-bit Windows reserves 1 or 2 GB (depending on the settings) of the total address space for the kernel, which leaves only 3 or 2 GB (respectively) of the address space available for user mode. This limit is much higher on 64-bit operating systems. Memory-mapped files are becoming more difficult to implement in 32-bit architectures as files of over 4 GB become more common; such large files cannot be memory-mapped easily to 32-bit architectures, as only part of the file can be mapped into the address space at a time, and to access such a file by memory mapping, the parts mapped must be swapped into and out of the address space as needed. This is a problem, as memory mapping, if properly implemented by the OS, is one of the most efficient disk-to-memory methods. Some 64-bit programs, such as encoders, decoders and encryption software, can benefit greatly from 64-bit registers, while the performance of other programs, such as 3D graphics-oriented ones, remains unaffected when switching from a 32-bit to a 64-bit environment. Some 64-bit architectures, such as x86-64 and AArch64, support more general-purpose registers than their 32-bit counterparts (although this is not due specifically to the word length). This leads to a significant speed increase for tight loops since the processor does not have to fetch data from the cache or main memory if the data can fit in the available registers. Example in C:
int a, b, c, d, e;
for (a = 0; a < 100; a++) {
    b = a;
    c = b;
    d = c;
    e = d;
}
This code first declares five variables, a, b, c, d and e, and then assigns them repeatedly in a loop. During the loop, this code changes the value of b to the value of a, the value of c to the value of b, the value of d to the value of c and the value of e to the value of d. This has the same effect as changing all the values to a. If a processor can keep only two or three values or variables in registers, it would need to move some values between memory and registers to be able to process variables d and e as well; this is a process that takes many CPU cycles. A processor that can hold all values and variables in registers can loop through them with no need to move data between registers and memory for each iteration. This behavior can easily be compared with virtual memory, although any effects are contingent on the compiler. The main disadvantage of 64-bit architectures is that, relative to 32-bit architectures, the same data occupies more space in memory (due to longer pointers and possibly other types, and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache use. Maintaining a partial 32-bit model is one way to handle this, and is in general reasonably effective.
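As a rough illustration of that overhead, consider the following minimal C sketch (the struct name and exact sizes are illustrative; the actual figures depend on the compiler and ABI). A node holding one pointer and one int is typically 8 bytes under a 32-bit (ILP32) model but 16 bytes under LP64 or LLP64, because the pointer doubles in size and alignment padding is added after the int.
#include <stdio.h>

/* Hypothetical linked-list node: typically 8 bytes on ILP32
   (4-byte pointer + 4-byte int), but 16 bytes under LP64/LLP64
   (8-byte pointer + 4-byte int + 4 bytes of alignment padding). */
struct node {
    struct node *next;
    int value;
};

int main(void) {
    printf("pointer size: %zu bytes\n", sizeof(void *));
    printf("struct node:  %zu bytes\n", sizeof(struct node));
    return 0;
}
On a typical LP64 system this prints 8 and 16, so pointer-heavy data structures roughly double in size, which is the source of the memory and cache pressure described above.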
For example, the z/OS operating system takes the partial 32-bit approach, requiring program code to reside in 31-bit address spaces (the high order bit is not used in address calculation on the underlying hardware platform) while data objects can optionally reside in 64-bit regions. Not all such applications require a large address space or manipulate 64-bit data items, so these applications do not benefit from these features. Software availability x86-based 64-bit systems sometimes lack equivalents of software that is written for 32-bit architectures. The most severe problem in Microsoft Windows is incompatible device drivers for obsolete hardware. Most 32-bit application software can run on a 64-bit operating system in a compatibility mode, also termed an emulation mode, e.g., Microsoft WoW64 Technology for IA-64 and AMD64. The 64-bit Windows Native Mode driver environment runs atop 64-bit , which cannot call 32-bit Win32 subsystem code (often devices whose actual hardware function is emulated in user mode software, like Winprinters). Because 64-bit drivers for most devices were unavailable until early 2007 (Vista x64), using a 64-bit version of Windows was considered a challenge. However, the trend has since moved toward 64-bit computing, especially as memory prices dropped and the use of more than 4 GB of RAM increased. Most manufacturers started to provide both 32-bit and 64-bit drivers for new devices, so unavailability of 64-bit drivers ceased to be a problem. 64-bit drivers were not provided for many older devices, which could consequently not be used in 64-bit systems. Driver compatibility was less of a problem with open-source drivers, as 32-bit ones could be modified for 64-bit use. Support for hardware made before early 2007 was problematic for open-source platforms, due to the relatively small number of users. 64-bit versions of Windows cannot run 16-bit software. However, most 32-bit applications will work well. Users of 64-bit Windows must instead install a virtual machine running a 16- or 32-bit operating system to run 16-bit applications, or use one of the alternatives to NTVDM. Mac OS X 10.4 "Tiger" and Mac OS X 10.5 "Leopard" had only a 32-bit kernel, but they can run 64-bit user-mode code on 64-bit processors. Mac OS X 10.6 "Snow Leopard" had both 32- and 64-bit kernels, and, on most Macs, used the 32-bit kernel even on 64-bit processors. This allowed those Macs to support 64-bit processes while still supporting 32-bit device drivers, although not 64-bit drivers or the performance advantages that can come with them. Mac OS X 10.7 "Lion" ran with a 64-bit kernel on more Macs, and OS X 10.8 "Mountain Lion" and later macOS releases only have a 64-bit kernel. On systems with 64-bit processors, both the 32- and 64-bit macOS kernels can run 32-bit user-mode code, and all versions of macOS up to macOS Mojave (10.14) include 32-bit versions of libraries that 32-bit applications would use, so 32-bit user-mode software for macOS will run on those systems. The 32-bit versions of those libraries were removed by Apple in macOS Catalina (10.15). Linux and most other Unix-like operating systems, and the C and C++ toolchains for them, have supported 64-bit processors for many years. Many applications and libraries for those platforms are open-source software, written in C and C++, so that if they are 64-bit-safe, they can be compiled into 64-bit versions. This source-based distribution model, with an emphasis on frequent releases, makes availability of application software for those operating systems less of an issue.
64-bit data models In 32-bit programs, pointers and data types such as integers generally have the same length. This is not necessarily true on 64-bit machines. Mixing data types in programming languages such as C and its descendants such as C++ and Objective-C may thus work on 32-bit implementations but not on 64-bit implementations. In many programming environments for C and C-derived languages on 64-bit machines, int variables are still 32 bits wide, but long integers and pointers are 64 bits wide. These are described as having an LP64 data model, which is an abbreviation of "Long, Pointer, 64". Other models are the ILP64 data model in which all three data types are 64 bits wide, and even the SILP64 model where short integers are also 64 bits wide. However, in most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment with no changes. Another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32-bit. LL refers to the long long integer type, which is at least 64 bits on all platforms, including 32-bit environments. There are also systems with 64-bit processors using an ILP32 data model, with the addition of 64-bit long long integers; this is also used on many platforms with 32-bit processors. This model reduces code size and the size of data structures containing pointers, at the cost of a much smaller address space, a good choice for some embedded systems. For instruction sets such as x86 and ARM in which the 64-bit version of the instruction set has more registers than does the 32-bit version, it provides access to the additional registers without the space penalty. It is common in 64-bit RISC machines, explored in x86 as x32 ABI, and has recently been used in the Apple Watch Series 4 and 5. Many 64-bit platforms today use an LP64 model (including Solaris, AIX, HP-UX, Linux, macOS, BSD, and IBM z/OS). Microsoft Windows uses an LLP64 model. The disadvantage of the LP64 model is that storing a long into an int truncates. On the other hand, converting a pointer to a long will "work" in LP64. In the LLP64 model, the reverse is true. These are not problems which affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of data types. C code should prefer (u)intptr_t instead of long when casting pointers into integer objects. A programming model is a choice made to suit a given compiler, and several can coexist on the same OS. However, the programming model chosen as the primary model for the OS application programming interface (API) typically dominates. Another consideration is the data model used for device drivers. Drivers make up the majority of the operating system code in most modern operating systems (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for direct memory access (DMA). As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gigabyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. 
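A minimal sketch of that constraint follows (the device limit and function names are hypothetical; a real driver would use the DMA facilities of its operating system rather than an open-coded check like this). A device whose DMA address register is only 32 bits wide simply cannot be handed a buffer that lies above the first 4 GB of physical memory.
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical device limit: the DMA address register holds 32 bits. */
#define DEVICE_DMA_LIMIT (((uint64_t)1 << 32) - 1)

/* Returns true if the whole buffer (assumed len > 0) is reachable by
   the device. If it is not, the OS must either copy through a bounce
   buffer in low memory or remap the pages through an IOMMU. */
bool dma_reachable(uint64_t phys_addr, uint64_t len) {
    return phys_addr + len - 1 <= DEVICE_DMA_LIMIT;
}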
This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an input–output memory management unit (IOMMU).
Current 64-bit architectures
64-bit architectures for which processors are being manufactured include:
The 64-bit extension created by Advanced Micro Devices (AMD) to Intel's x86 architecture (later licensed by Intel), commonly termed x86-64, AMD64, or x64:
AMD's AMD64 extensions (used in Athlon 64, Opteron, Sempron, Turion 64, Phenom, Athlon II, Phenom II, APU, FX, Ryzen, and Epyc processors)
Intel's Intel 64 extensions, used in Intel Core 2/i3/i5/i7/i9, some Atom, and newer Celeron, Pentium, and Xeon processors
Intel's K1OM architecture, a variant of Intel 64 with no CMOV, MMX, and SSE instructions, used in first-generation Xeon Phi (Knights Corner) coprocessors, binary incompatible with x86-64 programs
VIA Technologies' 64-bit extensions, used in the VIA Nano processors
IBM's PowerPC/Power ISA: IBM's Power10 processor and predecessors, and the IBM A2 processors
SPARC V9 architecture: Oracle's M8 and S7 processors; Fujitsu's SPARC64 XII and SPARC64 XIfx processors
IBM's z/Architecture, a 64-bit version of the ESA/390 architecture, used in IBM's IBM Z mainframes: IBM Telum processor and predecessors; Hitachi AP8000E
MIPS Technologies' MIPS64 architecture
ARM Holdings' AArch64 architecture
Elbrus architecture: Elbrus-8S
NEC SX architecture: SX-Aurora TSUBASA
RISC-V
ARC
Most 64-bit architectures that are derived from a 32-bit architecture can execute code written for the 32-bit version natively, with no performance penalty. This kind of support is commonly called bi-arch support or more generally multi-arch support.
Technology
Computer architecture concepts
null
148349
https://en.wikipedia.org/wiki/Chatbot
Chatbot
A chatbot (originally chatterbot) is a software application or web interface designed to have textual or spoken conversations. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades. Although chatbots have existed since the late 1960s, the field gained widespread attention in the early 2020s due to the popularity of OpenAI's ChatGPT, followed by alternatives such as Microsoft's Copilot and Google's Gemini. Such examples reflect the recent practice of basing such products upon broad foundational large language models, such as GPT-4 or the Gemini language model, that are fine-tuned to target specific tasks or applications (i.e., simulating human conversation, in the case of chatbots). Chatbots can also be designed or customized to further target even more specific situations or particular subject-matter domains. A major area where chatbots have long been used is in customer service and support, with various sorts of virtual assistants. Companies spanning a wide range of industries have begun using the latest generative artificial intelligence technologies to power more advanced developments in such areas. History Turing test In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published, which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge to the extent that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. Eliza The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise:
In artificial intelligence, machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained, its magic crumbles away; it stands revealed as a mere collection of procedures. The observer says to himself "I could have written that". With that thought, he moves the program in question from the shelf marked "intelligent", to that reserved for curios. The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more.
ELIZA's key method of operation involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial.
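To make that mechanism concrete, here is a deliberately tiny C sketch in the spirit of ELIZA's keyword rules (this is not Weizenbaum's actual code or script language; the keyword table and function names are invented for illustration):
#include <stdio.h>
#include <string.h>

/* ELIZA-style rules: if the keyword appears anywhere in the
   user's (uppercased) input, emit the canned response. */
struct rule { const char *keyword; const char *response; };

static const struct rule rules[] = {
    { "MOTHER",  "TELL ME MORE ABOUT YOUR FAMILY" },
    { "ALWAYS",  "CAN YOU THINK OF A SPECIFIC EXAMPLE" },
    { "BECAUSE", "IS THAT THE REAL REASON" },
};

const char *reply(const char *input) {
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strstr(input, rules[i].keyword))
            return rules[i].response;
    return "PLEASE GO ON";   /* fallback when no keyword matches */
}

int main(void) {
    puts(reply("WELL, MY MOTHER MADE ME COME HERE"));
    return 0;
}
Real ELIZA scripts also ranked keywords and reassembled fragments of the user's input into the reply, but the underlying idea is the same shallow pattern matching described above.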
ELIZA showed that such an illusion is surprisingly easy to generate because human judges are ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent". Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods". Early chatbots Among the most notable early chatbots are ELIZA (1966) and PARRY (1972). More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E (Agence Nationale de la Recherche and CNRS 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include other functional features, such as games and web searching abilities. In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so). From 1978 to some time after 1983, the CYRUS project led by Janet Kolodner constructed a chatbot simulating Cyrus Vance (57th United States Secretary of State). It used case-based reasoning, and updated its database daily by parsing wire news from United Press International. The program was unable to process the news items subsequent to the surprise resignation of Cyrus Vance in April 1980, and the team constructed another chatbot simulating his successor, Edmund Muskie. One pertinent field of AI research is natural-language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent, and has since been adopted by various other developers of, so-called, Alicebots. Nevertheless, A.L.I.C.E. is still purely based on pattern matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities. Jabberwacky learns new responses and context based on real-time user interactions, rather than being driven from a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimize their ability to communicate based on each conversation held. Chatbot competitions focus on the Turing test or more specific goals. Two such annual contests are the Loebner Prize and The Chatterbox Challenge (the latter has been offline since 2015, however, materials can still be found from web archives). DBpedia created a chatbot during the GSoC of 2017. It can communicate through Facebook Messenger (see Master of Code Global article). 
Modern chatbots based on large language models Modern chatbots like ChatGPT are often based on large language models called generative pre-trained transformers (GPT). They are based on a deep learning architecture called the transformer, which contains artificial neural networks. They learn how to generate text by being trained on a large text corpus, which provides a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data. Despite criticism of its accuracy and tendency to "hallucinate"—that is, to confidently output false information and even cite non-existent sources—ChatGPT has gained attention for its detailed responses and historical knowledge. Another example is BioGPT, developed by Microsoft, which focuses on answering biomedical questions. In November 2023, Amazon announced a new chatbot, called Q, for people to use at work. Application Messaging apps Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing. In 2016, Facebook Messenger allowed developers to place chatbots on their platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017. Since September 2017, this has also been available as part of a pilot program on WhatsApp. Airlines KLM and Aeroméxico both announced their participation in the testing; both airlines had previously launched customer services on the Facebook Messenger platform. The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat. Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities, and restaurant chains have used chatbots to answer simple questions, increase customer engagement, promote their offerings, and offer additional ways to order from them. Chatbots are also used in market research to collect short survey responses. A 2017 study showed 4% of companies used chatbots. In a 2016 study, 80% of businesses said they intended to have one by 2020. As part of company apps and websites Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines, which debuted in 2008, or Expedia's virtual customer service agent, which launched in 2011. The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers. Chatbot sequences Chatbot sequences are used by marketers to script sequences of messages, very similar to an autoresponder sequence. Such sequences can be triggered by user opt-in or the use of keywords within user interactions. After a trigger occurs, a sequence of messages is delivered until the next anticipated user response. Each user response is used in the decision tree to help the chatbot navigate the response sequences to deliver the correct response message. Company internal platforms Companies have used chatbots for customer support, human resources, or in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to attempt to automate certain processes when customer service employees request sick leave. Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën are now using chatbots instead of call centres with humans to provide a first point of contact.
In large companies, like in hospitals and aviation organizations, chatbots are also used to share information within organizations, and to assist and replace service desks. Customer service Chatbots have been proposed as a replacement for customer service departments. Deep learning techniques can be incorporated into chatbot applications to allow them to map conversations between users and customer service agents, especially in social media. In 2019, Gartner predicted that by 2021, 15% of all customer service interactions globally will be handled completely by AI. A study by Juniper Research in 2019 estimates retail sales resulting from chatbot-based interactions will reach $112 billion by 2023. In 2016, Russia-based Tochka Bank launched a chatbot on Facebook for a range of financial services, including a possibility of making payments. In July 2016, Barclays Africa also launched a Facebook chatbot. In 2023, US-based National Eating Disorders Association replaced its human helpline staff with a chatbot but had to take it offline after users reported receiving harmful advice from it. Healthcare Chatbots are also appearing in the healthcare industry. A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information. ChatGPT is able to answer user queries related to health promotion and disease prevention such as screening and vaccination. WhatsApp has teamed up with the World Health Organization (WHO) to make a chatbot service that answers users' questions on COVID-19. In 2020, the Government of India launched a chatbot called MyGov Corona Helpdesk, that worked through WhatsApp and helped people access information about the Coronavirus (COVID-19) pandemic. Certain patient groups are still reluctant to use chatbots. A mixed-methods 2019 study showed that people are still hesitant to use chatbots for their healthcare due to poor understanding of the technological complexity, the lack of empathy, and concerns about cyber-security. The analysis showed that while 6% had heard of a health chatbot and 3% had experience of using it, 67% perceived themselves as likely to use one within 12 months. The majority of participants would use a health chatbot for seeking general health information (78%), booking a medical appointment (78%), and looking for local health services (80%). However, a health chatbot was perceived as less suitable for seeking results of medical tests and seeking specialist advice such as sexual health. The analysis of attitudinal variables showed that most participants reported their preference for discussing their health with doctors (73%) and having access to reliable and accurate health information (93%). While 80% were curious about new technologies that could improve their health, 66% reported only seeking a doctor when experiencing a health problem and 65% thought that a chatbot was a good idea. 30% reported dislike about talking to computers, 41% felt it would be strange to discuss health matters with a chatbot and about half were unsure if they could trust the advice given by a chatbot. Therefore, perceived trustworthiness, individual attitudes towards bots, and dislike for talking to computers are the main barriers to health chatbots. Politics In New Zealand, the chatbot SAM – short for Semantic Analysis Machine – has been developed by Nick Gerritsen of Touchtech. 
It is designed to share its political thoughts, for example on topics such as climate change, healthcare and education, etc. It talks to people through Facebook Messenger. In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the Danish parliamentary election, and was built by the artist collective Computer Lars. Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate. This chatbot engaged in critical discussions on politics with users from around the world. In India, the state government has launched a chatbot for its Aaple Sarkar platform, which provides conversational access to information regarding public services managed. Toys Chatbots have also been incorporated into devices not primarily meant for computing, such as toys. Hello Barbie is an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk, which previously used the chatbot for a range of smartphone-based characters for children. These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline. The My Friend Cayla doll was marketed as a line of dolls which uses speech recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and have a conversation. Like the Hello Barbie doll, it attracted controversy due to vulnerabilities with the doll's Bluetooth stack and its use of data collected from the child's speech. IBM's Watson computer has been used as the basis for chatbot-based educational toys for companies such as CogniToys, intended to interact with children for educational purposes. Malicious use Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They were commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website. Tay, an AI chatbot designed to learn from previous interaction, caused major controversy due to it being targeted by internet trolls on Twitter. Soon after its launch, the bot was exploited, and with its "repeat after me" capability, it started releasing racist, sexist, and controversial responses to Twitter users. This suggests that although the bot learned effectively from experience, adequate protection was not put in place to prevent misuse. If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during an election. With enough chatbots, it might be even possible to achieve artificial social proof. Data security Data security is one of the major concerns of chatbot technologies. Security threats and system vulnerabilities are weaknesses that are often exploited by malicious users. Storage of user data and past communication, that is highly valuable for training and development of chatbots, can also give rise to security threats. 
Chatbots operating on third-party networks may be subject to various security issues if owners of the third-party applications have policies regarding user data that differ from those of the chatbot. Security threats can be reduced or prevented by incorporating protective mechanisms. User authentication, end-to-end encryption of chats, and self-destructing messages are some effective solutions to resist potential security threats. Limitations of chatbots Chatbots have difficulty managing non-linear conversations that must go back and forth on a topic with a user. Large language models are more versatile, but require a large amount of conversational data to train. These models, usually trained on a large dataset of natural-language phrases, generate new responses word by word based on user input. They sometimes provide plausible-sounding but incorrect or nonsensical answers. They can make up names, dates, historical events, and even simple math problems. When large language models produce coherent-sounding but inaccurate or fabricated content, this is referred to as "hallucinations". When humans use and apply chatbot content contaminated with hallucinations, this results in "botshit". Given the increasing adoption and use of chatbots for generating content, there are concerns that this technology will significantly reduce the cost for humans to generate misinformation. Impact on jobs Chatbots, and technology in general, are used to automate repetitive tasks. But advanced chatbots like ChatGPT are also targeting high-paying, creative, and knowledge-based jobs, raising concerns about workforce disruption and quality trade-offs in favor of cost-cutting. Chatbots are increasingly used by small and medium enterprises to handle customer interactions efficiently, reducing reliance on large call centers and lowering operational costs. Prompt engineering, the task of designing and refining prompts (inputs) leading to desired AI-generated responses, has quickly gained significant demand with the advent of large language models, although the viability of this job is questioned due to new techniques for automating prompt engineering. Impact on the environment Generative AI uses a high amount of electric power. Due to reliance on fossil fuels in its generation, this increases air pollution, water pollution, and greenhouse gas emissions. In 2023, a question to ChatGPT consumed on average 10 times as much energy as a Google search. Data centres in general, and those used for AI tasks specifically, consume significant amounts of water for cooling.
Technology
Computer software
null
148367
https://en.wikipedia.org/wiki/Dendritic%20cell
Dendritic cell
A dendritic cell (DC) is an antigen-presenting cell (also known as an accessory cell) of the mammalian immune system. A DC's main function is to process antigen material and present it on the cell surface to the T cells of the immune system. They act as messengers between the innate and adaptive immune systems. Dendritic cells are present in tissues that are in contact with the body's external environment, such as the skin (where there is a specialized dendritic cell type called the Langerhans cell), and the inner lining of the nose, lungs, stomach and intestines. They can also be found in an immature and mature state in the blood. Once activated, they migrate to the lymph nodes, where they interact with T cells and B cells to initiate and shape the adaptive immune response. At certain development stages they grow branched projections, the dendrites, that give the cell its name (δένδρον or déndron being Greek for 'tree'). While similar in appearance to the dendrites of neurons, these are structures distinct from them. Immature dendritic cells are also called veiled cells, as they possess large cytoplasmic 'veils' rather than dendrites. History Dendritic cells were first described by Paul Langerhans (hence Langerhans cells) in the late nineteenth century. The term dendritic cells was coined in 1973 by Ralph M. Steinman and Zanvil A. Cohn. For discovering the central role of dendritic cells in the adaptive immune response, Steinman was awarded the Albert Lasker Award for Basic Medical Research in 2007 and the Nobel Prize in Physiology or Medicine in 2011. Types The morphology of dendritic cells results in a very large surface-to-volume ratio. That is, the dendritic cell has a very large surface area compared to the overall cell volume. In vivo – primate The most common division of dendritic cells is conventional dendritic cells (a.k.a. myeloid dendritic cells) vs. plasmacytoid dendritic cell (most likely of lymphoid lineage) as described in the table below: The markers BDCA-2, BDCA-3, and BDCA-4 can be used to discriminate among the types. Lymphoid and myeloid DCs evolve from lymphoid and myeloid precursors, respectively, and thus are of hematopoietic origin. By contrast, follicular dendritic cells (FDC) are probably of mesenchymal rather than hematopoietic origin and do not express MHC class II, but are so named because they are located in lymphoid follicles and have long "dendritic" processes. In blood The blood DCs are typically identified and enumerated in flow cytometry. Three types of DCs have been defined in human blood: the CD1c+ myeloid DCs, the CD141+ myeloid DCs and the CD303+ plasmacytoid DCs. This represents the nomenclature proposed by the nomenclature committee of the International Union of Immunological Societies. Dendritic cells that circulate in blood do not have all the typical features of their counterparts in tissue, i.e. they are less mature and have no dendrites. Still, they can perform complex functions including chemokine-production (in CD1c+ myeloid DCs), cross-presentation (in CD141+ myeloid DCs), and IFNalpha production (in CD303+ plasmacytoid DCs). In vitro In some respects, dendritic cells cultured in vitro do not show the same behaviour or capability as dendritic cells isolated ex vivo. Nonetheless, they are often used for research as they are still much more readily available than genuine DCs. Mo-DC or MDDC refers to cells matured from monocytes. HP-DC refers to cells derived from hematopoietic progenitor cells. 
Development and life cycle Formation of immature cells and their maturation Dendritic cells are derived from hematopoietic bone marrow progenitor cells (HSC). These progenitor cells initially transform into immature dendritic cells. These cells are characterized by high endocytic activity and low T-cell activation potential. Immature dendritic cells constantly sample the surrounding environment for pathogens such as viruses and bacteria. This is done through pattern recognition receptors (PRRs) such as the toll-like receptors (TLRs). TLRs recognize specific chemical signatures found on subsets of pathogens. Immature dendritic cells may also phagocytose small quantities of membrane from live own cells, in a process called nibbling. Once they have come into contact with a presentable antigen, they become activated into mature dendritic cells and begin to migrate to a lymph node. Immature dendritic cells phagocytose pathogens and degrade their proteins into small pieces and upon maturation present those fragments at their cell surface using MHC molecules. Simultaneously, they upregulate cell-surface receptors that act as co-receptors in T-cell activation such as CD80 (B7.1), CD86 (B7.2), and CD40 greatly enhancing their ability to activate T-cells. They also upregulate CCR7, a chemotactic receptor that induces the dendritic cell to travel through the blood stream to the spleen or through the lymphatic system to a lymph node. Here they act as antigen-presenting cells: they activate helper T-cells and killer T-cells as well as B-cells by presenting them with antigens derived from the pathogen, alongside non-antigen specific costimulatory signals. Dendritic cells can also induce T-cell tolerance (unresponsiveness). Certain C-type lectin receptors (CLRs) on the surface of dendritic cells, some functioning as PRRs, help instruct dendritic cells as to when it is appropriate to induce immune tolerance rather than lymphocyte activation. Every helper T-cell is specific to one particular antigen. Only professional antigen-presenting cells (APCs: macrophages, B lymphocytes, and dendritic cells) are able to activate a resting helper T-cell when the matching antigen is presented. However, in non-lymphoid organs, macrophages and B cells can only activate memory T cells whereas dendritic cells can activate both memory and naive T cells, and are the most potent of all the antigen-presenting cells. In the lymph node and secondary lymphoid organs, all three APCs can activate naive T cells. Whereas mature dendritic cells are able to activate antigen-specific naive CD8+ T cells, the formation of CD8+ memory T cells requires the interaction of dendritic cells with CD4+ helper T cells. This help from CD4+ T cells additionally activates the matured dendritic cells and licenses (empowers) them to efficiently induce CD8+ memory T cells, which are also able to be expanded a second time. For this activation of CD8+, concurrent interaction of all three cell types, namely CD4+ T helper cells, CD8+ T cells and dendritic cells, seems to be required. As mentioned above, mDC probably arise from monocytes, white blood cells which circulate in the body and, depending on the right signal, can turn into either dendritic cells or macrophages. The monocytes in turn are formed from stem cells in the bone marrow. Monocyte-derived dendritic cells can be generated in vitro from peripheral blood mononuclear cell (PBMCs). Plating of PBMCs in a tissue culture flask permits adherence of monocytes. 
Treatment of these monocytes with interleukin 4 (IL-4) and granulocyte-macrophage colony stimulating factor (GM-CSF) leads to differentiation to immature dendritic cells (iDCs) in about a week. Subsequent treatment with tumor necrosis factor (TNF) further differentiates the iDCs into mature dendritic cells. Monocytes can be induced to differentiate into dendritic cells by a self-peptide Ep1.B derived from apolipoprotein E. These are primarily tolerogenic plasmacytoid dendritic cells. Life span In mice, it has been estimated that dendritic cells are replenished from the blood at a rate of 4000 cells per hour, and undergo a limited number of divisions during their residence in the spleen over 10 to 14 days. Research challenges The exact genesis and development of the different types and subsets of dendritic cells and their interrelationship is only marginally understood at the moment, as dendritic cells are so rare and difficult to isolate that only in recent years they have become subject of focused research. Distinct surface antigens that characterize dendritic cells have only become known from 2000 on; before that, researchers had to work with a 'cocktail' of several antigens which, used in combination, result in isolation of cells with characteristics unique to DCs. Cytokines The dendritic cells are constantly in communication with other cells in the body. This communication can take the form of direct cell–cell contact based on the interaction of cell-surface proteins. An example of this includes the interaction of the membrane proteins of the B7 family of the dendritic cell with CD28 present on the lymphocyte. However, the cell–cell interaction can also take place at a distance via cytokines. For example, stimulating dendritic cells in vivo with microbial extracts causes the dendritic cells to rapidly begin producing IL-12. IL-12 is a signal that helps send naive CD4 T cells towards a Th1 phenotype. The ultimate consequence is priming and activation of the immune system for attack against the antigens which the dendritic cell presents on its surface. However, there are differences in the cytokines produced depending on the type of dendritic cell. The plasmacytoid DC has the ability to produce huge amounts of type-1 IFNs, which recruit more activated macrophages to allow phagocytosis. Disease Blastic plasmacytoid dendritic cell neoplasm Blastic plasmacytoid dendritic cell neoplasm is a rare type of myeloid cancer in which malignant pDCs infiltrate the skin, bone marrow, central nervous system, and other tissues. Typically, the disease presents with skin lesions (e.g. nodules, tumors, papules, bruise-like patches, and/or ulcers) that most often occur on the head, face, and upper torso. This presentation may be accompanied by cPC infiltrations into other tissues to result in swollen lymph nodes, enlarged liver, enlarged spleen, symptoms of central nervous system dysfunction, and similar abnormalities in breasts, eyes, kidneys, lungs, gastrointestinal tract, bone, sinuses, ears, and/or testes. The disease may also present as a pDC leukemia, i.e. increased levels of malignant pDC in blood (i.e. >2% of nucleated cells) and bone marrow and evidence (i.e. cytopenias) of bone marrow failure. Blastic plasmacytoid dendritic cell neoplasm has a high rate of recurrence following initial treatments with various chemotherapy regimens. 
In consequence, the disease has a poor overall prognosis, and newer chemotherapeutic and novel non-chemotherapeutic drug regimens are under study to improve the situation. Viral infection HIV, which causes AIDS, can bind to dendritic cells via various receptors expressed on the cell. The best studied example is DC-SIGN (usually on MDC subset 1, but also on other subsets under certain conditions; since not all dendritic cell subsets express DC-SIGN, its exact role in sexual HIV-1 transmission is not clear). When the dendritic cell takes up HIV and then travels to the lymph node, the virus can be transferred to helper CD4+ T-cells, contributing to the developing infection. This infection of dendritic cells by HIV explains one mechanism by which the virus could persist after prolonged HAART. Many other viruses, such as the SARS virus, seem to use DC-SIGN to 'hitchhike' to their target cells. However, most work with virus binding to DC-SIGN expressing cells has been conducted using in vitro derived cells such as moDCs. The physiological role of DC-SIGN in vivo is more difficult to ascertain. Cancer Dendritic cells are usually not abundant at tumor sites, but increased densities of populations of dendritic cells have been associated with better clinical outcome, suggesting that these cells can participate in controlling cancer progression. Lung cancers have been found to include four different subsets of dendritic cells: three classical dendritic cell subsets and one plasmacytoid dendritic cell subset. At least some of these dendritic cell subsets can activate CD4+ helper T cells and CD8+ cytotoxic T cells, which are immune cells that can also suppress tumor growth. In experimental models, dendritic cells have also been shown to contribute to the success of cancer immunotherapies, for example with the immune checkpoint blocker anti-PD-1. Autoimmunity Altered function of dendritic cells is also known to play a major or even key role in allergy and autoimmune diseases like lupus erythematosus and inflammatory bowel diseases (Crohn's disease and ulcerative colitis). Other animals The above applies to humans. In other organisms, the function of dendritic cells can differ slightly. However, the principal function of dendritic cells as known to date is always to act as an immune sentinel. They survey the body and collect information relevant to the immune system; they are then able to instruct and direct the adaptive arms to respond to challenges. In addition, an immediate precursor to myeloid and lymphoid dendritic cells of the spleen has been identified. This precursor, termed pre-DC, lacks MHC class II surface expression, and is distinct from monocytes, which primarily give rise to DCs in non-lymphoid tissues. Dendritic cells have also been found in turtles. Dendritic cells have been found in rainbow trout (Oncorhynchus mykiss) and zebrafish (Danio rerio), but their role is still not fully understood.
Biology and health sciences
Immune system
Biology
148383
https://en.wikipedia.org/wiki/Arquebus
Arquebus
An arquebus ( ) is a form of long gun that appeared in Europe and the Ottoman Empire during the 15th century. An infantryman armed with an arquebus is called an arquebusier. The term arquebus was applied to many different forms of firearms from the 15th to 17th centuries, but it originally referred to "a hand-gun with a hook-like projection or lug on its under surface, useful for steadying it against battlements or other objects when firing". These "hook guns" were in their earliest forms defensive weapons mounted on German city walls in the early 15th century. The addition of a shoulder stock, priming pan, and matchlock mechanism in the late 15th century turned the arquebus into a handheld firearm and also the first firearm equipped with a trigger. The exact dating of the matchlock's appearance is disputed. It could have appeared in the Ottoman Empire as early as 1465 and in Europe a little before 1475. The heavy arquebus, which was then called a musket, was developed to better penetrate plate armor and appeared in Europe around 1521. Heavy arquebuses mounted on war wagons were called arquebus à croc. These carried a lead ball of about . A standardized arquebus, the caliver, was introduced in the latter half of the 16th century. The name "caliver" is an English derivation from the French – a reference to the gun's standardized bore. The caliver allowed troops to load bullets faster since they fit their guns more easily, whereas before soldiers often had to modify their bullets into suitable fits, or even made their own prior to battle. The matchlock arquebus is considered the forerunner to the flintlock musket. Terminology The term arquebus is derived from the Dutch word haakbus ("hook gun"), which was applied to an assortment of firearms from the 15th to 17th centuries. It originally referred to "a hand-gun with a hook-like projection or lug on its under surface, useful for steadying it against battlements or other objects when firing". The first certain attestation of the term arquebus dates back to 1364, when the lord of Milan Bernabò Visconti recruited 70 archibuxoli, although in this case it almost certainly referred to a hand cannon. The arquebus has at times been known as the harquebus, harkbus, hackbut, hagbut, archibugio, haakbus, schiopo, sclopus, tüfenk, tofak, matchlock, and firelock. Musket The musket, essentially a large arquebus, was introduced around 1521, but fell out of favor by the mid-16th century due to the decline of armor. The term, however, remained, and musket became a generic descriptor for smoothbore gunpowder weapons fired from the shoulder ("shoulder arms") into the mid-19th century. At least on one occasion musket and arquebus were used interchangeably to refer to the same weapon, and even referred to as an arquebus musket. A Habsburg commander in the mid-1560s once referred to muskets as double arquebuses. The matchlock firing mechanism also became a common term for the arquebus after it was added to the firearm. Later flintlock firearms were sometimes called fusils or fuzees. Mechanism and usage Prior to the appearance of the serpentine lever by around 1411, handguns were fired from the chest, tucked under one arm, while the other arm maneuvered a hot pricker to the touch hole to ignite the gunpowder. The matchlock, which appeared roughly around 1475, changed this by adding a firing mechanism consisting of two parts, the match and the lock. The lock mechanism held within a clamp a long length of smoldering rope soaked in saltpeter, which was the match.
Connected to the lock lever was a trigger, which lowered the match into a priming pan when squeezed, igniting the priming powder, causing a flash to travel through the touch hole, also igniting the gunpowder within the barrel, and propelling the bullet out the muzzle. While matchlocks provided a crucial advantage by allowing the user to aim the firearm using both hands, it was also awkward to utilize. To avoid accidentally igniting the gunpowder the match had to be detached while loading the gun. In some instances the match would also go out, so both ends of the match were kept lit. This proved cumbersome to maneuver as both hands were required to hold the match during removal, one end in each hand. The procedure was so complex that a 1607 drill manual published by Jacob de Gheyn in the Netherlands listed 28 steps just to fire and load the gun. In 1584 the Ming General Qi Jiguang composed an 11-step song to practice the procedure in rhythm: "One, clean the gun. Two, pour the powder. Three, tamp the powder down. Four, drop the pellet. Five, drive the pellet down. Six, put in paper (stopper). Seven, drive the paper down. Eight, open the flashpan cover. Nine, pour in the flash powder. Ten, close the flashpan, and clamp the fuse. Eleven, listen for the signal, then open the flashpan cover. Aiming at the enemy, raise your gun and fire." Reloading a gun during the 16th century took anywhere from 20 seconds to a minute under the most ideal conditions. The development of volley fire—by the Ottomans, the Chinese, the Japanese, and the Dutch—made the arquebus more feasible for widespread adoption by militaries. The volley fire technique transformed soldiers carrying firearms into organized firing squads with each row of soldiers firing in turn and reloading in a systematic fashion. Volley fire was implemented with cannons as early as 1388 by Ming artillerists, but volley fire with matchlocks was not implemented until 1526 when the Ottoman Janissaries utilized it during the Battle of Mohács. The matchlock volley fire technique was next seen in mid-16th-century China as pioneered by Qi Jiguang and in late-16th-century Japan. Qi Jiguang elaborates on his volley fire technique in the Jixiao Xinshu: In Europe, William Louis, Count of Nassau-Dillenburg theorized that by applying to firearms the same Roman counter march technique as described by Aelianus Tacticus, matchlocks could provide fire without cease. In a letter to his cousin Maurice of Nassau, Prince of Orange, on 8 December 1594, he wrote: Once volley firing had been developed, the rate of fire and efficiency was greatly increased and the arquebus went from being a support weapon to the primary focus of most early modern armies. The wheellock mechanism was utilized as an alternative to the matchlock as early as 1505, but was more expensive to produce at three times the cost of a matchlock and prone to breakdown, thus limiting it primarily to specialist firearms and pistols. The snaphance flintlock was invented by the mid-16th century and then the "true" flintlock in the early 17th century, but by this time the generic term for firearms had shifted to musket, and flintlocks are not usually associated with arquebuses. History Origins The earliest known examples of an "arquebus" date back to 1411 in Europe and no later than 1425 in the Ottoman Empire. This early firearm was a hand cannon, whose roots trace back to China, with a serpentine lever to hold matches. 
However it did not have the matchlock mechanism traditionally associated with the arquebus. The exact dating of the matchlock addition is disputed. The first references to the use of what may have been arquebuses (tüfek) by the Janissary corps of the Ottoman army date them from 1394 to 1465. However, it is unclear whether these were arquebuses or small cannons as late as 1444, but according to Gábor Ágoston the fact that they were listed separately from cannons in mid-15th century inventories suggest they were handheld firearms. In Europe, a shoulder stock, probably inspired by the crossbow stock, was added to the arquebus around 1470 and the appearance of the matchlock mechanism is dated to a little before 1475. The matchlock arquebus was the first firearm equipped with a trigger mechanism. It is also considered to be the first portable shoulder-arms firearm. Ottomans The Ottomans made use of arquebuses as early as the first half of the fifteenth century. During the Ottoman–Hungarian wars of 1443–1444, it was noted that Ottoman defenders in Vidin had arquebuses. Based on the earliest known contemporary written sources, Godfrey Goodwin dates the first use of the arquebus by the Janissaries to no earlier than 1465. According to contemporary accounts, 400 arquebusiers served in Sultan Murad II's campaign in the 1440s when he crossed Bosporus straits and arquebuses were used in combat by the Ottomans at the second battle of Kosovo in 1448. Ottomans also made some use of Wagon Fortresses which they copied from the Hussites, which often involved the placing of arquebusiers in the protective wagons and using them against the enemy. Arquebusiers were also used effectively at the battle of Bashkent in 1473 when they were used in conjunction with artillery. Europe The arquebus was used in substantial numbers for the first time in Europe during the reign of King Matthias Corvinus of Hungary (r. 1458–1490). One in four soldiers in the infantry of the Black Army of Hungary wielded an arquebus, and one in five when accounting for the whole army, which was an unusually high proportion at the time. Although they were present on the battlefield King Mathias preferred enlisting shielded men instead due to the arquebus's low rate of fire. While the Black Army adopted arquebuses relatively early, the trend did not catch on for decades in Europe and by the turn of the 16th century only around 10% of Western European infantrymen used firearms. Arquebuses were used as early as 1472 by the Portuguese at Zamora. Likewise, the Castilians used arquebuses as well in 1476. The French started adopting the arquebus in 1520. However, arquebus designs continued to develop and in 1496 Philip Monch of the Palatinate composed an illustrated Buch der Strynt un(d) Buchsse(n) on guns and "harquebuses". The effectiveness of the arquebus was apparent by the Battle of Cerignola of 1503, which is the earliest-recorded military conflict where arquebuses played a decisive role in the outcome of the battle. In Russia, a small arquebus called pishchal () appeared in 1478 in Pskov. The Russian arquebusiers, or pishchal'niki, were seen as integral parts of the army and one thousand pishchal'niki participated in the final annexation of Pskov in 1510 as well as the conquest of Smolensk in 1512. The Russian need to acquire gunpowder weaponry bears some resemblance to the situation the Iranians were in. In 1545 two thousand pishchal'niki (one thousand on horseback) were levied by the towns and outfitted at treasury expense. 
Their use of mounted troops was also unique to the time period. The pishchal'niki eventually became skilled hereditary tradesmen farmers rather than conscripts. Arquebuses were used in the Italian Wars in the first half of the 16th century. Frederick Lewis Taylor claims that a kneeling volley fire may have been employed by Prospero Colonna's arquebusiers as early as the Battle of Bicocca (1522). However, this has been called into question by Tonio Andrade who believes this is an overinterpretation as well as a mis-citation of a passage by Charles Oman suggesting that the Spanish arquebusiers knelt to reload, when in fact Oman never made such a claim. This is contested by Idan Sherer, who quotes Paolo Giovio saying that the arquebusiers kneeled to reload so that the second line of arquebusiers could fire without endangering those in front of them. Mamluks The Mamluks in particular were conservatively against the incorporation of gunpowder weapons. When faced with cannons and arquebuses wielded by the Ottomans they criticized them thus, "God curse the man who invented them, and God curse the man who fires on Muslims with them." Insults were also levied against the Ottomans for having "brought with you this contrivance artfully devised by the Christians of Europe when they were incapable of meeting the Muslim armies on the battlefield". Similarly, musketeers and musket-wielding infantrymen were despised in society by the feudal knights, even until the time of Miguel de Cervantes (1547–1616). Eventually the Mamluks under Qaitbay were ordered in 1489 to train in the use of al-bunduq al-rasas (arquebuses). However, in 1514 an Ottoman army of 12,000 soldiers wielding arquebuses devastated a much larger Mamluk army. The arquebus had become a common infantry weapon by the 16th century due to its relative cheapness—a helmet, breastplate and pike cost about three and a quarter ducats while an arquebus only a little over one ducat. Another advantage of arquebuses over other equipment and weapons was its short training period. While a bow potentially took years to master, an effective arquebusier could be trained in just two weeks. Asia The arquebus spread further east, reaching India by 1500, Southeast Asia by 1540, and China sometime between 1523 and 1548. They were introduced to Japan in 1543 by Portuguese traders who landed by accident on Tanegashima (種子島), an island south of Kyūshū in the region controlled by the Shimazu clan. By 1550, arquebuses known as tanegashima, teppō (鉄砲) or hinawaju (火縄銃) were being produced in large numbers in Japan. The tanegashima seem to have utilized snap matchlocks based on firearms from Goa, India, which was captured by the Portuguese in 1510. Within ten years of its introduction upwards of three hundred thousand tanegashima were reported to have been manufactured. The tanegashima eventually became one of the most important weapons in Japan. Oda Nobunaga revolutionized musket tactics in Japan by splitting loaders and shooters and assigning three guns to a shooter at the Battle of Nagashino in 1575, during which volley fire may have been implemented. However, the volley fire technique of 1575 has been called into dispute in recent years by J. S. A. Elisonas and J. P. Lamers in their translation of The Chronicle of Oda Nobunaga by Ota Gyuichi. In Lamers' Japonius he says that "whether or not Nobunaga actually operated with three rotating ranks cannot be determined on the basis of reliable evidence." 
They claim that the version of events describing volley fire was written several years after the battle, and an earlier account says to the contrary that guns were fired en masse. Even so, both Korean and Chinese sources note that Japanese gunners were making use of volley fire during the Japanese invasions of Korea from 1592 to 1598. Iran Regarding Iranian use of the arquebus, much of the credit for their increase in use can be attributed to Shah Ismail I who, after being defeated by the firearm-using Ottomans in 1514, began extensive use of arquebuses and other firearms himself with an estimated 12,000 arquebusiers in service less than 10 years after his initial defeat by the Ottomans. According to a 1571 report by Vincentio d'Alessandri, Persian arms including arquebuses "were superior and better tempered than those of any other nation", suggesting that such firearms were in common use among middle eastern powers by at least the mid-16th century. While the use of 12,000 arquebusiers is impressive, the firearms were not widely adopted in Iran. This is in no small part due to the reliance on light cavalry by the Iranians. Riding a horse and operating an arquebus are incredibly difficult which helped lead to both limited use and heavy stagnation in the technology associated with firearms. These limitations aside, the Iranians still made use of firearms and Europe was very important in facilitating that as Europeans supplied Iran with firearms and sent experts to help them produce some of the firearms themselves. Iran also made use of elephant mounted arquebusiers which would give them a clear view of their targets and better mobility. Southeast Asia Southeast Asian powers started fielding arquebuses by 1540. Đại Việt was considered by the Ming to have produced particularly advanced matchlocks during the 16–17th century, surpassing even Ottoman, Japanese, and European firearms. European observers of the Lê–Mạc War and later Trịnh–Nguyễn War also noted the proficiency of matchlock making by the Vietnamese. The Vietnamese matchlock was said to have been able to pierce several layers of iron armour, kill two to five men in one shot, yet also fire quietly for a weapon of its caliber. China The arquebus was introduced to the Ming dynasty in the early 16th century and was used in small numbers to fight off pirates by 1548. There is, however, no exact date for its introduction and sources conflict on the time and manner in which it was introduced. Versions of the arquebus' introduction to China include the capture of firearms by the Ming during a battle in 1523, the capture of the pirate Wang Zhi, who had arquebuses, in 1558, which contradicts the usage of arquebuses by the Ming army ten years earlier, and the capture of arquebuses from Europeans by the Xu brother pirates, which later came into possession of a man named Bald Li, from whom the Ming officials captured the arquebuses. About 10,000 muskets were ordered by the Central Military Weaponry Bureau in 1558 and the firearms were used to fight off pirates. Qi Jiguang developed military formations for the effective use of arquebus equipped troops with different mixtures of troops deployed in 12-man teams. The number of arquebuses assigned to each team could vary depending on the context but theoretically in certain cases all members of the team could have been deployed as gunners. These formations also made use of countermarch volley fire techniques. Firearm platoons deployed one team in front of them at the blast of a bamboo flute. 
They started firing after their leader fired and fired once at the blast of a trumpet, and then spread out according to their drilling pattern. Each layer could also fire once at the blowing of a horn and were supported by close-quarters troops who could advance should the need arise. To avoid self-inflicted injuries and ensure a consistent rate of fire in the heat of battle, Qi emphasized drilling in the procedure required to reload the weapon. Qi Jiguang gave a eulogy on the effectiveness of the gun in 1560: European arquebus formations In Europe, Maurice of Nassau pioneered the countermarch volley fire technique. After outfitting his entire army with new, standardized arms in 1599, Maurice of Nassau attempted to recapture Spanish forts built on former Dutch lands. In the Battle of Nieuwpoort in 1600, he administered the new techniques and technologies for the first time. The Dutch marched onto the beach where the fort was located and fully utilized the countermarching tactic. By orienting all of his arquebusiers into a block, he was able to maintain a steady stream of fire out of a disciplined formation using volley fire tactics. The result was a lopsided victory with 4,000 Spanish casualties to only 1,000 dead and 700 wounded on the Dutch side. Although the battle was principally won by the decisive counterattack of the Dutch cavalry and despite the failure of the new Dutch infantry tactic in stopping the veteran Spanish tercios, the battle is considered a decisive step forward in the development early modern warfare, where firearms took on an increasingly large role in Europe in the following centuries. "Musket" eventually overtook "arquebus" as the dominant term for similar firearms starting from the 1550s. Arquebuses are most often associated with matchlocks. Use with other weapons The arquebus had many advantages but also severe limitations on the battlefield. This led to it often being paired up with other weaponry to mitigate these weaknesses. Qi Jiguang from China developed systems where soldiers with traditional weaponry stayed right behind the arquebusiers to protect them should enemy infantry get too close. Pikemen were used to protect the arquebusiers by the English and the Venetians often used archers to lay down cover fire during the long reloading process. The Ottomans often supported their arquebusiers with artillery fire or placed them in fortified wagons, a tactic they borrowed from the Hussites. Comparison to bows Sixteenth-century military writer John Smythe thought that an arquebus could not match the accuracy of a bow in the hands of a highly skilled archer; other military writers such as Humfrey Barwick and Barnabe Rich argued the opposite. An arquebus angled at 35 degrees could throw a bullet up to or more. An arquebus shot was considered deadly at up to 400 yards (360 m) while the heavier Spanish musket was considered deadly at up to 600 yards (550 m). During the Japanese Invasions of Korea, Korean officials said they were at a severe disadvantage against Japanese troops because their arquebuses "could reach beyond several hundred paces". In 1590, Smythe noted that arquebusiers and musketeers firing at such extreme distances rarely seemed to hit anything and instead decided to argue effective range, claiming that English archers like the ones from the Hundred Years' War would be more effective at 200–240 yards (180–220 m) than arquebusiers or musketeers, but by that point there were no longer enough skilled archers in England to properly test his theories. 
Perhaps the most important advantage of the arquebus over muscle-powered weapons like longbows was sheer power. A shot from a typical 16th-century arquebus boasted between of kinetic energy, depending on the powder quality. A longbow arrow by contrast was about , while crossbows could vary from depending on construction. Thus, arquebuses could easily defeat armor that would be highly effective against arrows or bolts, and inflict far greater wounds on flesh. The disparity was even greater with a 16th-century heavy musket, which were . Most high-skilled bowmen achieved a far higher rate of shot than the matchlock arquebus, which took 30–60 seconds to reload properly. The arquebus did, however, have a faster rate of fire than the most powerful crossbow, a shorter learning curve than a longbow, and was more powerful than either. The arquebus did not rely on the physical strength of the user for propulsion of the projectile, making it easier to find a suitable recruit. It also meant that, compared to an archer or crossbowman, an arquebusier lost less of his battlefield effectiveness due to fatigue, malnutrition, or sickness. The arquebusier also had the added advantage of frightening enemies (and horses) with the noise. Wind could reduce the accuracy of archery, but had much less of an effect on an arquebus. During a siege, it was also easier to fire an arquebus out of loopholes than it was a bow and arrow. It was sometimes advocated that an arquebusier should load his weapon with multiple bullets or small shot at close ranges rather than a single ball. Small shot did not pack the same punch as a single round ball but the shot could hit and wound multiple enemies. An arquebus also has superior penetrating power to a bow or crossbow. Although some plate armors were bulletproof, these armors were unique, heavy, and expensive. A cuirass with a tapul was able to absorb some musket fire due to being angled. Otherwise, most forms of armor a common soldier would wear (especially cloth, light plate, and mail) had little resistance against musket fire. Arrows, however, were relatively weaker in penetration, and heavier than bows or crossbows that required more skill and reload time than the standard bows. Producing an effective arquebusier required much less training than producing an effective bowman. Most archers spent their whole lives training to shoot with accuracy, but with drill and instruction, the arquebusier was able to learn his profession in months as opposed to years. This low level of skill made it a lot easier to outfit an army in a short amount of time as well as expand the small arms ranks. This idea of lower-skilled, lightly armoured units was the driving force in the infantry revolution that took place in the 16th and 17th centuries and allowed early modern infantries to phase out the longbow. An arquebusier could carry more ammunition and powder than a crossbowman or longbowman could with bolts or arrows. Once the methods were developed, powder and shot were relatively easy to mass-produce, while arrow making was a genuine craft requiring highly skilled labor. However, the arquebus was more sensitive to rain, wind, and humid weather. At the Battle of Villalar, rebel troops experienced a significant defeat partially due to having a high proportion of arquebusiers in a rainstorm which rendered the weapons useless. Gunpowder also ages much faster than a bolt or an arrow, particularly if improperly stored. 
Also, the resources needed to make gunpowder were less universally available than the resources needed to make bolts and arrows. Finding and reusing arrows or bolts was a lot easier than doing the same with arquebus bullets. This was a useful way to reduce the cost of practice or resupply oneself if control of the battlefield after a battle was retained. A bullet must fit a barrel much more precisely than an arrow or bolt must fit a bow or crossbow, so the arquebus required more standardization and this made it harder to resupply by looting bodies of fallen soldiers. Gunpowder production was also far more dangerous than arrow or bolt production. An arquebus was also significantly more dangerous to its user. The arquebusier carries a lot of gunpowder on his person and has a lit match in one hand. The same goes for the soldiers next to him. Amid the confusion, stress and fumbling of a battle, arquebusiers are potentially a danger to themselves. Early arquebuses tended to have a drastic recoil. They took a long time to load making them vulnerable while reloading unless using the 'continuous fire' tactic, where one line would shoot and, while the next line shot, would reload. They also tended to overheat. During repeated firing, guns could become clogged and explode, which could be dangerous to the gunner and those around him. Furthermore, the amount of smoke produced by black-powder weapons was considerable, making it hard to see the enemy after a few salvos, unless there was enough wind to disperse the smoke quickly. (Conversely, this cloud of smoke also served to make it difficult for any archers to target the opposing soldiers who were using firearms.) Before the wheellock, the need for a lit match made stealth and concealment nearly impossible, particularly at night. Even with successful concealment, the smoke emitted by a single arquebus shot would make it quite obvious where the shot came from, at least in daylight. While with a bow or crossbow a soldier could conceivably kill silently, this was of course impossible with an explosion-driven projectile weapon, such as the arquebus. The noise of arquebuses and the ringing in the ears that it caused could also make it hard to hear shouted commands. In the long run, the weapon could make the user permanently hard of hearing. Though bows and crossbows could shoot over obstacles by firing with high-arcing ballistic trajectories they could not do so very accurately or effectively. Sir John Smythe blamed the declining effectiveness of the longbow in part on English commanders who would place firearms at the front of their formations and bowmen at the back, where they could not see their targets and aim appropriately. Cultural references Arquebuse de L'Hermitage, a clear spirit made by macerating and distilling a large variety of plants, was supposedly invented in 1857 by a herbalist of the Marist Brothers in the Hermitage Monastery in Saint-Genis-Laval, France although other sources assert it was produced in France and Piedmont since the 18th century. Its name has been ascribed to the sensation of drinking it and to its use in treating the wounded. It remains in production by various companies and is drunk as a digestif.
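The reloading and volley-fire figures given earlier in this article invite a quick back-of-the-envelope calculation. The Python sketch below estimates how many rotating ranks are needed to keep up continuous fire for the 20–60 second reload times cited above; the five-second interval between successive volleys is an assumed illustrative figure, and the time a rank spends countermarching to the rear is ignored.

```python
import math

def ranks_for_continuous_fire(reload_seconds, volley_interval_seconds):
    # Minimum number of rotating ranks needed so that a rank which has just
    # fired is reloaded by the time its turn comes around again.
    # (Time spent countermarching to the rear is ignored.)
    return math.ceil(reload_seconds / volley_interval_seconds)

# Reload times of roughly 20-60 seconds are cited in the article above; the
# 5-second interval between successive volleys is purely an assumed figure.
for reload_s in (20, 30, 60):
    print(f"{reload_s} s reload -> {ranks_for_continuous_fire(reload_s, 5)} ranks")
```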
Technology
Firearms
null
148414
https://en.wikipedia.org/wiki/Sink
Sink
A sink (also known as a basin in the UK) is a bowl-shaped plumbing fixture for washing hands, dishwashing, and other purposes. Sinks have a tap (faucet) that supplies hot and cold water and may include a spray feature to be used for faster rinsing. They also include a drain to remove used water; this drain may itself include a strainer and/or shut-off device and an overflow-prevention device. Sinks may also have an integrated soap dispenser. Many sinks, especially in kitchens, are installed adjacent to or inside a counter. When a sink becomes clogged, a person will often resort to using a chemical drain cleaner or a plunger, though most professional plumbers will remove the clog with a drain auger (often called a "plumber's snake"). History United States The washstand was a bathroom sink made in the United States in the late 18th century. The washstands were small tables on which were placed a pitcher and a deep bowl, following the English tradition. Sometimes the table had a hole where the large bowl rested, which led to the making of dry sinks. From about 1820 to 1900, the dry sink evolved by the addition of a wooden cabinet with a trough built on the top, lined with zinc or lead. This is where the bowls or buckets for water were kept. Splashboards were sometimes added to the back wall, as well as shelves and drawers, with the more elaborate designs usually placed in the kitchen. Materials Sinks are made of many different materials, including ceramic, concrete, copper, enamel over steel or cast iron, glass, granite, marble, nickel, plastic, polyester, porcelain, soapstone, stainless steel, stone, terrazzo, and wood. Stainless steel is most commonly used in kitchens and commercial applications because it represents a good trade-off between cost, usability, durability, and ease of cleaning. Most stainless steel sinks are made by drawing a sheet of stainless steel over a die. Some very deep sinks are fabricated by welding. Stainless steel sinks will not be damaged by hot or cold objects and resist damage from impacts. Stainless steel also resists rust and corrosion well, although rust stains can still appear on a stainless steel sink. One disadvantage of stainless steel sinks is that, being made of thin metal, they tend to be noisier than most other sink materials, although better sinks apply a heavy coating of vibration-damping material to the underside of the sink. Enamel over cast iron is a popular material for kitchen and bathroom sinks. Heavy and durable, these sinks can also be manufactured in a very wide range of shapes and colors. Like stainless steel, they are very resistant to hot or cold objects, but they can be damaged by sharp impacts, and once the glass surface is breached, the underlying cast iron will often corrode, spalling off more of the glass. Aggressive cleaning will dull the surface, leading to more dirt accumulation. Enamel over steel is a similar-appearing but far less rugged and less cost-effective alternative. Solid ceramic sinks have many of the same characteristics as enamel over cast iron, but without the risk of surface damage leading to corrosion. Plastic sinks come in several basic forms: Inexpensive sinks are simply made using injection-molded thermoplastics. These are often deep, free-standing sinks used in laundry rooms. Subject to damage by hot or sharp objects, the principal virtue of these sinks is their low cost.
High-end acrylic drop-in (lowered into the countertop) and undermount (attached from the bottom) sinks are becoming more popular, although they tend to be easily damaged by hard objects – like scouring a cast iron frying pan in the sink. Plastic sinks may also be made from the same materials used to form "solid surface" countertops. These sinks are durable, attractive, and can often be molded with an integrated countertop or joined to a separate countertop in a seamless fashion, leading to no sink-to-countertop joint or a very smooth sink-to-countertop joint that can not trap dirt or germs. These sinks are subject to damage by hot objects but damaged areas can sometimes be sanded down to expose undamaged material. Soapstone sinks were once common, but today tend to be used only in very-high-end applications or applications that must resist caustic chemicals that would damage more-conventional sinks. Wood sinks are from the early days of sinks, and baths were made from natural teak with no additional finishing. Teak is chosen because of its natural waterproofing properties – it has been used for hundreds of years in the marine industry for this reason. Teak also has natural antiseptic properties, which is a bonus for its use in baths and sinks. Glass sinks: A current trend in bathroom design is the handmade glass sink (often referred to as a vessel sink), which has become fashionable for wealthy homeowners. Stone sinks have been used for ages. Some of the more popular stones used are: marble, travertine, onyx, granite, and soap stone on high end sinks. Glass, concrete, and terrazzo sinks are usually designed for their aesthetic appeal and can be obtained in a wide variety of unusual shapes and colors such as floral shapes. Concrete and terrazzo are occasionally also used in very-heavy-duty applications such as janitorial sinks. Styles Top-mount sinks Self-rimming (top-mount) sinks sit in appropriately shaped holes roughly cut in the countertop (or substrate material) using a jigsaw or other cutter appropriate to the material at hand. They are suspended by their rim which forms a fairly close seal with the top surface of the worktop. If necessary, this seal can be enhanced by clamping the sink from below the worktop. Bottom-mount sinks Bottom-mount or under-mount sinks are installed below the countertop surface. The edge of the countertop material is exposed at the hole created for the sink (and so must be a carefully finished edge rather than a rough cut). The sink is then clamped to the bottom of the material from below. Especially for bottom-mount sinks, silicone-based sealants are usually used to assure a waterproof joint between the sink and the countertop material. Advantages of an undermount sink include superior ergonomics and a contemporary look; disadvantages include extra cost in both the sink and the counter top. Also, no matter how carefully the cut out is made, the result is either a small ledge or overhang at the interface with the sink. This can create an environment for catching dirt and allowing germs to grow. Solid-surface plastic materials allow sinks to be made of the same plastic material as the countertop. These sinks can then easily be glued to the underside of the countertop material and the joint sanded flat, creating the usual invisible joint and completely eliminating any dirt-catching seam between the sink and the countertop. 
In a similar fashion, for stainless steel, a sink may be welded into the countertop; the joint is then ground to create a finished, concealed appearance. Butler's sink A butler's sink is a rectangular ceramic sink with a rounded rim which is set into a work surface. There are generally two kinds of butler's sinks: the London sink and the Belfast sink. In 2006, both types of sinks usually were across and front-to-back, with a depth of . London sinks were originally shallower than Belfast sinks. (One plumbing guide in 1921 suggested that the Belfast sink was deep.) Some believe this was because London had less access to fresh water (and thus a greater need to conserve water), but this theory is now contested. It is more likely the two sinks had different roles within the household. But that difference usually does not exist in the modern era, and both sinks are now shallow. The primary difference both in the past and today between a Belfast and London sink is that the Belfast sink is fitted with an overflow weir which prevented water from spilling over the sink's edge by draining it away and down into the wastewater plumbing. Farmer's sink A farmer's sink is a deep sink that has a finished front. Set onto a countertop, the finished front of the sink remains exposed. This style of sink requires very little "reach-over" to access the sink. Vessel sink A vessel sink is a free-standing sink, generally finished and decorated on all sides, that sits directly on the surface of the furniture on which it is mounted. These sinks have become increasingly popular with bathroom designers because of the large range of materials, styles, and finishes that can be shown to good advantage. Food catering sinks Catering sinks are often made in sizes compatible with standard size Gastronorm containers, a European standard for food containers. Ceramic basin construction Pottery is made by a blend of clays, fillers and fluxes being fused together during the firing process. There are high fire clays and glazes which are heated to over 1200 °C (2200 °F) and are extremely resistant to fading, staining, burning, scratching and acid attack. Low fire clays, fired below 1200 °C, most often used by large commercial manufacturers and third world producers, while durable, are susceptible to scratching and wear over time. The clay body is first bisqued to about 1000 °C (1900 °F). In the second firing a white or coloured glaze is applied and is melted by heat which chemically and physically fuses the glass (glaze) to the clay body during the same firing process. Due to the firing process and natural clays used, it is normal for the product to vary in size and shape, and +/− 5 mm is normal. Accessories Some public restrooms feature automatic faucets, which use a motion-sensing valve to detect the user's hands moving beneath the tap and turn the water on. Some kitchen sinks also come equipped with a sink sprayer. Sinks, especially those made of stainless steel, can be fitted with an integrated drainboard, allowing for the draining of washed dishes. Gallery There are many different shapes and sizes of sinks.
Technology
Household appliances
null
148417
https://en.wikipedia.org/wiki/Stored-program%20computer
Stored-program computer
A stored-program computer is a computer that stores program instructions in electronically, electromagnetically, or optically accessible memory. This contrasts with systems that stored the program instructions with plugboards or similar mechanisms. The definition is often extended with the requirement that the treatment of programs and data in memory be interchangeable or uniform. Description In principle, stored-program computers have been designed with various architectural characteristics. A computer with a von Neumann architecture stores program instructions and data in the same memory, while a computer with a Harvard architecture has separate memories for the program and for data. However, the term stored-program computer is sometimes used as a synonym for the von Neumann architecture. Jack Copeland considers that it is "historically inappropriate, to refer to electronic stored-program digital computers as 'von Neumann machines'". Hennessy and Patterson wrote that the early Harvard machines were regarded as "reactionary by the advocates of stored-program computers". History The concept of the stored-program computer can be traced back to the universal Turing machine, a theoretical construct proposed in 1936. Von Neumann was aware of this paper, and he impressed it on his collaborators. Many early computers, such as the Atanasoff–Berry computer, were not reprogrammable. They executed a single hardwired program. As there were no program instructions, no program storage was necessary. Other computers, though programmable, stored their programs on punched tape, which was physically fed into the system as needed, as was the case for the Zuse Z3 and the Harvard Mark I, or were only programmable by physical manipulation of switches and plugs, as was the case for the Colossus computer. In 1936, Konrad Zuse anticipated in two patent applications that machine instructions could be stored in the same storage used for data. The Manchester Baby, built at the University of Manchester, is generally recognized as the world's first electronic computer to run a stored program, which it did on 21 June 1948. However, the Baby was not regarded as a full-fledged computer, but rather as a proof-of-concept predecessor to the Manchester Mark 1 computer, which was first put to research work in April 1949. On 6 May 1949 the EDSAC in Cambridge ran its first program, making it another electronic digital stored-program computer. It is sometimes claimed that the IBM SSEC, operational in January 1948, was the first stored-program computer; this claim is controversial, not least because of the hierarchical memory system of the SSEC, and because some aspects of its operations, like access to relays or tape drives, were determined by plugging. The first stored-program computer to be built in continental Europe was the MESM, completed in the Soviet Union in 1950. The first stored-program computers Several computers could be considered the first stored-program computer, depending on the criteria. The IBM SSEC was designed in late 1944 and became operational in January 1948, but it was electromechanical. In April 1948, modifications were completed to ENIAC to function as a stored-program computer, with the program stored by setting dials in its function tables, which could store 3,600 decimal digits for instructions. It ran its first stored program on 12 April 1948 and its first production program on 17 April. This claim is disputed by some computer historians.
ARC2, a relay machine developed by Andrew Booth and Kathleen Booth at Birkbeck, University of London, officially came online on 12 May 1948. It featured the first rotating drum storage device. Manchester Baby, a developmental, fully electronic computer that successfully ran a stored program on 21 June 1948. It was subsequently developed into the Manchester Mark 1, which ran its first program in early April 1949. Electronic Delay Storage Automatic Calculator, EDSAC, which ran its first programs on 6 May 1949, and became a full-scale operational computer that served a user community beyond its developers. EDVAC, conceived in June 1945 in First Draft of a Report on the EDVAC, but not delivered until August 1949. It began actual operation (on a limited basis) in 1951. BINAC, delivered to a customer on 22 August 1949. It worked at the factory but there is disagreement about whether or not it worked satisfactorily after being delivered. If it had been finished at the projected time, it would have been the first stored-program computer in the world. It was the first stored-program computer in the U.S. In 1951, the Ferranti Mark 1, a cleaned-up version of the Manchester Mark 1, became the first commercially available electronic digital computer. The Bull Gamma 3 (1952) and IBM 650 (1953) were the first mass produced commercial computers, respectively selling about 1200 and 2000 units. Manchester University Transistor Computer, is generally regarded as the first transistor-based stored-program computer having become operational in November 1953. Telecommunication The concept of using a stored-program computer for switching of telecommunication circuits is called stored program control (SPC). It was instrumental to the development of the first electronic switching systems by American Telephone and Telegraph (AT&T) in the Bell System, a development that started in earnest by c. 1954 with initial concept designs by Erna Schneider Hoover at Bell Labs. The first of such systems was installed on a trial basis in Morris, Illinois in 1960. The storage medium for the program instructions was the flying-spot store, a photographic plate read by an optical scanner that had a speed of about one microsecond access time. For temporary data, the system used a barrier-grid electrostatic storage tube.
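The defining idea above, that instructions and data occupy the same addressable memory, can be made concrete with a small sketch. The toy machine below is hypothetical: its four-instruction set is invented for illustration and does not model the Baby, EDSAC, or any other machine discussed in this article. It only shows a program and its data sharing one memory array, which is also why such a machine could in principle modify its own instructions.

```python
# A toy stored-program machine: instructions and data share one memory list,
# as in a von Neumann design. The four-instruction set (LOAD, ADD, STORE,
# HALT) is invented for illustration and models no real historical machine.
def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == "LOAD":
            acc = memory[arg]            # read a data cell
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc            # write back into the same memory
        elif op == "HALT":
            return memory

# The program occupies cells 0-7 and its data occupy cells 8-9 of the very
# same memory; nothing prevents a program from storing into its own cells.
mem = ["LOAD", 8, "ADD", 9, "STORE", 9, "HALT", 0, 5, 10]
print(run(mem)[9])                       # prints 15 (5 + 10)
```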
Technology
Computer hardware
null
148420
https://en.wikipedia.org/wiki/Euler%20characteristic
Euler characteristic
In mathematics, and more specifically in algebraic topology and polyhedral combinatorics, the Euler characteristic (or Euler number, or Euler–Poincaré characteristic) is a topological invariant, a number that describes a topological space's shape or structure regardless of the way it is bent. It is commonly denoted by (Greek lower-case letter chi). The Euler characteristic was originally defined for polyhedra and used to prove various theorems about them, including the classification of the Platonic solids. It was stated for Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. Leonhard Euler, for whom the concept is named, introduced it for convex polyhedra more generally but failed to rigorously prove that it is an invariant. In modern mathematics, the Euler characteristic arises from homology and, more abstractly, homological algebra. Polyhedra The Euler characteristic was classically defined for the surfaces of polyhedra, according to the formula where , , and are respectively the numbers of vertices (corners), edges and faces in the given polyhedron. Any convex polyhedron's surface has Euler characteristic This equation, stated by Euler in 1758, is known as Euler's polyhedron formula. It corresponds to the Euler characteristic of the sphere (i.e. ), and applies identically to spherical polyhedra. An illustration of the formula on all Platonic polyhedra is given below. The surfaces of nonconvex polyhedra can have various Euler characteristics: For regular polyhedra, Arthur Cayley derived a modified form of Euler's formula using the density , vertex figure density and face density This version holds both for convex polyhedra (where the densities are all 1) and the non-convex Kepler–Poinsot polyhedra. Projective polyhedra all have Euler characteristic 1, like the real projective plane, while the surfaces of toroidal polyhedra all have Euler characteristic 0, like the torus. Plane graphs The Euler characteristic can be defined for connected plane graphs by the same formula as for polyhedral surfaces, where is the number of faces in the graph, including the exterior face. The Euler characteristic of any plane connected graph is 2. This is easily proved by induction on the number of faces determined by , starting with a tree as the base case. For trees, and If has components (disconnected graphs), the same argument by induction on shows that One of the few graph theory papers of Cauchy also proves this result. Via stereographic projection the plane maps to the 2-sphere, such that a connected graph maps to a polygonal decomposition of the sphere, which has Euler characteristic 2. This viewpoint is implicit in Cauchy's proof of Euler's formula given below. Proof of Euler's formula There are many proofs of Euler's formula. One was given by Cauchy in 1811, as follows. It applies to any convex polyhedron, and more generally to any polyhedron whose boundary is topologically equivalent to a sphere and whose faces are topologically equivalent to disks. Remove one face of the polyhedral surface. By pulling the edges of the missing face away from each other, deform all the rest into a planar graph of points and curves, in such a way that the perimeter of the missing face is placed externally, surrounding the graph obtained, as illustrated by the first of the three graphs for the special case of the cube. (The assumption that the polyhedral surface is homeomorphic to the sphere at the beginning is what makes this possible.) 
After this deformation, the regular faces are generally not regular anymore. The number of vertices and edges has remained the same, but the number of faces has been reduced by 1. Therefore, proving Euler's formula for the polyhedron reduces to proving for this deformed, planar object. If there is a face with more than three sides, draw a diagonal—that is, a curve through the face connecting two vertices that are not yet connected. Each new diagonal adds one edge and one face and does not change the number of vertices, so it does not change the quantity (The assumption that all faces are disks is needed here, to show via the Jordan curve theorem that this operation increases the number of faces by one.) Continue adding edges in this manner until all of the faces are triangular. Apply repeatedly either of the following two transformations, maintaining the invariant that the exterior boundary is always a simple cycle: Remove a triangle with only one edge adjacent to the exterior, as illustrated by the second graph. This decreases the number of edges and faces by one each and does not change the number of vertices, so it preserves Remove a triangle with two edges shared by the exterior of the network, as illustrated by the third graph. Each triangle removal removes a vertex, two edges and one face, so it preserves These transformations eventually reduce the planar graph to a single triangle. (Without the simple-cycle invariant, removing a triangle might disconnect the remaining triangles, invalidating the rest of the argument. A valid removal order is an elementary example of a shelling.) At this point the lone triangle has and so that Since each of the two above transformation steps preserved this quantity, we have shown for the deformed, planar object thus demonstrating for the polyhedron. This proves the theorem. For additional proofs, see Eppstein (2013). Multiple proofs, including their flaws and limitations, are used as examples in Proofs and Refutations by Lakatos (1976). Topological definition The polyhedral surfaces discussed above are, in modern language, two-dimensional finite CW-complexes. (When only triangular faces are used, they are two-dimensional finite simplicial complexes.) In general, for any finite CW-complex, the Euler characteristic can be defined as the alternating sum where kn denotes the number of cells of dimension n in the complex. Similarly, for a simplicial complex, the Euler characteristic equals the alternating sum where kn denotes the number of n-simplexes in the complex. Betti number alternative More generally still, for any topological space, we can define the nth Betti number bn as the rank of the n-th singular homology group. The Euler characteristic can then be defined as the alternating sum This quantity is well-defined if the Betti numbers are all finite and if they are zero beyond a certain index n0. For simplicial complexes, this is not the same definition as in the previous paragraph but a homology computation shows that the two definitions will give the same value for . Properties The Euler characteristic behaves well with respect to many basic operations on topological spaces, as follows. Homotopy invariance Homology is a topological invariant, and moreover a homotopy invariant: Two topological spaces that are homotopy equivalent have isomorphic homology groups. It follows that the Euler characteristic is also a homotopy invariant. 
For example, any contractible space (that is, one homotopy equivalent to a point) has trivial homology, meaning that the 0th Betti number is 1 and the others 0. Therefore, its Euler characteristic is 1. This case includes Euclidean space of any dimension, as well as the solid unit ball in any Euclidean space — the one-dimensional interval, the two-dimensional disk, the three-dimensional ball, etc. For another example, any convex polyhedron is homeomorphic to the three-dimensional ball, so its surface is homeomorphic (hence homotopy equivalent) to the two-dimensional sphere, which has Euler characteristic 2. This explains why the surface of a convex polyhedron has Euler characteristic 2. Inclusion–exclusion principle If M and N are any two topological spaces, then the Euler characteristic of their disjoint union is the sum of their Euler characteristics, since homology is additive under disjoint union: More generally, if M and N are subspaces of a larger space X, then so are their union and intersection. In some cases, the Euler characteristic obeys a version of the inclusion–exclusion principle: This is true in the following cases: if M and N are an excisive couple. In particular, if the interiors of M and N inside the union still cover the union. if X is a locally compact space, and one uses Euler characteristics with compact supports, no assumptions on M or N are needed. if X is a stratified space all of whose strata are even-dimensional, the inclusion–exclusion principle holds if M and N are unions of strata. This applies in particular if M and N are subvarieties of a complex algebraic variety. In general, the inclusion–exclusion principle is false. A counterexample is given by taking X to be the real line, M a subset consisting of one point and N the complement of M. Connected sum For two connected closed n-manifolds one can obtain a new connected manifold via the connected sum operation. The Euler characteristic is related by the formula Product property Also, the Euler characteristic of any product space M × N is These addition and multiplication properties are also enjoyed by cardinality of sets. In this way, the Euler characteristic can be viewed as a generalisation of cardinality; see . Covering spaces Similarly, for a k-sheeted covering space one has More generally, for a ramified covering space, the Euler characteristic of the cover can be computed from the above, with a correction factor for the ramification points, which yields the Riemann–Hurwitz formula. Fibration property The product property holds much more generally, for fibrations with certain conditions. If is a fibration with fiber F, with the base B path-connected, and the fibration is orientable over a field K, then the Euler characteristic with coefficients in the field K satisfies the product property: This includes product spaces and covering spaces as special cases, and can be proven by the Serre spectral sequence on homology of a fibration. For fiber bundles, this can also be understood in terms of a transfer map – note that this is a lifting and goes "the wrong way" – whose composition with the projection map is multiplication by the Euler class of the fiber: Examples Surfaces The Euler characteristic can be calculated easily for general surfaces by finding a polygonization of the surface (that is, a description as a CW-complex) and using the above definitions. 
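As a minimal illustration of computing the Euler characteristic from a polygonization, the sketch below evaluates V - E + F for the five Platonic solids, each of which polygonizes the sphere and so should give 2, and for a square-grid polygonization of the torus, which should give 0. The vertex, edge, and face counts used are the standard ones.

```python
# V - E + F for the five Platonic solids (all of which polygonize the sphere,
# so the result should be 2) and for an n x n square grid drawn on the torus
# (which should give 0). The counts used are the standard ones.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (v, e, f) in platonic.items():
    print(f"{name:>12}: chi = {v} - {e} + {f} = {v - e + f}")

def torus_grid_chi(n):
    # n*n vertices, 2*n*n edges, n*n square faces once opposite sides of the
    # big square are identified.
    return n * n - 2 * n * n + n * n

print("square grid on the torus:", torus_grid_chi(4))   # 0
```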
Soccer ball It is common to construct soccer balls by stitching together pentagonal and hexagonal pieces, with three pieces meeting at each vertex (see for example the Adidas Telstar). If pentagons and hexagons are used, then there are  faces,  vertices, and  edges. The Euler characteristic is thus Because the sphere has Euler characteristic 2, it follows that That is, a soccer ball constructed in this way always has 12 pentagons. The number of hexagons can be any nonnegative integer except 1. This result is applicable to fullerenes and Goldberg polyhedra. Arbitrary dimensions The  dimensional sphere has singular homology groups equal to hence has Betti number 1 in dimensions 0 and , and all other Betti numbers are 0. Its Euler characteristic is then that is, either 0 if is odd, or 2 if is even. The  dimensional real projective space is the quotient of the  sphere by the antipodal map. It follows that its Euler characteristic is exactly half that of the corresponding sphere – either 0 or 1. The  dimensional torus is the product space of  circles. Its Euler characteristic is 0, by the product property. More generally, any compact parallelizable manifold, including any compact Lie group, has Euler characteristic 0. The Euler characteristic of any closed odd-dimensional manifold is also 0. The case for orientable examples is a corollary of Poincaré duality. This property applies more generally to any compact stratified space all of whose strata have odd dimension. It also applies to closed odd-dimensional non-orientable manifolds, via the two-to-one orientable double cover. Relations to other invariants The Euler characteristic of a closed orientable surface can be calculated from its genus (the number of tori in a connected sum decomposition of the surface; intuitively, the number of "handles") as The Euler characteristic of a closed non-orientable surface can be calculated from its non-orientable genus (the number of real projective planes in a connected sum decomposition of the surface) as For closed smooth manifolds, the Euler characteristic coincides with the Euler number, i.e., the Euler class of its tangent bundle evaluated on the fundamental class of a manifold. The Euler class, in turn, relates to all other characteristic classes of vector bundles. For closed Riemannian manifolds, the Euler characteristic can also be found by integrating the curvature; see the Gauss–Bonnet theorem for the two-dimensional case and the generalized Gauss–Bonnet theorem for the general case. A discrete analog of the Gauss–Bonnet theorem is Descartes' theorem that the "total defect" of a polyhedron, measured in full circles, is the Euler characteristic of the polyhedron. Hadwiger's theorem characterizes the Euler characteristic as the unique (up to scalar multiplication) translation-invariant, finitely additive, not-necessarily-nonnegative set function defined on finite unions of compact convex sets in that is "homogeneous of degree 0". Generalizations For every combinatorial cell complex, one defines the Euler characteristic as the number of 0-cells, minus the number of 1-cells, plus the number of 2-cells, etc., if this alternating sum is finite. In particular, the Euler characteristic of a finite set is simply its cardinality, and the Euler characteristic of a graph is the number of vertices minus the number of edges. (Olaf Post calls this a "well-known formula".) 
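Picking up the remark that the Euler characteristic of a graph is the number of vertices minus the number of edges, the following sketch computes chi = |V| - |E| for a few small graphs and compares it with b0 - b1, where b0 counts connected components and b1 is the circuit rank (the number of independent cycles); the agreement is simply the standard circuit-rank identity, shown here for illustration.

```python
# chi of a finite graph equals |V| - |E|; by the circuit-rank identity it also
# equals b0 - b1, the number of connected components minus the number of
# independent cycles.
def components(vertices, edges):
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in vertices})

examples = {
    "tree":        ({0, 1, 2, 3}, [(0, 1), (1, 2), (1, 3)]),
    "triangle":    ({0, 1, 2},    [(0, 1), (1, 2), (2, 0)]),
    "theta graph": ({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]),
}
for name, (vs, es) in examples.items():
    chi = len(vs) - len(es)
    b0 = components(vs, es)
    b1 = len(es) - len(vs) + b0             # circuit rank (independent cycles)
    print(f"{name:>11}: chi = {chi}, b0 = {b0}, b1 = {b1}, b0 - b1 = {b0 - b1}")
```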
More generally, one can define the Euler characteristic of any chain complex to be the alternating sum of the ranks of the homology groups of the chain complex, assuming that all these ranks are finite. A version of Euler characteristic used in algebraic geometry is as follows. For any coherent sheaf on a proper scheme , one defines its Euler characteristic to be where is the dimension of the -th sheaf cohomology group of . In this case, the dimensions are all finite by Grothendieck's finiteness theorem. This is an instance of the Euler characteristic of a chain complex, where the chain complex is a finite resolution of by acyclic sheaves. Another generalization of the concept of Euler characteristic on manifolds comes from orbifolds (see Euler characteristic of an orbifold). While every manifold has an integer Euler characteristic, an orbifold can have a fractional Euler characteristic. For example, the teardrop orbifold has Euler characteristic where is a prime number corresponding to the cone angle . The concept of Euler characteristic of the reduced homology of a bounded finite poset is another generalization, important in combinatorics. A poset is "bounded" if it has smallest and largest elements; call them 0 and 1. The Euler characteristic of such a poset is defined as the integer , where is the Möbius function in that poset's incidence algebra. This can be further generalized by defining a rational valued Euler characteristic for certain finite categories, a notion compatible with the Euler characteristics of graphs, orbifolds and posets mentioned above. In this setting, the Euler characteristic of a finite group or monoid is , and the Euler characteristic of a finite groupoid is the sum of , where we picked one representative group for each connected component of the groupoid.
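The chain-complex formulation above can be checked numerically on small examples: for a finite simplicial complex, the Euler characteristic can be computed both as the alternating sum of cell counts and as the alternating sum of Betti numbers obtained from the ranks of the boundary matrices. The sketch below, with boundary matrices written out by hand, does this for a hollow triangle (a circle, chi = 0) and a filled triangle (a disk, chi = 1).

```python
import numpy as np

def betti_and_chi(cells, boundaries):
    # cells: list of cell counts [k0, k1, ...]; boundaries: boundary matrices
    # d1, d2, ... where d_i maps i-cells to (i-1)-cells. Betti numbers are
    # b_i = dim ker d_i - rank d_(i+1) = k_i - rank d_i - rank d_(i+1).
    ranks = [0] + [int(np.linalg.matrix_rank(d)) for d in boundaries] + [0]
    betti = [cells[i] - ranks[i] - ranks[i + 1] for i in range(len(cells))]
    chi_from_cells = sum((-1) ** i * k for i, k in enumerate(cells))
    chi_from_betti = sum((-1) ** i * b for i, b in enumerate(betti))
    return betti, chi_from_cells, chi_from_betti

# Hollow triangle: three vertices, three edges, no 2-cell (a circle, chi = 0).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])
print(betti_and_chi([3, 3], [d1]))           # ([1, 1], 0, 0)

# Filled triangle: add one 2-cell whose boundary is e12 - e02 + e01
# (a disk, chi = 1).
d2 = np.array([[ 1], [-1], [ 1]])
print(betti_and_chi([3, 3, 1], [d1, d2]))    # ([1, 0, 0], 1, 1)
```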
Mathematics
Geometry
null
148550
https://en.wikipedia.org/wiki/Antiferromagnetism
Antiferromagnetism
In materials that exhibit antiferromagnetism, the magnetic moments of atoms or molecules, usually related to the spins of electrons, align in a regular pattern with neighboring spins (on different sublattices) pointing in opposite directions. This is, like ferromagnetism and ferrimagnetism, a manifestation of ordered magnetism. The phenomenon of antiferromagnetism was first introduced by Lev Landau in 1933. Generally, antiferromagnetic order may exist at sufficiently low temperatures, but vanishes at and above the Néel temperature – named after Louis Néel, who had first in the West identified this type of magnetic ordering. Above the Néel temperature, the material is typically paramagnetic. Measurement When no external field is applied, the antiferromagnetic structure corresponds to a vanishing total magnetization. In an external magnetic field, a kind of ferrimagnetic behavior may be displayed in the antiferromagnetic phase, with the absolute value of one of the sublattice magnetizations differing from that of the other sublattice, resulting in a nonzero net magnetization. Although the net magnetization should be zero at a temperature of absolute zero, the effect of spin canting often causes a small net magnetization to develop, as seen for example in hematite. The magnetic susceptibility of an antiferromagnetic material typically shows a maximum at the Néel temperature. In contrast, at the transition between the ferromagnetic to the paramagnetic phases the susceptibility will diverge. In the antiferromagnetic case, a divergence is observed in the staggered susceptibility. Various microscopic (exchange) interactions between the magnetic moments or spins may lead to antiferromagnetic structures. In the simplest case, one may consider an Ising model on a bipartite lattice, e.g. the simple cubic lattice, with couplings between spins at nearest neighbor sites. Depending on the sign of that interaction, ferromagnetic or antiferromagnetic order will result. Geometrical frustration or competing ferro- and antiferromagnetic interactions may lead to different and, perhaps, more complicated magnetic structures. The relationship between magnetization and the magnetizing field is non-linear like in ferromagnetic materials. This fact is due to the contribution of the hysteresis loop, which for ferromagnetic materials involves a residual magnetization. Antiferromagnetic materials Antiferromagnetic structures were first shown through neutron diffraction of transition metal oxides such as nickel, iron, and manganese oxides. The experiments, performed by Clifford Shull, gave the first results showing that magnetic dipoles could be oriented in an antiferromagnetic structure. Antiferromagnetic materials occur commonly among transition metal compounds, especially oxides. Examples include hematite, metals such as chromium, alloys such as iron manganese (FeMn), and oxides such as nickel oxide (NiO). There are also numerous examples among high nuclearity metal clusters. Organic molecules can also exhibit antiferromagnetic coupling under rare circumstances, as seen in radicals such as 5-dehydro-m-xylylene. Antiferromagnets can couple to ferromagnets, for instance, through a mechanism known as exchange bias, in which the ferromagnetic film is either grown upon the antiferromagnet or annealed in an aligning magnetic field, causing the surface atoms of the ferromagnet to align with the surface atoms of the antiferromagnet. 
This provides the ability to "pin" the orientation of a ferromagnetic film, which provides one of the main uses in so-called spin valves, which are the basis of magnetic sensors including modern hard disk drive read heads. The temperature at or above which an antiferromagnetic layer loses its ability to "pin" the magnetization direction of an adjacent ferromagnetic layer is called the blocking temperature of that layer and is usually lower than the Néel temperature. Geometric frustration Unlike ferromagnetism, anti-ferromagnetic interactions can lead to multiple optimal states (ground states—states of minimal energy). In one dimension, the anti-ferromagnetic ground state is an alternating series of spins: up, down, up, down, etc. Yet in two dimensions, multiple ground states can occur. Consider an equilateral triangle with three spins, one on each vertex. If each spin can take on only two values (up or down), there are 23 = 8 possible states of the system, six of which are ground states. The two situations which are not ground states are when all three spins are up or are all down. In any of the other six states, there will be two favorable interactions and one unfavorable one. This illustrates frustration: the inability of the system to find a single ground state. This type of magnetic behavior has been found in minerals that have a crystal stacking structure such as a Kagome lattice or hexagonal lattice. Other properties Synthetic antiferromagnets (often abbreviated by SAF) are artificial antiferromagnets consisting of two or more thin ferromagnetic layers separated by a nonmagnetic layer. Dipole coupling of the ferromagnetic layers results in antiparallel alignment of the magnetization of the ferromagnets. Antiferromagnetism plays a crucial role in giant magnetoresistance, as had been discovered in 1988 by the Nobel Prize winners Albert Fert and Peter Grünberg (awarded in 2007) using synthetic antiferromagnets. There are also examples of disordered materials (such as iron phosphate glasses) that become antiferromagnetic below their Néel temperature. These disordered networks 'frustrate' the antiparallelism of adjacent spins; i.e. it is not possible to construct a network where each spin is surrounded by opposite neighbour spins. It can only be determined that the average correlation of neighbour spins is antiferromagnetic. This type of magnetism is sometimes called speromagnetism.
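The frustrated triangle described under Geometric frustration above is small enough to verify by brute force. The sketch below enumerates all 2^3 = 8 Ising configurations and counts those attaining the minimum energy; the sign convention for the antiferromagnetic coupling is an assumption chosen for illustration and is noted in the comments.

```python
from itertools import product

# Brute-force check of the frustrated antiferromagnetic triangle described
# above. Sign convention (an assumption for illustration): each bond
# contributes J * s_i * s_j to the energy, with J > 0 penalising parallel
# neighbouring spins, i.e. antiferromagnetic coupling.
J = 1.0
bonds = [(0, 1), (1, 2), (2, 0)]

def energy(spins):
    return J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(product((-1, +1), repeat=3))
energies = {s: energy(s) for s in states}
e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]

print("total states:", len(states))          # 8
print("ground states:", len(ground_states))  # 6, as stated in the text
print("minimum energy:", e_min)              # -J: one bond is always frustrated
```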
Physical sciences
Magnetostatics
Physics
148594
https://en.wikipedia.org/wiki/Euler%27s%20constant
Euler's constant
Euler's constant (sometimes called the Euler–Mascheroni constant) is a mathematical constant, usually denoted by the lowercase Greek letter gamma (), defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by : Here, represents the floor function. The numerical value of Euler's constant, to 50 decimal places, is: History The constant first appeared in a 1734 paper by the Swiss mathematician Leonhard Euler, titled De Progressionibus harmonicis observationes (Eneström Index 43), where he described it as "worthy of serious consideration". Euler initially calculated the constant's value to 6 decimal places. In 1781, he calculated it to 16 decimal places. Euler used the notations and for the constant. The Italian mathematician Lorenzo Mascheroni attempted to calculate the constant to 32 decimal places, but made errors in the 20th–22nd and 31st–32nd decimal places; starting from the 20th digit, he calculated ...1811209008239 when the correct value is ...0651209008240. In 1790, he used the notations and for the constant. Other computations were done by Johann von Soldner in 1809, who used the notation . The notation appears nowhere in the writings of either Euler or Mascheroni, and was chosen at a later time, perhaps because of the constant's connection to the gamma function. For example, the German mathematician Carl Anton Bretschneider used the notation in 1835, and Augustus De Morgan used it in a textbook published in parts from 1836 to 1842. Euler's constant was also studied by the Indian mathematician Srinivasa Ramanujan who published one paper on it in 1917. David Hilbert mentioned the irrationality of as an unsolved problem that seems "unapproachable" and, allegedly, the English mathematician Godfrey Hardy offered to give up his Savilian Chair at Oxford to anyone who could prove this. Appearances Euler's constant appears frequently in mathematics, especially in number theory and analysis. Examples include, among others, the following places: (where '*' means that this entry contains an explicit equation): Analysis The Weierstrass product formula for the gamma function and the Barnes G-function. The asymptotic expansion of the gamma function, . Evaluations of the digamma function at rational values. The Laurent series expansion for the Riemann zeta function*, where it is the first of the Stieltjes constants. Values of the derivative of the Riemann zeta function and Dirichlet beta function. In connection to the Laplace and Mellin transform. In the regularization/renormalization of the harmonic series as a finite value. Expressions involving the exponential and logarithmic integral.* A definition of the cosine integral.* In relation to Bessel functions. Asymptotic expansions of modified Struve functions. In relation to other special functions. Number theory An inequality for Euler's totient function. The growth rate of the divisor function. A formulation of the Riemann hypothesis. The third of Mertens' theorems.* The calculation of the Meissel–Mertens constant. Lower bounds to specific prime gaps. An approximation of the average number of divisors of all numbers from 1 to a given n. The Lenstra–Pomerance–Wagstaff conjecture on the frequency of Mersenne primes. An estimation of the efficiency of the euclidean algorithm. Sums involving the Möbius and von Mangolt function. In other fields In some formulations of Zipf's law. The answer to the coupon collector's problem.* The mean of the Gumbel distribution. 
An approximation of the Landau distribution. The information entropy of the Weibull and Lévy distributions, and, implicitly, of the chi-squared distribution for one or two degrees of freedom. An upper bound on Shannon entropy in quantum information theory. In dimensional regularization of Feynman diagrams in quantum field theory. In the BCS equation on the critical temperature in BCS theory of superconductivity.* Fisher–Orr model for genetics of adaptation in evolutionary biology. Properties Irrationality and transcendence The number has not been proved algebraic or transcendental. In fact, it is not even known whether is irrational. The ubiquity of revealed by the large number of equations below and the fact that has been called the third most important mathematical constant after and makes the irrationality of a major open question in mathematics. However, some progress has been made. In 1959 Andrei Shidlovsky proved that at least one of Euler's constant and the Gompertz constant is irrational; Tanguy Rivoal proved in 2012 that at least one of them is transcendental. Kurt Mahler showed in 1968 that the number is transcendental, where and are the usual Bessel functions. It is known that the transcendence degree of the field is at least two. In 2010, M. Ram Murty and N. Saradha showed that at most one of the Euler-Lehmer constants, i. e. the numbers of the form is algebraic, if and ; this family includes the special case . Using the same approach, in 2013, M. Ram Murty and A. Zaytseva showed that the generalized Euler constants have the same property, where the generalized Euler constant are defined as where is a fixed list of prime numbers, if at least one of the primes in is a prime factor of , and otherwise. In particular, . Using a continued fraction analysis, Papanikolaou showed in 1997 that if is rational, its denominator must be greater than 10244663. If is a rational number, then its denominator must be greater than 1015000. Euler's constant is conjectured not to be an algebraic period, but the values of its first 109 decimal digits seem to indicate that it could be a normal number. Continued fraction The simple continued fraction expansion of Euler's constant is given by: which has no apparent pattern. It is known to have at least 16,695,000,000 terms, and it has infinitely many terms if and only if is irrational. Numerical evidence suggests that both Euler's constant as well as the constant are among the numbers for which the geometric mean of their simple continued fraction terms converges to Khinchin's constant. Similarly, when are the convergents of their respective continued fractions, the limit appears to converge to Lévy's constant in both cases. However neither of these limits has been proven. There also exists a generalized continued fraction for Euler's constant. A good simple approximation of is given by the reciprocal of the square root of 3 or about 0.57735: with the difference being about 1 in 7,429. Formulas and identities Relation to gamma function is related to the digamma function , and hence the derivative of the gamma function , when both functions are evaluated at 1. 
Thus: This is equal to the limits: Further limit results are: A limit related to the beta function (expressed in terms of gamma functions) is Relation to the zeta function can also be expressed as an infinite sum whose terms involve the Riemann zeta function evaluated at positive integers: The constant can also be expressed in terms of the sum of the reciprocals of non-trivial zeros of the zeta function: Other series related to the zeta function include: The error term in the last equation is a rapidly decreasing function of . As a result, the formula is well-suited for efficient computation of the constant to high precision. Other interesting limits equaling Euler's constant are the antisymmetric limit: and the following formula, established in 1898 by de la Vallée-Poussin: where are ceiling brackets. This formula indicates that when taking any positive integer and dividing it by each positive integer less than , the average fraction by which the quotient falls short of the next integer tends to (rather than 0.5) as tends to infinity. Closely related to this is the rational zeta series expression. By taking separately the first few terms of the series above, one obtains an estimate for the classical series limit: where is the Hurwitz zeta function. The sum in this equation involves the harmonic numbers, . Expanding some of the terms in the Hurwitz zeta function gives: where can also be expressed as follows where is the Glaisher–Kinkelin constant: can also be expressed as follows, which can be proven by expressing the zeta function as a Laurent series: Relation to triangular numbers Numerous formulations have been derived that express in terms of sums and logarithms of triangular numbers. One of the earliest of these is a formula for the harmonic number attributed to Srinivasa Ramanujan where is related to in a series that considers the powers of (an earlier, less-generalizable proof by Ernesto Cesàro gives the first two terms of the series, with an error term): From Stirling's approximation follows a similar series: The series of inverse triangular numbers also features in the study of the Basel problem posed by Pietro Mengoli. Mengoli proved that , a result Jacob Bernoulli later used to estimate the value of , placing it between and . This identity appears in a formula used by Bernhard Riemann to compute roots of the zeta function, where is expressed in terms of the sum of roots plus the difference between Boya's expansion and the series of exact unit fractions : Integrals equals the value of a number of definite integrals: where is the fractional harmonic number, and is the fractional part of . The third formula in the integral list can be proved in the following way: The integral on the second line of the equation stands for the Debye function value of , which is . Definite integrals in which appears include: We also have Catalan's 1875 integral One can express using a special case of Hadjicostas's formula as a double integral with equivalent series: An interesting comparison by Sondow is the double integral and alternating series It shows that may be thought of as an "alternating Euler constant". The two constants are also related by the pair of series where and are the number of 1s and 0s, respectively, in the base 2 expansion of . Series expansions In general, for any . However, the rate of convergence of this expansion depends significantly on . In particular, exhibits much more rapid convergence than the conventional expansion . 
This is because while Even so, there exist other series expansions which converge more rapidly than this; some of these are discussed below. Euler showed that the following infinite series approaches : The series for is equivalent to a series Nielsen found in 1897: In 1910, Vacca found the closely related series where is the logarithm to base 2 and is the floor function. This can be generalized to: where: In 1926 Vacca found a second series: From the Malmsten–Kummer expansion for the logarithm of the gamma function we get: Ramanujan, in his lost notebook gave a series that approaches : An important expansion for Euler's constant is due to Fontana and Mascheroni where are Gregory coefficients. This series is the special case of the expansions convergent for A similar series with the Cauchy numbers of the second kind is Blagouchine (2018) found an interesting generalisation of the Fontana–Mascheroni series where are the Bernoulli polynomials of the second kind, which are defined by the generating function For any rational this series contains rational terms only. For example, at , it becomes Other series with the same polynomials include these examples: and where is the gamma function. A series related to the Akiyama–Tanigawa algorithm is where are the Gregory coefficients of the second order. As a series of prime numbers: Asymptotic expansions equals the following asymptotic formulas (where is the th harmonic number): (Euler) (Negoi) (Cesàro) The third formula is also called the Ramanujan expansion. Alabdulmohsin derived closed-form expressions for the sums of errors of these approximations. He showed that (Theorem A.1): Exponential The constant is important in number theory. Its numerical value is: equals the following limit, where is the th prime number: This restates the third of Mertens' theorems. We further have the following product involving the three constants , and : Other infinite products relating to include: These products result from the Barnes -function. In addition, where the th factor is the th root of This infinite product, first discovered by Ser in 1926, was rediscovered by Sondow using hypergeometric functions. It also holds that Published digits Generalizations Stieltjes constants Euler's generalized constants are given by for , with as the special case . Extending for gives: with again the limit: This can be further generalized to for some arbitrary decreasing function . Setting gives rise to the Stieltjes constants , that occur in the Laurent series expansion of the Riemann zeta function: with Euler-Lehmer constants Euler–Lehmer constants are given by summation of inverses of numbers in a common modulo class: The basic properties are and if the greatest common divisor then Masser-Gramain constant A two-dimensional generalization of Euler's constant is the Masser-Gramain constant. It is defined as the following limiting difference: where is the smallest radius of a disk in the complex plane containing at least Gaussian integers. The following bounds have been established: .
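The defining limit converges slowly, while the asymptotic expansion quoted above (Euler's formula for the harmonic numbers) converges much faster. The following Python sketch is an illustration added here rather than part of the source article: the reference constant and function names are my own, and the correction terms are the standard ones from the expansion H_n ≈ ln n + γ + 1/(2n) − 1/(12n²) + 1/(120n⁴).

```python
# Illustrative sketch (not from the article): estimating Euler's constant gamma
# from its defining limit, and from the first correction terms of Euler's
# asymptotic formula H_n ~ ln n + gamma + 1/(2n) - 1/(12 n^2) + 1/(120 n^4).
import math

GAMMA_REF = 0.57721566490153286060651209008240  # reference value, ~30 digits

def harmonic(n: int) -> float:
    """Direct harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def gamma_plain(n: int) -> float:
    """Naive estimate from the defining limit H_n - ln n (error ~ 1/(2n))."""
    return harmonic(n) - math.log(n)

def gamma_asymptotic(n: int) -> float:
    """Accelerated estimate using the first correction terms of the expansion."""
    return harmonic(n) - math.log(n) - 1.0 / (2 * n) + 1.0 / (12 * n**2) - 1.0 / (120 * n**4)

if __name__ == "__main__":
    for n in (10, 100, 1000):
        plain = gamma_plain(n)
        accel = gamma_asymptotic(n)
        print(f"n={n:5d}  plain error={abs(plain - GAMMA_REF):.2e}  "
              f"accelerated error={abs(accel - GAMMA_REF):.2e}")
```

At n = 1000 the naive estimate is still only good to about three decimal places, while the corrected estimate is already accurate to roughly double precision, which is why accelerated formulas like those above matter for high-precision computation of the constant.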
Mathematics
Basics
null
148819
https://en.wikipedia.org/wiki/Foucault%27s%20measurements%20of%20the%20speed%20of%20light
Foucault's measurements of the speed of light
In 1850, Léon Foucault used a rotating mirror to perform a differential measurement of the speed of light in water versus its speed in air. In 1862, he used a similar apparatus to measure the speed of light in the air. Background In 1834, Charles Wheatstone developed a method of using a rapidly rotating mirror to study transient phenomena, and applied this method to measure the velocity of electricity in a wire and the duration of an electric spark. He communicated to François Arago the idea that his method could be adapted to a study of the speed of light. The early-to-mid 1800s were a period of intense debate on the particle-versus-wave nature of light. Although the observation of the Arago spot in 1819 may have seemed to settle the matter definitively in favor of Fresnel's wave theory of light, various concerns continued to appear to be addressed more satisfactorily by Newton's corpuscular theory. Arago expanded upon Wheatstone's concept in an 1838 publication, suggesting that a differential comparison of the speed of light in the air versus water would serve to distinguish between the particle and wave theories of light. Foucault had worked with Hippolyte Fizeau on projects such as using the Daguerreotype process to take images of the Sun between 1843 and 1845 and characterizing absorption bands in the infrared spectrum of sunlight in 1847. In 1845, Arago suggested to Fizeau and Foucault that they attempt to measure the speed of light. Sometime in 1849, however, it appears that the two had a falling out, and they parted ways. In 1848−49, Fizeau used, not a rotating mirror, but a toothed wheel apparatus to perform an absolute measurement of the speed of light in air. In 1850, Fizeau and Foucault both used rotating mirror devices to perform relative measures of the speed of light in the air versus water. Foucault employed Paul-Gustave Froment to build a rotary-mirror apparatus in which he split a beam of light into two beams, passing one through the water while the other traveled through air. On 27 April 1850, he confirmed that the speed of light was greater as it traveled through the air, seemingly validating the wave theory of light. With Arago's blessing, Fizeau employed L.F.C. Breguet to construct his apparatus. They achieved their result on 17 June 1850, seven weeks after Foucault. To achieve the high rotational speeds necessary, Foucault abandoned clockwork and used a carefully balanced steam-powered apparatus designed by Charles Cagniard de la Tour. Foucault originally used tin-mercury mirrors, however at speeds exceeding 200 rps, the reflecting layer would break off, so he switched to using new silver mirrors. Foucault's determination of the speed of light 1850 experiment In 1850, Léon Foucault measured the relative speeds of light in air and water. The experiment was proposed by Arago, who wrote, The apparatus (Figure 1) involves light passing through slit S, reflecting off a mirror R, and forming an image of the slit on the distant stationary mirror M. The light then passes back to mirror R and is reflected back to the original slit. If mirror R is stationary, then the slit image will reform at S. However, if the mirror R is rotating, it will have moved slightly in the time it takes for the light to bounce from R to M and back, and the light will be deflected away from the original source by a small angle, forming an image to the side of the slit. Foucault measured the differential speed of light through air versus water by using two distant mirrors (Figure 2). 
He placed a 3-meter tube of water before one of them. The light passing through the slower medium has its image more displaced. By partially masking the air-path mirror, Foucault was able to distinguish the two images super-imposed on top of one another. He found the speed of light was slower in water than in air. This experiment did not determine the absolute speeds of light in water or air, only their relative speeds. The rotational speed of the mirror could not be sufficiently accurately measured to determine the absolute speeds of light in water or air. With a rotational speed of 600-800 revolutions per second, the displacement was 0.2 to 0.3 mm. Guided by similar motivations as his former partner, Foucault in 1850 was more interested in settling the particle-versus-wave debate than in determining an accurate absolute value for the speed of light. His experimental results, announced shortly before Fizeau announced his results on the same topic, were viewed as "driving the last nail in the coffin" of Newton's corpuscle theory of light when it showed that light travels more slowly through water than through air. Newton had explained refraction as a pull of the medium upon the light, implying an increased speed of light in the medium. The corpuscular theory of light went into abeyance, completely overshadowed by the wave theory. This state of affairs lasted until 1905, when Einstein presented heuristic arguments that under various circumstances, such as when considering the photoelectric effect, light exhibits behaviors indicative of a particle nature. For his efforts, Foucault was made chevalier of the Légion d'honneur, and in 1853 was awarded a doctorate from the Sorbonne. 1862 experiment In Foucault's 1862 experiment, he desired to obtain an accurate absolute value for the speed of light, since his concern was to deduce an improved value for the astronomical unit. At the time, Foucault was working at the Paris Observatory under Urbain le Verrier. It was le Verrier's belief, based on extensive celestial mechanics calculations, that the consensus value for the speed of light was perhaps 4% too high. Technical limitations prevented Foucault from separating mirrors R and M by more than about 20 meters. Despite this limited path length, Foucault was able to measure the displacement of the slit image (less than 1 mm) with considerable accuracy. In addition, unlike the case with Fizeau's experiment (which required gauging the rotation rate of an adjustable-speed toothed wheel), he could spin the mirror at a constant, chronometrically determined speed. Foucault's measurement confirmed le Verrier's estimate. His 1862 figure for the speed of light (298000 km/s) was within 0.6% of the modern value. As seen in Figure 3, the displaced image of the source (slit) is at an angle 2θ from the source direction. Michelson's refinement of the Foucault experiment It was seen in Figure 1 that Foucault placed the rotating mirror R as close as possible to lens L so as to maximize the distance between R and the slit S. As R rotates, an enlarged image of slit S sweeps across the face of the distant mirror M. The greater the distance RM, the more quickly that the image sweeps across mirror M and the less light is reflected back. Foucault could not increase the RM distance in his folded optical arrangement beyond about 20 meters without the image of the slit becoming too dim to accurately measure. Between 1877 and 1931, Albert A. Michelson made multiple measurements of the speed of light. 
His 1877–79 measurements were performed under the auspices of Simon Newcomb, who was also working on measuring the speed of light. Michelson's setup incorporated several refinements on Foucault's original arrangement. As seen in Figure 4, Michelson placed the rotating mirror R near the principal focus of lens L (i.e. the focal point given incident parallel rays of light). If the rotating mirror R were exactly at the principal focus, the moving image of the slit would remain upon the distant plane mirror M (equal in diameter to lens L) as long as the axis of the pencil of light remained on the lens, this being true regardless of the RM distance. Michelson was thus able to increase the RM distance to nearly 2000 feet. To achieve a reasonable value for the RS distance, Michelson used an extremely long focal length lens (150 feet) and compromised on the design by placing R about 15 feet closer to L than the principal focus. This allowed an RS distance of between 28.5 to 33.3 feet. He used carefully calibrated tuning forks to monitor the rotation rate of the air-turbine-powered mirror R, and he would typically measure displacements of the slit image on the order of 115 mm. His 1879 figure for the speed of light, 299944±51 km/s, was within about 0.05% of the modern value. His 1926 repeat of the experiment incorporated still further refinements such as the use of polygonal prism-shaped rotating mirrors (enabling a brighter image) having from eight through sixteen facets and a 22 mile baseline surveyed to fractional parts-per-million accuracy. His figure of 299,796±4 km/s was only about 4 km/s higher than the current accepted value. Michelson's final 1931 attempt to measure the speed of light in vacuum was interrupted by his death. Although his experiment was completed posthumously by F. G. Pease and F. Pearson, various factors militated against a measurement of highest accuracy, including an earthquake which disturbed the baseline measurement.
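The rotating-mirror geometry described above reduces to a short calculation: the round trip from R to M and back takes t = 2D/c, during which the mirror turns through ωt, so the returning beam is deflected by 2ωt and the slit image shifts by δ = 2ωt·A, where A is the optical distance from the rotating mirror back to the plane where the displacement is read off. Inverting gives c = 4ADω/δ. The Python sketch below uses hypothetical numbers; the geometry and spin rate are illustrative, not Foucault's or Michelson's published figures.

```python
# Illustrative sketch of the rotating-mirror geometry (hypothetical values,
# not the historical data).  Round-trip time t = 2D/c, mirror rotation omega*t,
# beam deflection 2*omega*t, image displacement delta = 2*omega*t*A,
# hence c = 4*A*D*omega/delta.
import math

def image_displacement(c: float, D: float, A: float, rev_per_s: float) -> float:
    """Predicted linear displacement of the slit image, in metres."""
    omega = 2.0 * math.pi * rev_per_s          # angular speed of the mirror, rad/s
    return 4.0 * A * D * omega / c

def speed_of_light(delta: float, D: float, A: float, rev_per_s: float) -> float:
    """Speed of light recovered from a measured displacement, in metres/second."""
    omega = 2.0 * math.pi * rev_per_s
    return 4.0 * A * D * omega / delta

if __name__ == "__main__":
    D, A, f = 20.0, 5.0, 500.0                 # hypothetical distances (m) and spin rate (rev/s)
    delta = image_displacement(299_792_458.0, D, A, f)
    print(f"predicted displacement: {delta * 1e3:.3f} mm")
    print(f"recovered c: {speed_of_light(delta, D, A, f) / 1e3:.0f} km/s")
```

Even with a 20 m arm and 500 revolutions per second the shift is only a few millimetres, which illustrates why the displacements Foucault actually had to read were below a millimetre and why Michelson went to such lengths to extend the R–M distance.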
Physical sciences
Physical constants
Physics
148951
https://en.wikipedia.org/wiki/Speech%20disorder
Speech disorder
Speech disorders, impairments, or impediments, are a type of communication disorder in which normal speech is disrupted. This can mean fluency disorders like stuttering, cluttering or lisps. Someone who is unable to speak due to a speech disorder is considered mute. Speech skills are vital to social relationships and learning, and delays or disorders that relate to developing these skills can impact an individual's ability to function. For many children and adolescents, this can present as issues with academics. Speech disorders affect roughly 11.5% of the US population, and 5% of the primary school population. Speech is a complex process that requires precise timing, nerve and muscle control, and as a result is susceptible to impairments. A person who has a stroke, an accident or a birth defect may have speech and language problems. Classification There are three different levels of classification when determining the magnitude and type of a speech disorder and the proper treatment or therapy: Sounds the patient can produce Phonemic – can be produced easily; used meaningfully and constructively Phonetic – produced only upon request; not used consistently, meaningfully, or constructively; not used in connected speech Stimulable sounds Easily stimulable Stimulable after demonstration and probing (i.e. with a tongue depressor) Cannot produce the sound Cannot be produced voluntarily No production ever observed Types of disorder Apraxia of speech may result from stroke or progressive illness, and involves inconsistent production of speech sounds and rearranging of sounds in a word ("potato" may become "topato" and next "totapo"). Production of words becomes more difficult with effort, but common phrases may sometimes be spoken spontaneously without effort. Cluttering is a speech and fluency disorder characterized primarily by a rapid rate of speech, which makes speech difficult to understand. Developmental verbal dyspraxia, also known as childhood apraxia of speech. Dysarthria is a weakness or paralysis of speech muscles caused by damage to the nerves or brain. Dysarthria is often caused by strokes, Parkinson's disease, ALS, head or neck injuries, surgical accident, or cerebral palsy. Aphasia Dysprosody is an extremely rare neurological speech disorder. It is characterized by alterations in intensity, in the timing of utterance segments, and in rhythm, cadence, and intonation of words. The changes to the duration, the fundamental frequency, and the intensity of tonic and atonic syllables of the sentences spoken deprive an individual's particular speech of its characteristics. The cause of dysprosody is usually associated with neurological pathologies such as brain vascular accidents, cranioencephalic traumatisms, and brain tumors. Muteness is the complete inability to speak. Speech sound disorders involve difficulty in producing specific speech sounds (most often certain consonants, such as /s/ or /r/), and are subdivided into articulation disorders (also called phonetic disorders) and phonemic disorders. Articulation disorders are characterized by difficulty learning to produce sounds physically. Phonemic disorders are characterized by difficulty in learning the sound distinctions of a language, so that one sound may be used in place of many. However, it is not uncommon for a single person to have a mixed speech sound disorder with both phonemic and phonetic components. Stuttering (also known as dysphemia) affects approximately 1% of the adult population. 
Voice disorders are impairments, often physical, that involve the function of the larynx or vocal resonance. Causes In some cases the cause is unknown. However, there are various known causes of speech impairments, such as hearing loss, neurological disorders, brain injury, an increase in mental strain, constant bullying, intellectual disability, substance use disorder, physical impairments such as cleft lip and palate, and vocal abuse or misuse. After strokes, there is known to be a higher incidence of apraxia of speech, which is a disorder affecting neurological pathways involved with speech. Poor motor function is also suggested to be highly associated with speech disorders, especially in children. Hereditary causes have also been suggested, as children of individuals with speech disorders often develop them as well. 20–40% of individuals with a family history of a specific language impairment are likely to be diagnosed, whereas only 4% of the population overall is likely to be diagnosed. There are also language disorders that are known to be genetic, such as hereditary ataxia, which can cause slow and unclear speech. Treatment Many of these types of disorders can be treated by speech therapy, but others require medical attention by a doctor in phoniatrics. Other treatments include correction of organic conditions and psychotherapy. In the United States, school-age children with a speech disorder are often placed in special education programs. Children who struggle to learn to talk often experience persistent communication difficulties in addition to academic struggles. More than 700,000 of the students served in the public schools' special education programs in the 2000–2001 school year were categorized as having a speech or language impairment. This estimate does not include children who have speech and language impairments secondary to other conditions such as deafness. Many school districts provide the students with speech therapy during school hours, although extended day and summer services may be appropriate under certain circumstances. Patients will be treated in teams, depending on the type of disorder they have. A team can include speech–language pathologists, specialists, family doctors, teachers, and family members. Social effects Having a speech disorder can have negative social effects, especially among young children. Those with a speech disorder can be targets of bullying because of their disorder. This bullying can result in decreased self-esteem. Religion and culture also play a large role in the social effects of speech disorders. For example, in many African countries, such as Kenya, cleft palates are largely considered to be caused by a curse from God. This can cause people with cleft palates to not receive care in early childhood, and end in rejection from society. For those with speech disorders, listeners' reactions are often negative, which may in turn harm self-esteem. It has also been shown that adults tend to view individuals who stutter more negatively than those who do not. Language disorders Language disorders are usually considered distinct from speech disorders, although the terms are often used synonymously. Speech disorders refer to problems in producing the sounds of speech or with the quality of voice, whereas language disorders are usually an impairment of either understanding words or being able to use words and do not have to do with speech production.
Biology and health sciences
Disabilities
Health
148980
https://en.wikipedia.org/wiki/Chlorpromazine
Chlorpromazine
Chlorpromazine (CPZ), marketed under the brand names Thorazine and Largactil among others, is an antipsychotic medication. It is primarily used to treat psychotic disorders such as schizophrenia. Other uses include the treatment of bipolar disorder, severe behavioral problems in children including those with attention deficit hyperactivity disorder, nausea and vomiting, anxiety before surgery, and hiccups that do not improve following other measures. It can be given orally (by mouth), by intramuscular injection (injection into a muscle), or intravenously (injection into a vein). Chlorpromazine is in the typical antipsychotic class, and, chemically, is one of the phenothiazines. Its mechanism of action is not entirely clear but is believed to be related to its ability as a dopamine antagonist. It has antiserotonergic and antihistaminergic properties. Common side effects include movement problems, sleepiness, dry mouth, low blood pressure upon standing, and increased weight. Serious side effects may include the potentially permanent movement disorder tardive dyskinesia, neuroleptic malignant syndrome, severe lowering of the seizure threshold, and low white blood cell levels. In older people with psychosis as a result of dementia, it may increase the risk of death. It is unclear if it is safe for use in pregnancy. Chlorpromazine was developed in 1950 and was the first antipsychotic on the market. It is on the World Health Organization's List of Essential Medicines. Its introduction has been labeled as one of the great advances in the history of psychiatry. It is available as a generic medication. Medical uses Chlorpromazine is used in the treatment of both acute and chronic psychoses, including schizophrenia and the manic phase of bipolar disorder, as well as amphetamine-induced psychosis. Controversially, some psychiatric patients may be given Chlorpromazine by force, even if they do not suffer any of the typical conditions the drug is prescribed for. In a 2013 comparison of fifteen antipsychotics in schizophrenia, chlorpromazine demonstrated mild-standard effectiveness. It was 13% more effective than lurasidone and iloperidone, approximately as effective as ziprasidone and asenapine, and 12–16% less effective than haloperidol, quetiapine, and aripiprazole. A 2014 systematic review carried out by Cochrane included 55 trials that compared the effectiveness of chlorpromazine versus placebo for the treatment of schizophrenia. Compared to the placebo group, patients under chlorpromazine experienced less relapse during 6 months to 2 years follow-up. No difference was found between the two groups beyond two years of follow-up. Patients under chlorpromazine showed a global improvement in symptoms and functioning. The systematic review also highlighted the fact that the side effects of the drug were 'severe and debilitating', including sedation, considerable weight gain, a lowering of blood pressure, and an increased risk of acute movement disorders. They also noted that the quality of evidence of the 55 included trials was very low and that 315 trials could not be included in the systematic review due to their poor quality. They called for further research on the subject, as chlorpromazine is a cheap benchmark drug and one of the most used treatments for schizophrenia worldwide. Chlorpromazine has also been used in porphyria and as part of tetanus treatment. It is still recommended for short-term management of severe anxiety and psychotic aggression. 
Resistant and severe hiccups, severe nausea/emesis, and preanesthetic conditioning are other uses. Symptoms of delirium in hospitalized AIDS patients have been effectively treated with low doses of chlorpromazine. Other uses Chlorpromazine is occasionally used off-label for treatment of severe migraine. It is often, particularly as palliation, used in small doses to reduce nausea by opioid-treated cancer patients and to intensify and prolong the analgesia of the opioids as well. Efficacy has been shown in treatment of symptomatic hypertensive emergency. In Germany, chlorpromazine still carries label indications for insomnia, severe pruritus, and preanesthesia. Chlorpromazine has been used as a hallucinogen antidote or "trip killer" to block the effects of serotonergic psychedelics like psilocybin, lysergic acid diethylamide (LSD), and mescaline. However, the results of clinical studies of chlorpromazine for this use have been inconsistent, with reduced effects, no change in effects, and even enhanced effects all reported. Chlorpromazine and other phenothiazines have been demonstrated to possess antimicrobial properties, but are not currently used for this purpose except for a very small number of cases. For example, Miki et al. 1992 trialed daily doses of chlorpromazine, reversing chloroquine resistance in Plasmodium chabaudi isolates in mice. Weeks et al., 2018 find that it also possesses a wide spectrum anthelmintic effect. Chlorpromazine is an antagonist of several insect monoamine receptors. It is the most active antagonist known of silk moth (Bombyx mori) octopamine receptor α, intermediate for Bm tyramine receptors 1 & 2, weak for Drosophila octopamine receptor β, high for Drosophila tyramine receptor 1, intermediate for migratory locust (Locusta migratoria) tyramine receptor 1, and high for American cockroach (Periplaneta americana) octopamine receptor α and tyramine receptor 1. Adverse effects There appears to be a dose-dependent risk for seizures with chlorpromazine treatment. Tardive dyskinesia (involuntary, repetitive body movements) and akathisia (a feeling of inner restlessness and inability to stay still) are less commonly seen with chlorpromazine than they are with high potency typical antipsychotics such as haloperidol or trifluoperazine, and some evidence suggests that, with conservative dosing, the incidence of such effects for chlorpromazine may be comparable to that of newer agents such as risperidone or olanzapine. Chlorpromazine stably and for life alters natural processes in the biological systems of the mitochondria of the nervous system, and inhibits the efficiency of the electron transport chain. Chlorpromazine may deposit in ocular tissues when taken in high dosages for long periods of time. Contraindications Absolute contraindications include: Circulatory depression CNS depression Coma Drug intoxication Bone marrow suppression Phaeochromocytoma Hepatic failure Active liver disease Previous hypersensitivity (including jaundice, agranulocytosis, etc.) to phenothiazines, especially chlorpromazine, or any of the excipients in the formulation being used. Relative contraindications include: Epilepsy Parkinson's disease Myasthenia gravis Hypoparathyroidism Prostatic hypertrophy Very rarely, elongation of the QT interval, due to hERG blockade, may occur, increasing the risk of potentially fatal arrhythmias. 
Interactions Consuming food prior to taking chlorpromazine orally limits its absorption; likewise, cotreatment with benztropine can also reduce chlorpromazine absorption. Alcohol can also reduce chlorpromazine absorption. Antacids slow chlorpromazine absorption. Lithium and chronic treatment with barbiturates can increase chlorpromazine clearance significantly. Tricyclic antidepressants (TCAs) can decrease chlorpromazine clearance and hence increase chlorpromazine exposure. Cotreatment with CYP1A2 inhibitors like ciprofloxacin, fluvoxamine or vemurafenib can reduce chlorpromazine clearance and hence increase exposure and potentially also adverse effects. Chlorpromazine can also potentiate the CNS depressant effects of drugs like barbiturates, benzodiazepines, opioids, lithium and anesthetics and hence increase the potential for adverse effects such as respiratory depression and sedation. Chlorpromazine is also a moderate inhibitor of CYP2D6 and a substrate for CYP2D6 and hence can inhibit its own metabolism. It can also inhibit the clearance of CYP2D6 substrates such as dextromethorphan, potentiating their effects. Other drugs like codeine and tamoxifen, which require CYP2D6-mediated activation into their respective active metabolites, may have their therapeutic effects attenuated. Likewise, CYP2D6 inhibitors such as paroxetine or fluoxetine can reduce chlorpromazine clearance, increasing serum levels of chlorpromazine and potentially its adverse effects. Chlorpromazine also reduces phenytoin levels and increases valproic acid levels. It also reduces propranolol clearance and antagonizes the therapeutic effects of antidiabetic agents, levodopa (a Parkinson's medication; the antagonism is likely because chlorpromazine blocks the D2 receptor, one of the receptors activated by dopamine, a metabolite of levodopa), amphetamines and anticoagulants. It may also interact with anticholinergic drugs such as orphenadrine to produce hypoglycaemia (low blood sugar). Chlorpromazine may also interact with epinephrine (adrenaline) to produce a paradoxical fall in blood pressure. Monoamine oxidase inhibitors (MAOIs) and thiazide diuretics may also accentuate the orthostatic hypotension experienced by those receiving chlorpromazine treatment. Quinidine may interact with chlorpromazine to increase myocardial depression. Likewise, it may also antagonize the effects of clonidine and guanethidine. It may also reduce the seizure threshold, so a corresponding titration of anticonvulsant treatments should be considered. Prochlorperazine and desferrioxamine may also interact with chlorpromazine to produce transient metabolic encephalopathy. Other drugs that prolong the QT interval, such as quinidine, verapamil, amiodarone, sotalol and methadone, may also interact with chlorpromazine to produce additive QT interval prolongation. Discontinuation The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly, there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time. There is tentative evidence that discontinuation of antipsychotics can result in psychosis. It may also result in recurrence of the condition that is being treated. 
Rarely, tardive dyskinesia can occur when the medication is stopped. Pharmacology Chlorpromazine is classified as a low-potency typical antipsychotic. Low-potency antipsychotics have more anticholinergic side effects, such as dry mouth, sedation, and constipation, and lower rates of extrapyramidal side effects, while high-potency antipsychotics (such as haloperidol) have the reverse profile. Pharmacodynamics Chlorpromazine is a very effective antagonist of D2 dopamine receptors and similar receptors, such as D3 and D5. Unlike most other drugs of this genre, it also has a high affinity for D1 receptors. Blocking these receptors causes diminished neurotransmitter binding in the forebrain, resulting in many different effects. Dopamine, unable to bind with a receptor, causes a feedback loop that causes dopaminergic neurons to release more dopamine. Therefore, upon first taking the drug, patients will experience an increase in dopaminergic neural activity. Eventually, dopamine production in the neurons will drop substantially and dopamine will be removed from the synaptic cleft. At this point, neural activity decreases greatly; the continual blockade of receptors only compounds this effect. Chlorpromazine acts as an antagonist (blocking agent) on different postsynaptic and presynaptic receptors: Dopamine receptors (subtypes D1, D2, D3 and D4), which account for its different antipsychotic properties on productive and unproductive symptoms, in the mesolimbic dopamine system accounts for the antipsychotic effect whereas the blockade in the nigrostriatal system produces the extrapyramidal effects Serotonin receptors (5-HT2, 5-HT6 and 5-HT7), with anxiolytic, antidepressant and antiaggressive properties as well as an attenuation of extrapyramidal side effects, but also leading to weight gain and ejaculation difficulties. Histamine receptors (H1 receptors, accounting for sedation, antiemetic effect, vertigo, and weight gain) α1- and α2-adrenergic receptors (accounting for sympatholytic properties, lowering of blood pressure, reflex tachycardia, vertigo, sedation, hypersalivation and incontinence as well as sexual dysfunction, but may also attenuate pseudoparkinsonism – controversial. Also associated with weight gain as a result of blockage of the adrenergic alpha 1 receptor as well as with intraoperative floppy iris syndrome due to its effect on the iris dilator muscle. M1 and M2 muscarinic acetylcholine receptors (causing anticholinergic symptoms such as dry mouth, blurred vision, constipation, difficulty or inability to urinate, sinus tachycardia, electrocardiographic changes and loss of memory, but the anticholinergic action may attenuate extrapyramidal side effects). The presumed effectiveness of the antipsychotic drugs relied on their ability to block dopamine receptors. This assumption arose from the dopamine hypothesis that maintains that both schizophrenia and bipolar disorder are a result of excessive dopamine activity. Furthermore, psychomotor stimulants like cocaine that increase dopamine levels can cause psychotic symptoms if taken in excess. Chlorpromazine and other typical antipsychotics are primarily blockers of D2 receptors. An almost perfect correlation exists between the therapeutic dose of a typical antipsychotic and the drug's affinity for the D2 receptor. Therefore, a larger dose is required if the drug's affinity for the D2 receptor is relatively weak. A correlation exists between average clinical potency and affinity of the antipsychotics for dopamine receptors. 
Chlorpromazine tends to have a greater effect at serotonin receptors than at D2 receptors, which is notably the opposite effect of the other typical antipsychotics. Therefore, chlorpromazine's effects on dopamine and serotonin receptors are more similar to the atypical antipsychotics than to the typical antipsychotics. Chlorpromazine and other antipsychotics with sedative properties such as promazine and thioridazine are among the most potent agents at α-adrenergic receptors. Furthermore, they are also among the most potent antipsychotics at histamine H1 receptors. This finding is in agreement with the pharmaceutical development of chlorpromazine and other antipsychotics as anti-histamine agents. Furthermore, the brain has a higher density of histamine H1 receptors than any body organ examined which may account for why chlorpromazine and other phenothiazine antipsychotics are as potent at these sites as the most potent classical antihistamines. In addition to influencing the neurotransmitters dopamine, serotonin, epinephrine, norepinephrine, and acetylcholine it has been reported that antipsychotic drugs could achieve glutamatergic effects. This mechanism involves the direct effects of antipsychotic drugs on glutamate receptors. By using the technique of functional neurochemical assay chlorpromazine and phenothiazine derivatives have been shown to have inhibitory effects on NMDA receptors that appeared to be mediated by action at the Zn site. It was found that there is an increase of NMDA activity at low concentrations and suppression at high concentrations of the drug. No significant difference in glycine activity from the effects of chlorpromazine was reported. Further work will be necessary to determine if the influence in NMDA receptors by antipsychotic drugs contributes to their effectiveness. Chlorpromazine does also act as a FIASMA (functional inhibitor of acid sphingomyelinase). Peripheral effects Chlorpromazine is an antagonist to H1 receptors (provoking antiallergic effects), H2 receptors (reduction of forming of gastric juice), M1 and M2 receptors (dry mouth, reduction in forming of gastric juice) and some 5-HT receptors (different anti-allergic/gastrointestinal actions). Because it acts on so many receptors, chlorpromazine is often referred to as a "dirty drug". Pharmacokinetics History In 1933, the French pharmaceutical company Laboratoires Rhône-Poulenc began to search for new antihistamines. In 1947, it synthesized promethazine, a phenothiazine derivative, which was found to have more pronounced sedative and antihistaminic effects than earlier drugs. A year later, the French surgeon Pierre Huguenard used promethazine together with pethidine as part of a cocktail to induce relaxation and indifference in surgical patients. Another surgeon, Henri Laborit, believed the compound stabilized the central nervous system by causing "artificial hibernation" and described this state as "sedation without narcosis". He suggested to Rhône-Poulenc that they develop a compound with better-stabilizing properties. In December 1950, the chemist Paul Charpentier produced a series of compounds that included RP4560 or chlorpromazine. Chlorpromazine was distributed for testing to physicians between April and August 1951. 
Laborit trialled the medicine at the Val-de-Grâce military hospital in Paris, using it as an anaesthetic booster in intravenous doses of 50 to 100 mg in surgery patients and confirming it as the best drug to date in calming and reducing shock, with patients reporting improved well being afterward. He also noted its hypothermic effect and suggested it may induce artificial hibernation. Laborit thought this would allow the body to better tolerate major surgery by reducing shock, a novel idea at the time. Known colloquially as "Laborit's drug", chlorpromazine was released onto the market in 1953 by Rhône-Poulenc and given the trade name Largactil, derived from large "broad" and acti* "activity". Following on, Laborit considered whether chlorpromazine may have a role in managing patients with severe burns, Raynaud's phenomenon, or psychiatric disorders. At the Villejuif Mental Hospital in November 1951, he and Montassut administered an intravenous dose to psychiatrist Cornelia Quarti, who was acting as a volunteer. Quarti noted the indifference but fainted upon getting up to go to the toilet, and so further testing was discontinued. (Orthostatic hypotension is a known side effect of chlorpromazine). Despite this, Laborit continued to push for testing in psychiatric patients during early 1952. Psychiatrists were reluctant initially, but on 19 January 1952, it was administered (alongside pethidine, pentothal and ECT) to Jacques Lh., a 24-year-old manic patient, who responded dramatically; he was discharged after three weeks, having received 855 mg of the drug in total. Pierre Deniker had heard about Laborit's work from his brother-in-law, who was a surgeon, and ordered chlorpromazine for a clinical trial at the Sainte-Anne Hospital Center in Paris where he was chief of the men's service. Together with the hospital director Jean Delay, they published their first clinical trial in 1952, in which they treated thirty-eight psychotic patients with daily injections of chlorpromazine without the use of other sedating agents. The response was dramatic; treatment with chlorpromazine went beyond simple sedation, with patients showing improvements in thinking and emotional behaviour. They also found that doses higher than those used by Laborit were required, giving patients 75–100 mg daily. Deniker then visited America, where the publication of their work alerted the American psychiatric community that the new treatment might represent a real breakthrough. Heinz Lehmann of the Verdun Protestant Hospital in Montreal trialled it in seventy patients and also noted its striking effects, with patients' symptoms resolving after many years of unrelenting psychosis. By 1954, chlorpromazine was being used in the United States to treat schizophrenia, mania, psychomotor excitement, and other psychotic disorders. Rhône-Poulenc licensed chlorpromazine to Smith Kline & French (today's GlaxoSmithKline) in 1953. In 1955 it was approved in the United States for the treatment of emesis (vomiting). The effect of this drug in emptying psychiatric hospitals has been compared to that of penicillin on infectious diseases. The popularity of the drug fell in the late 1960s as newer drugs came on the scene. From chlorpromazine several other similar antipsychotics were developed, leading to the discovery of antidepressants. Chlorpromazine largely replaced electroconvulsive therapy, hydrotherapy, psychosurgery, and insulin shock therapy. By 1964, about fifty million people worldwide had taken it. 
Chlorpromazine, in widespread use for fifty years, remains a "benchmark" drug in the treatment of schizophrenia, an effective drug although not a perfect one. Society and culture In literature Thorazine was often depicted in Tom Wolfe's The Electric Kool-Aid Acid Test to abort bad trips on LSD. Thorazine is also mentioned in Fear and Loathing in Las Vegas, where it was reported to negate the effects of LSD. Names Brand names include Thorazine, Largactil, Hibernal, and Megaphen (sold by Bayer in West-Germany since July 1953). Research Chlorpromazine has tentative benefit in animals infected with Naegleria fowleri and shows antifungal and antibacterial activity in vitro. Veterinary use The veterinary use of chlorpromazine has generally been superseded by the use of acepromazine. Chlorpromazine may be used as an antiemetic in dogs and cats, or, less often, as a sedative before anesthesia. In horses, it often causes ataxia and lethargy and is therefore seldom used. It is commonly used to decrease nausea in animals that are too young for other common antiemetics. It is sometimes used as a preanesthetic and muscle relaxant in cattle, swine, sheep, and goats. The use of chlorpromazine in food-producing animals is not permitted in the European Union, as a maximum residue limit could not be determined following assessment by the European Medicines Agency.
Biology and health sciences
Psychiatric drugs
Health
149058
https://en.wikipedia.org/wiki/Bulletproof%20vest
Bulletproof vest
A bulletproof vest, also known as a ballistic vest or bullet-resistant vest, is a type of body armour designed to absorb impact and prevent the penetration of firearm projectiles and explosion fragments to the torso. The vest can be either soft—as worn by police officers, security personnel, prison guards, and occasionally private citizens to protect against stabbing attacks or light projectiles—or hard, incorporating metallic or para-aramid components. Soldiers and police tactical units typically wear hard armour, either alone or combined with soft armour, to protect against rifle ammunition or fragmentation. Additional protection includes trauma plates for blunt force and ceramic inserts for high-caliber rounds. Bulletproof vests have evolved over centuries, from early designs like those made for knights and military leaders to modern-day versions. Early ballistic protection used materials like cotton and silk, while contemporary vests employ advanced fibers and ceramic plates. Ongoing research focuses on improving materials and effectiveness against emerging threats. History Early modern era In 1538, Duke Francesco Maria della Rovere, a condottiero, commissioned Filippo Negroli to create a bulletproof vest. In 1561, Maximilian II, Holy Roman Emperor is recorded as testing his armour against gun-fire. Similarly, in 1590 Henry Lee of Ditchley expected his Greenwich armour to be "pistol proof". Its actual effectiveness was controversial at the time. During the English Civil War, Oliver Cromwell's Ironside cavalry were equipped with lobster-tailed pot helmet and musket-proof cuirasses which consisted of two layers of armour plating. The outer layer was designed to absorb the bullet's energy and the thicker inner layer stopped further penetration. The armour would be left badly dented but still serviceable. Industrial era One of the first examples of commercially sold bulletproof armour was produced by a tailor in Dublin in the 1840s. The Cork Examiner reported on his line of business in December 1847. Another soft ballistic vest, Myeonje baegab, was invented in Joseon Korea in the 1860s shortly after the punitive 1866 French expedition to Korea. The regent of Joseon ordered the development of bulletproof armour because of increasing threats from Western armies. Kim Gidu and Gang Yun found that cotton could protect against bullets if 10 layers of cotton fabric were used. The vests were used in battle during the United States expedition to Korea, when the US Navy attacked Ganghwa Island in 1871. The US Navy captured one of the vests and took it to the US, where it was stored at the Smithsonian Museum until 2007. The vest has since been sent back to Korea and is currently on display to the public. Simple ballistic armor was sometimes constructed by criminals. In 1880, a gang of Australian bushrangers led by Ned Kelly devised their own suits of bulletproof armour. The suits had a mass of around and were fashioned from stolen plough mouldboards, most likely in a crude bush forge and possibly with the assistance of blacksmiths. With a cylindrical helmet and apron, the armour protected the wearer's head, torso, upper arms, and upper legs. In June 1880, the four outlaws wore the suits in a gunfight with the police, during which Kelly survived at least 18 bullets striking his armour. In the 1890s, American outlaw and gunfighter Jim Miller was infamous for wearing a steel breastplate under his frock coat as a form of body armor. 
This plate saved Miller on two occasions, and it proved to be highly resistant to pistol bullets and shotguns. One example can be seen in his gun battle with a sheriff named George A. "Bud" Frazer, where the plate managed to deflect all bullets from the lawman's revolver. In 1881, the Tombstone, Arizona physician George E. Goodfellow noticed that Charlie Storms, who was shot twice by faro dealer Luke Short, had one bullet stopped by a silk handkerchief in his breast pocket that prevented that bullet from penetrating. In 1887, he wrote an article titled "Impenetrability of Silk to Bullets" for the Southern California Practitioner documenting the first known instance of bulletproof fabric. He experimented with silk vests resembling gambesons that used 18 to 30 layers of silk to protect the wearers from penetration. Kazimierz Żegleń used Goodfellow's findings to develop a silk bulletproof vest at the end of the 19th century, which could stop the relatively slow rounds from black powder handguns. The vests cost US$800 each in 1914, . A similar vest made by Polish inventor Jan Szczepanik in 1901 saved the life of Alfonso XIII of Spain when he was shot by an attacker. By 1900, US gangsters were wearing $800 silk vests to protect themselves. First World War The combatants of World War I started the war without any attempt at providing the soldiers with body armor. Various private companies advertised body protection suits such as the Birmingham Chemico Body Shield, although these products were generally far too expensive for an average soldier. The first official attempts at commissioning body armor were made in 1915 by the British Army Design Committee, Trench Warfare Section in particular a 'Bomber's Shield'; "bomber" being the term for those who threw grenades rather than grenadier. The Experimental Ordnance Board also reviewed potential materials for bullet and fragment proof armor, such as steel plate. A 'necklet' was successfully issued on a small scale (due to cost considerations), which protected the neck and shoulders from bullets traveling at with interwoven layers of silk and cotton stiffened with resin. The Dayfield body shield entered service in 1916 and a hardened breastplate was introduced the following year. The British army medical services calculated towards the end of the War that three quarters of all battle injuries could have been prevented if an effective armor had been issued. The French experimented with steel visors attached to the Adrian helmet and 'abdominal armor' designed by General Adrian, in addition to shoulder "epaulets" to protect from falling debris and darts. These failed to be practical, because they severely impeded the soldier's mobility. The Germans officially issued body armor in the form of nickel and silicon armor plates that was called sappenpanzer (nicknamed 'Lobster armor') from late 1916. These were similarly too heavy to be practical for the rank-and-file, but were used by static units, such as sentries and occasionally machine-gunners. An improved version, the Infanterie-Panzer, was introduced in 1918, with hooks for equipment. The United States developed several types of body armor, including the chrome nickel steel Brewster Body Shield, which consisted of a breastplate and a headpiece and could withstand Lewis Gun bullets at , but was clumsy and heavy at . A scaled waistcoat of overlapping steel scales fixed to a leather lining was also designed; this armor weighed , fit close to the body, and was considered more comfortable. 
Interwar period During the late 1920s through the early 1930s, gunmen from criminal gangs in the United States began wearing less-expensive vests made from thick layers of cotton padding and cloth. These early vests could absorb the impact of handgun rounds such as .22 Long Rifle, .25 ACP, .32 S&W Long, .32 S&W, .380 ACP, .38 Special and .45 ACP traveling at speeds of up to . To overcome these vests, law enforcement agents began using the newer and more powerful .38 Super, and later the .357 Magnum cartridge. Second World War In 1940, the Medical Research Council in Britain proposed the use of a lightweight suit of armour for general use by infantry, and a heavier suit for troops in more dangerous positions, such as anti-aircraft and naval gun crews. By February 1941, trials had begun on body armour made of mangalloy plates. Two plates covered the front area and one plate on the lower back protected the kidneys and other vital organs. Five thousand sets were made and evaluated to almost unanimous approval – as well as providing adequate protection, the armour didn't severely impede the mobility of the soldier and were reasonably comfortable to wear. The armor was introduced in 1942 although the demand for it was later scaled down. In northwestern Europe, The 2nd Canadian Division during World War II also adopted this armour for medical personnel. The British company Wilkinson Sword began to produce flak jackets for bomber crews in 1943 under contract with the Royal Air Force. The majority of pilot deaths in the air were due to low-velocity fragments rather than bullets. The Surgeon General of the United States Air Force, Colonel M. C. Grow, who was stationed in Britain, thought that many wounds he was treating could have been prevented by some kind of light armor. Two types of armor were issued for different specifications. These jackets were made of nylon and capable of stopping flak and fragmentation, but were not designed to stop bullets. Although they were considered too bulky for pilots using the Avro Lancaster, they were adopted by the United States Army Air Forces. In the early stages of World War II, the United States also designed body armor for infantrymen, but most models were too heavy and mobility-restricting to be useful in the field and incompatible with existing required equipment. Near the middle of 1944, development of infantry body armor in the United States restarted. Several vests were produced for the US military, including but not limited to the T34, the T39, the T62E1, and the M12. The United States developed a vest using doron plate, a fiberglass-based fibre-reinforced plastic. These vests were first used in the Battle of Okinawa in 1945. The Soviet Armed Forces used several types of body armour, including the SN-42 (from Stalnoi Nagrudnik, Russian for "steel breastplate" and the number denotes the design year). All were tested, but only the SN-42 was put in production. It consisted of two pressed steel plates that protected the front torso and groin. The plates were 2 mm thick and weighed 3.5 kg (7.7 lb). This armour was generally supplied to assault engineers (SHISBr) and tank desantniki. The SN armour protected wearers from 9×19mm bullets fired by an MP 40 submachine gun at around , and sometimes it was able to deflect 7.92 Mauser rifle bullets (and bayonet blades), but only at very low angle. This made it useful in urban battles such as the Battle of Stalingrad. However, the SN's weight made it impractical for infantry in the open. 
Some apocryphal accounts note point-blank deflection of 9mm bullets, and testing of similar armour supports this claim.
Postwar
During the Korean War several new vests were produced for the United States military, including the M-1951, which made use of fibre-reinforced plastic or aluminium segments woven into a nylon vest. These vests represented "a vast improvement on weight, but the armor failed to stop bullets and fragments very successfully," although officially they were claimed to be able to stop 7.62×25mm Tokarev pistol rounds at the muzzle. Such vests equipped with Doron Plate have, in informal testing, defeated .45 ACP handgun ammunition. Developed by Natick Laboratories (now the Combat Capabilities Development Command Soldier Center) and introduced in 1967, T65-2 plate carriers were the first vests designed to hold hard ceramic plates, making them capable of stopping 7 mm rifle rounds. These "Chicken Plates" were made of either boron carbide, silicon carbide, or aluminium oxide. They were issued to the crews of low-flying aircraft, such as the UH-1 and UC-123, during the Vietnam War. Conscious of US developments during the Korean War, the Soviet Union also began the development of body armour for its troops, resulting in the adoption of the 6b1 vest in 1957. This marked a shift away from previous systems like the SN-42, which relied on large, monolithic plates that were inflexible and substantially affected a soldier's balance. The 6b1, and all subsequent Soviet body armour, would rely upon ballistic-fabric wrapped plates, initially steel and later titanium and boron carbide. Between 1957 and 1958, anywhere between 1,500 and 5,000 6b1 vests were produced, but they were subsequently put in storage and not issued until the early years of the Soviet–Afghan War, where they were used in limited quantities and were able to resist shrapnel and Tokarev rounds. In 1969, American Body Armor was founded and began to produce a patented combination of quilted nylon faced with multiple steel plates. This armor configuration was marketed to American law enforcement agencies by Smith & Wesson under the trade name "Barrier Vest." The Barrier Vest was the first police vest to gain wide use during high-threat police operations. In 1971, research chemist Stephanie Kwolek discovered a liquid crystalline polymer solution. Its exceptional strength and stiffness led to the invention of Kevlar, a synthetic fibre that, woven into a fabric and layered, has five times the tensile strength of steel by weight. In the mid-1970s, DuPont, the company which employed Kwolek, introduced Kevlar. Kevlar was immediately incorporated into a National Institute of Justice (NIJ) evaluation program to provide lightweight, concealable body armour to a test pool of American law enforcement officers, to ascertain whether everyday concealable wear was possible. Lester Shubin, a program manager at the NIJ, managed this law enforcement feasibility study within a few selected large police agencies and quickly determined that Kevlar body armor could be comfortably worn by police daily, and would save lives. In 1975, Richard A. Armellino, the founder of American Body Armor, marketed an all-Kevlar vest called the K-15, consisting of 15 layers of Kevlar and including a 5" × 8" ballistic steel "Shok Plate" positioned vertically over the heart; Armellino was issued US Patent #3,971,072 for this innovation.
Similarly sized and positioned "trauma plates" are still used today on most vests, reducing blunt trauma and increasing ballistic protection in the center-mass heart/sternum area. In 1976, Richard Davis, founder of Second Chance Body Armor, designed the company's first all-Kevlar vest, the Model Y. The lightweight, concealable vest industry was launched, and a new form of daily protection for the modern police officer was quickly adopted. By the mid-to-late 1980s, an estimated 1/3 to 1/2 of police patrol officers wore concealable vests daily. By 2006, more than 2,000 documented police vest "saves" were recorded, validating the success and effectiveness of lightweight, concealable body armor as a standard piece of everyday police equipment.
Recent years
During the 1980s, the US military issued the PASGT Kevlar vest, tested privately at NIJ level IIA by several sources and able to stop pistol rounds (including 9 mm FMJ), but intended and approved only for fragmentation protection. West Germany issued a similarly rated vest called the Splitterschutzweste. During the early 1980s, body armor vests began to see widespread use by several countries in addition to more prolific users like the US and UK. Following the 1982 Israeli intervention in the Lebanese Civil War, body armor was widely issued to Israeli troops and European peacekeepers and, to a lesser degree, to Syrian troops. During the Soviet–Afghan War the obsolete 6b1 was rapidly replaced by the 6b2, which was issued from 1980 onward and by 1983 had been issued to the vast majority of the 40th Army. Kevlar soft armor had its shortcomings because if "large fragments or high velocity bullets hit the vest, the energy could cause life-threatening, blunt trauma injuries" in selected, vital areas. Ranger Body Armor was developed for the American military in 1991. Although it was the second modern US body armor able to stop rifle-caliber rounds and still be light enough to be worn by infantry soldiers in the field (the first being the ISAPO, or Interim Small Arms Protective Overvest), it still had its flaws: "it was still heavier than the concurrently issued PASGT (Personal Armor System for Ground Troops) anti-fragmentation armor worn by regular infantry and ... did not have the same degree of ballistic protection around the neck and shoulders." The format of Ranger Body Armor (and more recent body armor issued to US special operations units) highlights the trade-offs between force protection and mobility that modern body armor forces organizations to address. Newer armor issued by the United States armed forces to large numbers of troops includes the United States Army's Improved Outer Tactical Vest and the United States Marine Corps Modular Tactical Vest. All of these systems are designed with the vest intended to provide protection from fragments and pistol rounds. Hard ceramic plates, such as the Small Arms Protective Insert, as used with Interceptor Body Armor, are worn to protect the vital organs from higher-level threats. These threats mostly take the form of high-velocity and armor-piercing rifle rounds. Similar types of protective equipment have been adopted by modern armed forces around the world. Since the 1970s, several new fibers and construction methods for bulletproof fabric have been developed besides woven Kevlar, such as DSM's Dyneema, Honeywell's Gold Flex and Spectra, Teijin Aramid's Twaron, Pinnacle Armor's Dragon Skin, and Toyobo's Zylon. The US military has developed body armor for the working dogs who aid soldiers in battle.
Performance standards Due to the various types of projectile, it is often inaccurate to refer to a particular product as "bulletproof" because this implies that it will protect against any and all threats. Instead, the term bullet resistant is generally preferred. Vest specifications will typically include both penetration resistance requirements and limits on the amount of impact force that is delivered to the body. Even without penetration, heavy bullets can deal enough force to cause blunt force trauma under the impact point. On the other hand, some bullets can penetrate the vest, but deal low damage to its wearer due to the loss of speed or small/reduced mass/form. Armour piercing ammunition tends to have poor terminal ballistics due to it being specifically not intended to fragment or expand. Body armor standards are regional. Around the world ammunition varies and as a result the armor testing must reflect the threats found locally. Law enforcement statistics show that many shootings where officers are injured or killed involve the officer's own weapon. As a result, each law enforcement agency or para-military organization will have their own standard for armor performance if only to ensure that their armor protects them from their own weapons. While many standards exist, a few standards are widely used as models. The US National Institute of Justice ballistic and stab documents are examples of broadly accepted standards. In addition to the NIJ, the UK Home Office Scientific Development Branch (HOSDB – formerly the Police Scientific Development Branch (PSDB)) and VPAM (German acronym for the Association of Laboratories for Bullet Resistant Materials And Constructions), originally from Germany, are other widely accepted standards. In the Russian area, the GOST standard is dominant. Soft and hard armor Modern body armor is generally split into one of two categories: soft armor and hard armor. Soft armor is typically made of woven fabrics, like Dyneema or Kevlar, and usually provides protection against fragmentation and handgun threats. Hard armor usually refers to ballistic plates; these hardened plates are designed to defend against rifle threats, in addition to the threats covered by soft armor. Soft armor Soft armour is usually made of woven fabrics (synthetic or natural) and protects up to NIJ level IIIA. Soft armour can be worn stand-alone or can be combined with hard armor as part of an "In-Conjunction" armor system. In these in-conjunction systems, a soft armor "plate backer" is usually placed behind the ballistic plate and the combination of soft and hard armor provides the designated level of protection. Hard armor Broadly, there are three basic types of hard armor ballistic plates: ceramic plate-based systems, steel plate with spall fragmentation protective coating (or backer), and hard fiber-based laminate systems. These hard armor plates may be designed to be used stand-alone or "In-Conjunction" with soft armor backers, also called "plate backers". Many systems contain both hard ceramic components and laminated textile materials used together. Various ceramic materials types are in use, however: aluminum oxide, boron carbide and silicon carbide are the most common. The fibers used in these systems are the same as found in soft textile armor. However, for rifle protection, high pressure lamination of ultra high molecular weight polyethylene with a Kraton matrix is the most common. 
The Small Arms Protective Insert (SAPI) and the enhanced SAPI plate for the United States Department of Defense generally has this form. Due to the use of ceramic plates for rifle protection, these vests are 5–8 times as heavy on an area basis as handgun protection. The weight and stiffness of rifle armor is a major technical challenge. Density, hardness and impact toughness are among the materials properties that are balanced to design these systems. While ceramic materials have some outstanding properties for ballistics, they have poor fracture toughness. Failure of ceramic plates by cracking must also be controlled. For this reason many ceramic rifle plates are a composite. The strike face is ceramic with the backface formed of laminated fiber and resin materials. The hardness of the ceramic prevents the penetration of the bullet while the tensile strength of the fiber backing helps prevent tensile failure. The U.S. military's Small Arms Protective Insert family is a well-known example of these plates. When a ceramic plate is shot, it cracks in the vicinity of the impact, which reduces the protection in this area. Although NIJ 0101.06 requires a Level III plate to stop six rounds of 7.62×51mm M80 ball ammunition, it imposes a minimum distance between shots of 2.0 inches (51mm); if two rounds impact the plate closer than this requirement permits, it may result in a penetration. To counter this, some plates, such as the Ceradyne Model AA4 and IMP/ACT (Improved Multi-hit Performance/Advanced Composite Technology) series, use a stainless steel crack arrestor embedded between the strike face and backer. This layer contains cracks in the strike face to the immediate area around an impact, resulting in markedly improved multi-hit ability; in conjunction with NIJ IIIA soft armor, a 3.9 lb IMP/ACT plate can stop eight rounds of 5.56×45mm M995, and a 4.2 lb plate such as the MH3 CQB can stop either ten rounds of 5.56×45mm M995 or six rounds of 7.62×39mm BZ API. The standards for armor-piercing rifle bullets are not clear-cut, because the penetration of a bullet depends on the hardness of the target armor, and the armor type. However, there are a few general rules. For example, bullets with a soft lead-core and copper jacket are too easily deformed to penetrate hard materials, whereas rifle bullets intended for maximum penetration into hard armor are nearly always manufactured with high-hardness core materials such as tungsten carbide. Most other core materials would have effects between lead and tungsten carbide. Many common bullets, such as the 7.62×39mm M43 standard cartridge for the AK-47/AKM rifle, have a steel core with hardness rating ranging from Rc35 mild steel up to Rc45 medium hard steel. However, there is a caveat to this rule: with regards to penetration, the hardness of a bullet's core is significantly less important than the sectional density of that bullet. This is why there are many more bullets made with tungsten instead of tungsten carbide. Additionally, as the hardness of the bullet core increases, so must the amount of ceramic plating used to stop penetration. Like in soft ballistics, a minimum ceramic material hardness of the bullet core is required to damage their respective hard core materials, however in armor-piercing rounds the bullet core is eroded rather than deformed. The US Department of Defense uses several hard armor plates. The first, the Small Arms Protective Insert (SAPI), called for ceramic composite plates with a mass of 20–30 kg/m2 (4–5 lb/ft2). 
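To give a feel for what these areal densities mean in practice, the short sketch below converts the quoted 20–30 kg/m² figure into an approximate per-plate weight. The plate footprint used here (25 cm × 30 cm) is an assumed, generic medium-plate size chosen only for illustration, not a value taken from any specification.

```python
# Rough conversion of areal density to per-plate mass.
# The 25 cm x 30 cm footprint is an assumed generic medium-plate size.

def plate_mass_kg(areal_density_kg_m2: float, width_m: float, height_m: float) -> float:
    """Mass of a rectangular plate with the given areal density."""
    return areal_density_kg_m2 * width_m * height_m

width_m, height_m = 0.25, 0.30  # assumed plate footprint

for density in (20.0, 30.0):  # kg/m^2, the SAPI range quoted above
    mass = plate_mass_kg(density, width_m, height_m)
    print(f"{density:.0f} kg/m^2 -> {mass:.2f} kg (~{mass * 2.2046:.1f} lb) per plate")
```

With this assumed footprint the specification works out to roughly 1.5–2.3 kg (about 3–5 lb) per plate, broadly consistent with the 4–5 lb/ft² figure quoted above.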
SAPI plates have a black fabric cover with the text "7.62mm M80 Ball Protection"; as expected, they are required to stop three rounds of 7.62×51mm M80 ball, with the plate tilted thirty degrees towards the shooter for the third shot; this practice is common for all three-hit-protective plates in the SAPI series. Later, the Enhanced SAPI (ESAPI) specification was developed to protect from more penetrative ammunition. ESAPI ceramic plates have a green fabric cover with the text "7.62mm APM2 Protection" on the back and a density of 35–45 kg/m2 (7–9 lb/ft2); they are designed to stop bullets like the .30-06 AP (M2) with a hardened steel core. Depending on revision, the plate may stop more than one. Since the issuance of CO/PD 04-19D on January 14, 2007, ESAPI plates are required to stop three rounds of M2AP. The plates may be differentiated by the text "REV." on the back, followed by a letter. A few years after the fielding of the ESAPI, the Department of Defense began to issue XSAPI plates in response to a perceived threat of AP projectiles in Iraq and Afghanistan. Over 120,000 inserts were procured; however, the AP threats they were meant to stop never materialized, and the plates were put into storage. XSAPI plates are required to stop three rounds of either the 7.62×51mm M993 or 5.56×45mm M995 tungsten-carbide armor-piercing projectiles (like newer ESAPIs, the third shot occurs with the plate tilted towards the shooter), and are distinguished by a tan cover with the text "7.62mm AP/WC Protection" on the back. Cercom (now BAE Systems), CoorsTek, Ceradyne, TenCate Advanced Composites, Honeywell, DSM, Pinnacle Armor and a number of other engineering companies develop and manufacture the materials for composite ceramic rifle armor. Body armor standards in the Russian Federation, as established in GOST R 50744-95, differ significantly from American standards, on account of a different security situation. The 7.62×25mm Tokarev round is a relatively common threat in Russia and is known to be able to penetrate NIJ IIIA soft armor. Armor protection in the face of the large numbers of these rounds, therefore, necessitates higher standards. GOST armor standards are more stringent than those of the NIJ with regards to protection and blunt impact. For example, one of the highest protection level, GOST 6A, requires the armor to withstand three 7.62×54mmR B32 API hits fired from 5.10m away with 16mm of back-face deformation (BFD). NIJ Level IV-rated armor is only required to stop 1 hit of .30–06, or 7.62×63mm, M2AP with 44mm BFD. Trauma plates Trauma plates, also called trauma pads, are inserts or pads which are placed behind ballistic armour plates/panels and serve to reduce the blunt force trauma absorbed by the body; they do not necessarily have any ballistic protective properties. While an armour system (hard or soft) may stop a projectile from penetrating, the projectile may still cause significant indentation and deformation of the armour, also called backface deformation. Trauma plates help protect against damage to the body from this backface deformation. Trauma plates should not be confused with soft armor or with ballistic plates, both of which do inherently provide ballistic protection. Explosive protection Bomb disposal officers often wear heavy armor designed to protect against most effects of a moderate sized explosion, such as bombs encountered in terror threats. 
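As a reading aid, the cover markings and required test threats described above can be summarized as follows; this is only a paraphrase of the prose in this section, not official specification data, and the field names are chosen here for illustration.

```python
# Summary of the SAPI-series plates as described in this section.
# Paraphrased reading aid only; not official specification data.

SAPI_FAMILY = {
    "SAPI": {
        "cover": "black, labelled '7.62mm M80 Ball Protection'",
        "required_stop": "three rounds of 7.62x51mm M80 ball "
                         "(third shot with the plate tilted 30 degrees)",
        "areal_density_kg_per_m2": (20, 30),
    },
    "ESAPI": {
        "cover": "green, labelled '7.62mm APM2 Protection'",
        "required_stop": "three rounds of .30-06 M2 AP (since CO/PD 04-19D, 2007)",
        "areal_density_kg_per_m2": (35, 45),
    },
    "XSAPI": {
        "cover": "tan, labelled '7.62mm AP/WC Protection'",
        "required_stop": "three rounds of 7.62x51mm M993 or 5.56x45mm M995 "
                         "(tungsten-carbide AP, third shot tilted)",
        "areal_density_kg_per_m2": None,  # not stated in the text
    },
}

for name, entry in SAPI_FAMILY.items():
    print(name, "-", entry["required_stop"])
```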
Full head helmet, covering the face and some degree of protection for limbs is mandatory in addition to very strong armor for the torso. An insert to protect the spine is usually applied to the back, in case an explosion throws the wearer. Visibility and mobility of the wearer is severely limited, as is the time that can be spent working on the device. Armor designed primarily to counter explosives is often somewhat less effective against bullets than armor designed for that purpose. The sheer mass of most bomb disposal armor usually provides some protection, and bullet-specific armor plates are compatible with some bomb disposal suits. Bomb disposal technicians try to accomplish their task if possible using remote methods (e.g., robots, line and pulleys). Actually laying hands on a bomb is only done in an extremely life-threatening situation, where the hazards to people and critical structures cannot be lessened by using wheeled robots or other techniques. It is notable that despite the protection offered, much of it is in fragmentation. According to some sources, overpressure from ordnance beyond the charge of a typical hand grenade can overwhelm a bomb suit. In some media, an EOD suit is portrayed as a heavily armoured bulletproof suit capable of ignoring explosions and gunfire; in real life, this is not the case, as much of a bomb suit is made up of only soft armor. Stab and stab-ballistic armor Early "ice pick" test In the mid-1980s the state of California Department of Corrections issued a requirement for a body armor using a commercial ice pick as the test penetrator. The test method attempted to simulate the capacity of a human attacker to deliver impact energy with their upper body. As was later shown by the work of the former British PSDB, this test overstated the capacity of human attackers. The test used a drop mass or sabot that carried the ice pick. Using gravitational force, the height of the drop mass above the vest was proportional to the impact energy. This test specified 109 joules (81 ft·lb) of energy and a 7.3 kg (16 lb) drop mass with a drop height of 153 cm (60 in). The ice pick has a 4 mm (0.16 in) diameter with a sharp tip with a 5.4 m/s (17 ft/s) terminal velocity in the test. The California standard did not include knife or cutting-edge weapons in the test protocol. The test method used the oil/clay (Roma Plastilena) tissue simulant as a test backing. In this early phase only titanium and steel plate offerings were successful in addressing this requirement. Point Blank developed the first ice pick certified offerings for CA Department of Corrections in shaped titanium sheet metal. Vests of this type are still in service in US corrections facilities as of 2008. Beginning in the early 1990s, an optional test method was approved by California which permitted the use of 10% ballistic gelatin as a replacement for Roma clay. The transition from hard, dense clay-based Roma to soft low-density gelatin allowed all textile solutions to meet this attack energy requirement. Soon all textile "ice pick" vests began to be adopted by California and other US states as a result of this migration in the test methods. It is important for users to understand that the smooth, round tip of the ice pick does not cut fiber on impact and this permits the use of textile based vests for this application. The earliest of these "all" fabric vests designed to address this ice pick test was Warwick Mills's TurtleSkin ultra tightly woven para-aramid fabric with a patent filed in 1993. 
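The drop-test figures quoted above are internally consistent, as a quick check of the underlying physics shows (the only assumptions are standard gravitational acceleration and a free, frictionless drop):

```python
# Consistency check of the California ice-pick drop test parameters
# quoted above: a 7.3 kg drop mass released from a height of 153 cm.
import math

g = 9.81         # m/s^2, standard gravitational acceleration
mass_kg = 7.3    # drop mass (sabot carrying the ice pick)
height_m = 1.53  # drop height

impact_energy_J = mass_kg * g * height_m       # E = m*g*h
impact_speed_ms = math.sqrt(2 * g * height_m)  # v = sqrt(2*g*h), frictionless

print(f"impact energy ~ {impact_energy_J:.0f} J")    # ~110 J vs the 109 J specified
print(f"impact speed  ~ {impact_speed_ms:.2f} m/s")  # ~5.5 m/s vs the ~5.4 m/s quoted
```

The small differences are consistent with rounding and minor friction losses in a real drop rig.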
Shortly after the TurtleSkin work, in 1995 DuPont patented a medium density fabric that was designated as Kevlar Correctional. These textile materials do not have equal performance with cutting-edge threats and these certifications were only with ice pick and were not tested with knives. HOSDB-Stab and Slash standards Parallel to the US development of "ice pick" vests, the British police, PSDB, was working on standards for knife-resistant body armor. Their program adopted a rigorous scientific approach and collected data on human attack capacity. Their ergonomic study suggested three levels of threat: 25, 35 and 45 joules of impact energy. In addition to impact energy attack, velocities were measured and were found to be 10–20 m/s (much faster than the California test). Two commercial knives were selected for use in this PSDB test method. In order to test at a representative velocity, an air cannon method was developed to propel the knife and sabot at the vest target using compressed air. In this first version, the PSDB ’93 test also used oil/clay materials as the tissue simulant backing. The introduction of knives which cut fiber and a hard-dense test backing required stab vest manufacturers to use metallic components in their vest designs to address this more rigorous standard. The current standard HOSDB Body Armour Standards for UK Police (2007) Part 3: Knife and Spike Resistance is harmonized with the US NIJ OO15 standard, use a drop test method and use a composite foam backing as a tissue simulant. Both the HOSDB and the NIJ test now specify engineered blades, double-edged S1 and single-edge P1 as well as the spike. In addition to the stab standards, HOSDB has developed a standard for slash resistance (2006). This standard, like the stab standards, is based on drop testing with a test knife in a mounting of controlled mass. The slash test uses the Stanley Utility knife or box cutter blades. The slash standard tests the cut resistance of the armor panel parallel to the direction of blade travel. The test equipment measures the force at the instant the blade tip produces a sustained slash through the vest. The criteria require that slash failure of the armor be greater than 80 newtons of force. Combination stab and ballistic vests Vests that combined stab and ballistic protection were a significant innovation in the 1990s period of vest development. The starting point for this development were the ballistic-only offerings of that time using NIJ Level 2A, 2, and 3A or HOSDB HG 1 and 2, with compliant ballistic vest products being manufactured with areal densities of between 5.5 and 6 kg/m2 (1.1 and 1.2 lb/ft2 or 18 and 20 oz/ft2). However police forces were evaluating their "street threats" and requiring vests with both knife and ballistic protection. This multi-threat approach is common in the United Kingdom and other European countries and is less popular in the USA. Unfortunately for multi-threat users, the metallic array and chainmail systems that were necessary to defeat the test blades offered little ballistic performance. The multi-threat vests have areal densities close to the sum of the two solutions separately. These vests have mass values in the 7.5–8.5 kg/m2 (1.55–1.75 lb/ft2) range. Ref (NIJ and HOSDB certification listings). Rolls-Royce Composites -Megit and Highmark produced metallic array systems to address this HOSDB standard. These designs were used extensively by the London Metropolitan Police Service and other agencies in the United Kingdom. 
Standards update US and UK As vest manufacturers and the specifying authorities worked with these standards, the UK and US Standards teams began a collaboration on test methods. A number of issues with the first versions of the tests needed to be addressed. The use of commercial knives with inconsistent sharpness and tip shape created problems with test consistency. As a result, two new "engineered blades" were designed that could be manufactured to have reproducible penetrating behavior. The tissue simulants, Roma clay and gelatin, were either unrepresentative of tissue or not practical for the test operators. A composite-foam and hard-rubber test backing was developed as an alternative to address these issues. The drop test method was selected as the baseline for the updated standard over the air cannon option. The drop mass was reduced from the "ice pick test" and a wrist-like soft linkage was engineered into the penetrator-sabot to create a more realistic test impact. These closely related standards were first issued in 2003 as HOSDB 2003 and NIJ 0015. (The Police Scientific Development Branch (PSDB) was renamed the Home Office Scientific Development Branch in 2004.) Stab and spike vests These new standards created a focus on Level 1 at , Level 2 at , Level 3 at protection as tested with the new engineered knives defined in these test documents. The lowest level of this requirement at 25 joules was addressed by a series of textile products of both wovens, coated wovens and laminated woven materials. All of these materials were based on Para-aramid fiber. The co-efficient of friction for ultra high molecular weight polyethylene (UHMWPE) prevented its use in this application. The TurtleSkin DiamondCoat and Twaron SRM products addressed this requirement using a combination of Para-Aramid wovens and bonded ceramic grain. These ceramic-coated products do not have the flexibility and softness of un-coated textile materials. For the higher levels of protection L2 and L3, the very aggressive penetration of the small, thin P1 blade has resulted in the continued use of metallic components in stab armor. In Germany, Mehler Vario Systems developed hybrid vests of woven para-aramid and chainmail, and their solution was selected by London's Metropolitan Police Service. Another German company BSST, in cooperation with Warwick Mills, has developed a system to meet the ballistic-stab requirement using Dyneema laminate and an advanced metallic-array system, TurtleSkin MFA. This system is currently implemented in the Netherlands. The trend in multi threat armor continues with requirements for needle protection in the Draft ISO prEN ISO 14876 norm. In many countries there is also an interest to combine military style explosive fragmentation protection with bullet-ballistics and stab requirements. Armor carriers In order for ballistic protection to be wearable, the ballistic panels and/or hard rifle-resistant plates are placed within a carrier. The term "plate carrier" is used specifically to refer to armour carriers which can hold ballistic plates. Broadly, there are two major types of carriers: overt carriers, and low-profile carriers which are meant to be concealed: Overt/Tactical carriers Overt/Tactical armour carriers typically include pouches and/or mounting systems, like MOLLE, for carrying gear and are usually designed to provide higher amounts of protection. 
The Improved Outer Tactical Vest and Soldier Plate Carrier Systems are examples of military carriers design to be used with ballistic plate inserts. In addition to load carriage, this type of carrier may include pockets for neck protection, side plates, groin plates, and backside protection. As this style of carrier is not close fitting, sizing in this system is straightforward for both men and women, making custom fabrication unnecessary. Low-Profile/Concealable carriers Low profile/concealable carriers holds the ballistic panels and/or ballistic plates close to the wearer's body and a uniform shirt may be worn over the carrier. This type of carrier must be designed to conform closely to the officer's body shape. For concealable armor to conform to the body it must be correctly fitted to a particular individual. Many programs specify full custom measurement and manufacturing of armor panels and carriers to ensure good fit and comfortable armor. Officers who are either female or significantly overweight have more difficulty in getting accurately measured and having comfortable armor fabricated. Vest slips A third textile layer is often found between the carrier and the ballistic components. The ballistic panels are covered in a coated pouch or slip. This slip provides the encapsulation of the ballistic materials. Slips are manufactured in two types: heat sealed hermetic slips and simple sewn slips. For some ballistic fibers such as Kevlar the slip is a critical part of the system. The slip prevents moisture from the user's body from saturating the ballistic materials. This protection from moisture cycling increases the useful life of the armor. Research Non-standard designs of hard armour The vast majority of hard body armor plates, including the U.S. military's Small Arms Protective Insert family, are monolithic; their strike faces consist of a single ceramic tile. Monolithic plates are lighter than their non-monolithic counterparts, but suffer from reduced effectiveness when shot multiple times in a close area (i.e., shots spaced less than two inches/5.1 cm apart). However, several non-monolithic armor systems have emerged, the most well-known being the controversial Dragon Skin system. Dragon Skin, composed of dozens of overlapping ceramic scales, promised superior multi-hit performance and flexibility compared to the then-current ESAPI plate; however, it failed to deliver. When the U.S. Army tested the system against the same requirements as the ESAPI, Dragon Skin showed major issues with environmental damage; the scales would come apart when subjected to temperatures above 120 °F (49 °C) – not uncommon in Middle Eastern climates – when exposed to diesel vehicle fuel, or after the two four-foot drop tests (after these drops, ESAPI plates are put in an X-ray machine to determine the location of cracks, and then shot directly on said cracks), leaving the plate unable to reach its stated threat level and suffering 13 first- or second-shot complete penetrations by .30–06 M2 AP (the ESAPI test threat) out of 48 shots. Perhaps less-well known is LIBA (Light Improved Body Armor), manufactured by Royal TenCate, ARES Protection, and Mofet Etzion in the early 2000s. LIBA uses an innovative array of ceramic pellets embedded in a polyethylene backer; although this layout lacks the flexibility of Dragon Skin, it provides impressive multi-hit ability as well as the unique ability to repair the armor by replacing damaged pellets and epoxying them over. 
In addition, there are variants of LIBA with multi-hit capacity against threats analogous to 7.62×51mm NATO M993 AP/WC, a tungsten-cored armor-piercing round. Field tests of LIBA have yielded successful results, with 15 AKM hits producing only minor bruises. Progress in material science Ballistic vests use layers of very strong fibers to "catch" and deform a bullet, mushrooming it into a dish shape, and spreading its force over a larger portion of the vest fiber. The vest absorbs the energy from the deforming bullet, bringing it to a stop before it can completely penetrate the textile matrix. Some layers may be penetrated but as the bullet deforms, the energy is absorbed by a larger and larger fiber area. In recent years, advances in material science have opened the door to the idea of a literal "bulletproof vest" able to stop handgun and rifle bullets with a soft textile vest, without the assistance of additional metal or ceramic plating. However, progress is moving at a slower rate compared to other technical disciplines. The most recent offering from Kevlar, Protera, was released in 1996. Current soft body armor can stop most handgun rounds (which has been the case for roughly 15 years ), but armor plates are needed to stop rifle rounds and steel-core handgun rounds such as 7.62×25mm. The para-aramids have not progressed beyond the limit of 23 grams per denier in fiber tenacity. Modest ballistic performance improvements have been made by new producers of this fiber type. Much the same can be said for the UHMWPE material; the basic fiber properties have only advanced to the 30–35 g/d range. Improvements in this material have been seen in the development of cross-plied non-woven laminate, e.g. Spectra Shield. The major ballistic performance advance of fiber PBO is known as a "cautionary tale" in materials science. This fiber permitted the design of handgun soft armor that was 30–50% lower in mass as compared to the aramid and UHMWPE materials. However this higher tenacity was delivered with a well-publicized weakness in environmental durability. Akzo-Magellan (now DuPont) teams have been working on fiber called M5 fiber; however, its announced startup of its pilot plant has been delayed more than 2 years. Data suggests if the M5 material can be brought to market, its performance will be roughly equivalent to PBO. In May 2008, the Teijin Aramid group announced a "super-fibers" development program. The Teijin emphasis appears to be on computational chemistry to define a solution to high tenacity without environmental weakness. The materials science of second generation "super" fibers is complex, requires large investments, and represent significant technical challenges. Research aims to develop artificial spider silk which could be super strong, yet light and flexible. Other research has been done to harness nanotechnology to help create super-strong fibers that could be used in future bulletproof vests. In 2018, the US military began conducting research into the feasibility of using artificial silk as body armor, which has the advantages of its light weight and its cooling capability. Textile wovens and laminates research Finer yarns and lighter woven fabrics have been a key factor in improving ballistic results. The cost of ballistic fibers increases dramatically as the yarn size decreases, so it's unclear how long this trend can continue. The current practical limit of fiber size is 200 denier with most wovens limited at the 400 denier level. 
Three-dimensional weaving, with fibers connecting flat wovens together into a 3D system, is being considered for both hard and soft ballistics. Team Engineering Inc is designing and weaving these multilayer materials.
Dyneema
DSM has developed higher-performance laminates, designated SB61 and HB51, using a new, higher-strength fiber. DSM feels this advanced material provides some improved performance; however, the SB61 "soft ballistic" version has been recalled. At the Shot Show in 2008, a unique composite of interlocking steel plates and soft UHMWPE plate was exhibited by TurtleSkin. In combination with more traditional woven fabrics and laminates, a number of research efforts are working with ballistic felts. Tex Tech has been working on these materials. Like the 3D weaving, Tex Tech sees the advantage in the 3-axis fiber orientation.
Fibers used
Ballistic nylon (until the 1970s) or Kevlar, Twaron or Spectra (a competitor for Kevlar) or polyethylene fiber could be used to manufacture bulletproof vests. The vests of the time were made of ballistic nylon and supplemented by plates of fiberglass, steel, ceramic, titanium, Doron and composites of ceramic and fiberglass, the last being the most effective.
Developments in ceramic armor
Ceramic materials, materials processing and progress in ceramic penetration mechanics are significant areas of academic and industrial activity. This combined field of ceramic armor research is broad and is perhaps summarized best by The American Ceramic Society. ACerS has run an annual armor conference for a number of years and compiled proceedings for 2004–2007. An area of special activity pertaining to vests is the emerging use of small ceramic components. Large torso-sized ceramic plates are complex to manufacture and are subject to cracking in use. Monolithic plates also have limited multi-hit capacity as a result of their large impact fracture zone. These are the motivations for new types of armor plate. These new designs use two- and three-dimensional arrays of ceramic elements that can be rigid, flexible, or semi-flexible. Dragon Skin body armor is one of these systems. European developments in spherical and hexagonal arrays have resulted in products that have some flex and multi-hit performance. The manufacture of array-type systems with flex and consistent ballistic performance at the edges of ceramic elements is an active area of research. In addition to advanced ceramic processing techniques, arrays require adhesive assembly methods. One novel approach is the use of hook and loop fasteners to assemble the ceramic arrays.
Nanomaterials in ballistics
Currently, there are a number of methods by which nanomaterials are being implemented into body armor production. The first, developed at the University of Delaware, is based on nanoparticles within the suit that become rigid enough to protect the wearer as soon as a kinetic energy threshold is surpassed. These coatings have been described as shear thickening fluids. These nano-infused fabrics have been licensed by BAE Systems, but as of mid-2008, no products had been released based on this technology. In 2005 an Israeli company, ApNano, developed a material that was always rigid. It was announced that this nanocomposite based on tungsten disulfide nanotubes was able to withstand shocks generated by a steel projectile traveling at velocities of up to 1.5 km/s.
The material was also reportedly able to withstand shock pressures generated by other impacts of up to 250 metric tons-force per square centimeter (24.5 gigapascals; 3,550,000 psi). During the tests, the material proved to be so strong that after the impact the samples remained essentially unmarred. Additionally, a study in France tested the material under isostatic pressure and found it to be stable up to at least 350 tf/cm2 (34 GPa; 5,000,000 psi). As of mid-2008, spider silk bulletproof vests and nano-based armors are being developed for potential market release. Both the British and American militaries have expressed interest in a carbon fiber woven from carbon nanotubes that was developed at University of Cambridge and has the potential to be used as body armor. In 2008, large format carbon nanotube sheets began being produced at Nanocomp. Graphene composite In late 2014, researchers began studying and testing graphene as a material for use in body armor. Graphene is manufactured from carbon and is the thinnest, strongest, and most conductive material on the planet. Taking the form of hexagonally arranged atoms, its tensile strength is known to be 200 times greater than steel, but studies from Rice University have revealed it is also 10 times better than steel at dissipating energy, an ability that had previously not been thoroughly explored. To test its properties, the University of Massachusetts stacked together graphene sheets only a single carbon atom thick, creating layers ranging in thickness from 10 nanometers to 100 nanometers from 300 layers. Microscopic spherical silica "bullets" were fired at the sheets at speeds of up to 3 km (1.9 mi) per second, almost nine times the speed of sound. Upon impact, the projectiles deformed into a cone shape around the graphene before ultimately breaking through. In the three nanoseconds it held together however, the transferred energy traveled through the material at a speed of 22.2 km (13.8 mi) per second, faster than any other known material. If the impact stress can be spread out over a large enough area that the cone moves out at an appreciable velocity compared with the velocity of the projectile, stress will not be localized under where it hit. Although a wide impact hole opened up, a composite mixture of graphene and other materials could be made to create a new, revolutionary armor solution. Legality Australia In Australia, it is illegal to import body armour without prior authorisation from Australian Customs and Border Protection Service. It is also illegal to possess body armour without authorization in South Australia, Victoria, Northern Territory, ACT, Queensland, New South Wales, and Tasmania. Canada In all Canadian provinces except for Alberta, British Columbia and Manitoba, it is legal to wear and to purchase body armour such as ballistic vests. Under the laws of these provinces, it is illegal to possess body armour without a license (unless exempted) issued by the provincial government. As of February 2019, Nova Scotia allows “only those who require such armour due to their employment” to own body armor, such as police and corrections officers, citing the use of body armor by criminals. According to the Body Armour Control Act of Alberta which came into force on June 15, 2012, any individual in possession of a valid firearms licence under the Firearms Act of Canada can legally purchase, possess and wear body armour. European Union In the European Union, the import and sale of ballistic vests and body armor is allowed. 
There is an exception for vests that are developed under strict military specifications and/or for main military usage; shield above the level of protection NIJ 4 are considered by the law as "armament materials" and forbidden for civilian use. There are many shops in the EU that sell ballistic vests and body armor, used or new. In Italy, the purchase, ownership and wear of ballistic vests and body armor is not subject to any restriction, except for those ballistic protections that are developed under strict military specifications and/or for main military usage, thus considered by the law as "armament materials" and forbidden to civilians. Furthermore, a number of laws and court rulings during the years have rehearsed the concept of a ballistic vest being mandatory to wear for those individuals who work in the private security sector. In the Netherlands the civilian ownership of body armour is subject to the European Union regulations. Body armour in various ballistic grades is sold by a range of different vendors, mainly aimed at providing to security guards and VIP's. The use of body armour while committing a crime is not an additional offence in itself, but may be interpreted as so under different laws such as resisting arrest. Hong Kong Under Schedule C (item ML13) of Cap. 60G Import and Export (Strategic Commodities) Regulations, "armoured or protective equipment, constructions and components" are not regulated "when accompanying their user for the user's own personal protection". United States United States law restricts possession of body armor for convicted violent felons. Many U.S. states also have penalties for possession or use of body armor by felons. In other states, such as Kentucky, possession is not prohibited, but probation or parole is denied to a person convicted of committing certain violent crimes while wearing body armor and carrying a deadly weapon. Most states do not have restrictions for non-felons.
Hilbert's Nullstellensatz
In mathematics, Hilbert's Nullstellensatz (German for "theorem of zeros", or more literally, "zero-locus-theorem") is a theorem that establishes a fundamental relationship between geometry and algebra. This relationship is the basis of algebraic geometry. It relates algebraic sets to ideals in polynomial rings over algebraically closed fields. This relationship was discovered by David Hilbert, who proved the Nullstellensatz in his second major paper on invariant theory in 1893 (following his seminal 1890 paper in which he proved Hilbert's basis theorem).
Formulation
Let k be a field (such as the rational numbers) and K be an algebraically closed field extension of k (such as the complex numbers). Consider the polynomial ring k[X1, ..., Xn] and let I be an ideal in this ring. The algebraic set V(I) defined by this ideal consists of all n-tuples x = (x1, ..., xn) in K^n such that f(x) = 0 for all f in I. Hilbert's Nullstellensatz states that if p is some polynomial in k[X1, ..., Xn] that vanishes on the algebraic set V(I), i.e. p(x) = 0 for all x in V(I), then there exists a natural number r such that p^r is in I. An immediate corollary is the weak Nullstellensatz: the ideal I ⊆ k[X1, ..., Xn] contains 1 if and only if the polynomials in I do not have any common zeros in K^n. Specializing to the case n = 1, one immediately recovers a restatement of the fundamental theorem of algebra: a polynomial P in ℂ[X] has a root in ℂ if and only if deg P ≠ 0. For this reason, the (weak) Nullstellensatz has been referred to as a generalization of the fundamental theorem of algebra for multivariable polynomials. The weak Nullstellensatz may also be formulated as follows: if I is a proper ideal in k[X1, ..., Xn], then V(I) cannot be empty, i.e. there exists a common zero for all the polynomials in the ideal in every algebraically closed extension of k. This is the reason for the name of the theorem, the full version of which can be proved easily from the 'weak' form using the Rabinowitsch trick. The assumption of considering common zeros in an algebraically closed field is essential here; for example, the elements of the proper ideal (X^2 + 1) in ℝ[X] do not have a common zero in ℝ. With the notation common in algebraic geometry, the Nullstellensatz can also be formulated as I(V(J)) = √J for every ideal J. Here, √J denotes the radical of J and I(U) is the ideal of all polynomials that vanish on the set U. In this way, taking J = √J we obtain an order-reversing bijective correspondence between the algebraic sets in K^n and the radical ideals of k[X1, ..., Xn]. In fact, more generally, one has a Galois connection between subsets of the space K^n and subsets of the algebra k[X1, ..., Xn], where "Zariski closure" and "radical of the ideal generated" are the closure operators. As a particular example, consider a point P = (a1, ..., an) ∈ K^n. Then I(P) = (X1 − a1, ..., Xn − an). More generally, √I is the intersection of the maximal ideals (X1 − a1, ..., Xn − an) taken over all points (a1, ..., an) in V(I). Conversely, every maximal ideal of the polynomial ring K[X1, ..., Xn] (note that K is algebraically closed) is of the form (X1 − a1, ..., Xn − an) for some (a1, ..., an) ∈ K^n. As another example, an algebraic subset W in K^n is irreducible (in the Zariski topology) if and only if I(W) is a prime ideal.
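Two concrete illustrations may help fix the notation. First, for J = (X²) in ℂ[X] one has V(J) = {0} and I(V(J)) = (X) = √J, in accordance with the formula above. Second, the Rabinowitsch trick mentioned above, which derives the strong form from the weak form, can be sketched as follows (a standard argument; by Hilbert's basis theorem one may assume I = (f1, ..., fs)):

```latex
% Rabinowitsch trick: deduce the strong Nullstellensatz from the weak form.
% Assume p vanishes on V(I) for I = (f_1,\dots,f_s) \subseteq k[X_1,\dots,X_n].
% Adjoin a new variable Y and consider the ideal
%   I' = (f_1,\dots,f_s,\, 1 - Y p) \subseteq k[X_1,\dots,X_n,Y].
% I' has no common zero in K^{n+1}: wherever all f_i vanish, p vanishes too,
% so 1 - Yp takes the value 1.  By the weak Nullstellensatz, 1 \in I':
\[
   1 \;=\; \sum_{i=1}^{s} g_i(X_1,\dots,X_n,Y)\, f_i
          \;+\; g_0(X_1,\dots,X_n,Y)\,\bigl(1 - Y p\bigr).
\]
% Substituting Y = 1/p and multiplying through by p^r, with r at least the
% highest power of Y occurring in the g_i, clears all denominators and kills
% the last term, leaving
\[
   p^{\,r} \;=\; \sum_{i=1}^{s} h_i(X_1,\dots,X_n)\, f_i \;\in\; I .
\]
```

The same substitution argument is what makes the constructive proofs below effective for the strong form as well.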
Proofs
There are many known proofs of the theorem. Some are non-constructive, such as the first one. Others are constructive, based on algorithms for expressing 1 or p^r as a linear combination of the generators of the ideal.
Using Zariski's lemma
Zariski's lemma asserts that if a field is finitely generated as an associative algebra over a field k, then it is a finite field extension of k (that is, it is also finitely generated as a vector space). Here is a sketch of a proof using this lemma. Let A = k[t1, ..., tn] (k an algebraically closed field), I an ideal of A, and V the common zeros of I in k^n. Clearly, √I ⊆ I(V). Let f ∉ √I. Then f ∉ 𝔭 for some prime ideal 𝔭 ⊇ I in A. Let R = (A/𝔭)[f⁻¹], the localization of A/𝔭 at the image of f, and let 𝔪 be a maximal ideal in R. By Zariski's lemma, R/𝔪 is a finite extension of k; thus, it is k, since k is algebraically closed. Let x1, ..., xn be the images of t1, ..., tn under the natural map A → R/𝔪 passing through R. It follows that x = (x1, ..., xn) ∈ V and f(x) ≠ 0, so f ∉ I(V); hence I(V) ⊆ √I.
Using resultants
The following constructive proof of the weak form is one of the oldest proofs (the strong form results from the Rabinowitsch trick, which is also constructive). The resultant of two polynomials depending on a variable x and other variables is a polynomial in the other variables that is in the ideal generated by the two polynomials, and has the following property: if one of the polynomials is monic in x, every zero (in the other variables) of the resultant may be extended into a common zero of the two polynomials. The proof is as follows. If the ideal is principal, generated by a non-constant polynomial p that depends on x1, one chooses arbitrary values for the other variables. The fundamental theorem of algebra asserts that this choice can be extended to a zero of p. In the case of several polynomials p1, ..., ps, a linear change of variables allows one to suppose that p1 is monic in the first variable x1. Then, one introduces new variables u2, ..., us and considers the resultant R = Res_{x1}(p1, u2 p2 + ··· + us ps). As R is in the ideal generated by p1, ..., ps, the same is true for the coefficients in R of the monomials in u2, ..., us. So, if 1 is in the ideal generated by these coefficients, it is also in the ideal generated by p1, ..., ps. On the other hand, if these coefficients have a common zero, this zero can be extended into a common zero of p1, ..., ps by the above property of the resultant. This proves the weak Nullstellensatz by induction on the number of variables.
Using Gröbner bases
A Gröbner basis is an algorithmic concept that was introduced in 1973 by Bruno Buchberger. It is presently fundamental in computational algebraic geometry. A Gröbner basis is a special generating set of an ideal from which most properties of the ideal can easily be extracted. Those that are related to the Nullstellensatz are the following: An ideal contains 1 if and only if its reduced Gröbner basis (for any monomial ordering) is {1}. The number of common zeros of the polynomials in a Gröbner basis is strongly related to the number of monomials that are irreducible by the basis. Namely, the number of common zeros is infinite if and only if the same is true for the irreducible monomials; if the two numbers are finite, the number of irreducible monomials equals the number of zeros (in an algebraically closed field), counted with multiplicities. With a lexicographic monomial order, the common zeros can be computed by solving iteratively univariate polynomials (this is not used in practice since one knows better algorithms). Strong Nullstellensatz: a power of p belongs to an ideal I if and only if the saturation of I by p produces the Gröbner basis {1}. Thus, the strong Nullstellensatz results almost immediately from the definition of the saturation.
Generalizations
The Nullstellensatz is subsumed by a systematic development of the theory of Jacobson rings, which are those rings in which every radical ideal is an intersection of maximal ideals. Given Zariski's lemma, proving the Nullstellensatz amounts to showing that if k is a field, then every finitely generated k-algebra R (necessarily of the form R = k[t1, ..., tn]/I) is Jacobson. More generally, one has the following theorem: Let R be a Jacobson ring. If S is a finitely generated R-algebra, then S is a Jacobson ring. Furthermore, if 𝔫 ⊆ S is a maximal ideal, then 𝔪 := 𝔫 ∩ R is a maximal ideal of R, and S/𝔫 is a finite extension of R/𝔪.
Other generalizations proceed from viewing the Nullstellensatz in scheme-theoretic terms as saying that for any field k and nonzero finitely generated k-algebra R, the morphism admits a section étale-locally (equivalently, after base change along for some finite field extension ). In this vein, one has the following theorem: Any faithfully flat morphism of schemes locally of finite presentation admits a quasi-section, in the sense that there exists a faithfully flat and locally quasi-finite morphism locally of finite presentation such that the base change of along admits a section. Moreover, if is quasi-compact (resp. quasi-compact and quasi-separated), then one may take to be affine (resp. affine and quasi-finite), and if is smooth surjective, then one may take to be étale. Serge Lang gave an extension of the Nullstellensatz to the case of infinitely many generators: Let be an infinite cardinal and let be an algebraically closed field whose transcendence degree over its prime subfield is strictly greater than . Then for any set of cardinality , the polynomial ring satisfies the Nullstellensatz, i.e., for any ideal we have that . Effective Nullstellensatz In all of its variants, Hilbert's Nullstellensatz asserts that some polynomial belongs or not to an ideal generated, say, by ; we have in the strong version, in the weak form. This means the existence or the non-existence of polynomials such that . The usual proofs of the Nullstellensatz are not constructive, non-effective, in the sense that they do not give any way to compute the . It is thus a rather natural question to ask if there is an effective way to compute the (and the exponent in the strong form) or to prove that they do not exist. To solve this problem, it suffices to provide an upper bound on the total degree of the : such a bound reduces the problem to a finite system of linear equations that may be solved by usual linear algebra techniques. Any such upper bound is called an effective Nullstellensatz. A related problem is the ideal membership problem, which consists in testing if a polynomial belongs to an ideal. For this problem also, a solution is provided by an upper bound on the degree of the . A general solution of the ideal membership problem provides an effective Nullstellensatz, at least for the weak form. In 1925, Grete Hermann gave an upper bound for ideal membership problem that is doubly exponential in the number of variables. In 1982 Mayr and Meyer gave an example where the have a degree that is at least double exponential, showing that every general upper bound for the ideal membership problem is doubly exponential in the number of variables. Since most mathematicians at the time assumed the effective Nullstellensatz was at least as hard as ideal membership, few mathematicians sought a bound better than double-exponential. In 1987, however, W. Dale Brownawell gave an upper bound for the effective Nullstellensatz that is simply exponential in the number of variables. Brownawell's proof relied on analytic techniques valid only in characteristic 0, but, one year later, János Kollár gave a purely algebraic proof, valid in any characteristic, of a slightly better bound. In the case of the weak Nullstellensatz, Kollár's bound is the following: Let be polynomials in variables, of total degree . If there exist polynomials such that , then they can be chosen such that This bound is optimal if all the degrees are greater than 2. 
If is the maximum of the degrees of the , this bound may be simplified to An improvement due to M. Sombra is His bound improves Kollár's as soon as at least two of the degrees that are involved are lower than 3. Projective Nullstellensatz We can formulate a certain correspondence between homogeneous ideals of polynomials and algebraic subsets of a projective space, called the projective Nullstellensatz, that is analogous to the affine one. To do that, we introduce some notations. Let The homogeneous ideal, is called the maximal homogeneous ideal (see also irrelevant ideal). As in the affine case, we let: for a subset and a homogeneous ideal I of R, By we mean: for every homogeneous coordinates of a point of S we have . This implies that the homogeneous components of f are also zero on S and thus that is a homogeneous ideal. Equivalently, is the homogeneous ideal generated by homogeneous polynomials f that vanish on S. Now, for any homogeneous ideal , by the usual Nullstellensatz, we have: and so, like in the affine case, we have: There exists an order-reversing one-to-one correspondence between proper homogeneous radical ideals of R and subsets of of the form The correspondence is given by and Analytic Nullstellensatz (Rückert’s Nullstellensatz) The Nullstellensatz also holds for the germs of holomorphic functions at a point of complex n-space Precisely, for each open subset let denote the ring of holomorphic functions on U; then is a sheaf on The stalk at, say, the origin can be shown to be a Noetherian local ring that is a unique factorization domain. If is a germ represented by a holomorphic function , then let be the equivalence class of the set where two subsets are considered equivalent if for some neighborhood U of 0. Note is independent of a choice of the representative For each ideal let denote for some generators of I. It is well-defined; i.e., is independent of a choice of the generators. For each subset , let It is easy to see that is an ideal of and that if in the sense discussed above. The analytic Nullstellensatz then states: for each ideal , where the left-hand side is the radical of I.
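In symbols, writing 𝒪0 for the ring of germs of holomorphic functions at the origin of ℂ^n, the statement just given has exactly the same shape as the affine theorem:

```latex
% Rueckert's analytic Nullstellensatz at the origin of C^n:
% for every ideal I of the local ring O_0 of germs of holomorphic functions,
\[
   I\bigl(V(I)\bigr) \;=\; \sqrt{I},
   \qquad I \subseteq \mathcal{O}_{\mathbb{C}^n,0},
\]
% formally parallel to the affine statement I(V(J)) = \sqrt{J} for ideals
% of k[X_1,\dots,X_n] over an algebraically closed field.
```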
Borderline personality disorder
Borderline personality disorder (BPD) is a personality disorder characterized by a pervasive, long-term pattern of significant interpersonal relationship instability, a distorted sense of self, and intense emotional responses. People diagnosed with BPD frequently exhibit self-harming behaviours and engage in risky activities, primarily due to challenges regulating emotional states to a healthy, stable baseline. Symptoms such as dissociation (a feeling of detachment from reality), a pervasive sense of emptiness, and an acute fear of abandonment are prevalent among those affected. The onset of BPD symptoms can be triggered by events that others might perceive as normal, with the disorder typically manifesting in early adulthood and persisting across diverse contexts. BPD is often comorbid with substance use disorders, depressive disorders, and eating disorders. BPD is associated with a substantial risk of suicide; studies estimated that up to 10 percent of people with BPD die by suicide. Despite its severity, BPD faces significant stigmatization in both media portrayals and the psychiatric field, potentially leading to underdiagnosis and insufficient treatment. The causes of BPD are unclear and complex, implicating genetic, neurological, and psychosocial conditions in its development. A genetic predisposition is evident, with the disorder significantly more common in people with a family history of BPD, particularly immediate relatives. Psychosocial factors, particularly adverse childhood experiences, have been proposed. The American Diagnostic and Statistical Manual of Mental Disorders (DSM) classifies BPD in the dramatic cluster of personality disorders. There is a risk of misdiagnosis, with BPD most commonly confused with a mood disorder, substance use disorder, or other mental health disorders. Therapeutic interventions for BPD predominantly involve psychotherapy, with dialectical behavior therapy (DBT) and schema therapy the most effective modalities. Although pharmacotherapy cannot cure BPD, it may be employed to mitigate associated symptoms, with atypical antipsychotics (e.g., Quetiapine) and selective serotonin reuptake inhibitor (SSRI) antidepressants commonly being prescribed, though their efficacy is unclear. A 2020 meta-analysis found the use of medications was still unsupported by evidence. BPD has a point prevalence of 1.6% and a lifetime prevalence of 5.9% of the global population, with a higher incidence rate among women compared to men in the clinical setting of up to three times. Despite the high utilization of healthcare resources by people with BPD, up to half may show significant improvement over a ten-year period with appropriate treatment. The name of the disorder, particularly the suitability of the term borderline, is a subject of ongoing debate. Initially, the term reflected historical ideas of borderline insanity and later described patients on the border between neurosis and psychosis. These interpretations are now regarded as outdated and clinically imprecise. Signs and symptoms Borderline personality disorder, as outlined in the DSM-5, manifests through nine distinct symptoms, with a diagnosis requiring at least five of the following criteria to be met: Frantic efforts to avoid real or imagined emotional abandonment. Unstable and chaotic interpersonal relationships, often characterized by a pattern of alternating between extremes of idealization and devaluation, also known as 'splitting'. A markedly disturbed sense of identity and distorted self-image. 
Impulsive or reckless behaviors, including uncontrollable spending, unsafe sexual practices, substance misuse, reckless driving, and binge eating.
Recurrent suicidal ideation or behaviors involving self-harm.
Rapidly shifting, intense emotional states (affective instability).
Chronic feelings of emptiness.
Inappropriate, intense anger that can be difficult to control.
Transient, stress-related paranoid ideation or severe dissociative symptoms.
The distinguishing characteristics of BPD include a pervasive pattern of instability in one's interpersonal relationships and in one's self-image, with frequent oscillation between extremes of idealization and devaluation of others, alongside fluctuating moods and difficulty regulating intense emotional reactions. Dangerous or impulsive behaviors are commonly associated with BPD. Additional symptoms may encompass uncertainty about one's identity, values, morals, and beliefs; experiencing paranoid thoughts under stress; episodes of depersonalization; and, in moderate to severe cases, stress-induced breaks with reality or episodes of psychosis. It is also common for individuals with BPD to have comorbid conditions such as depressive or bipolar disorders, substance use disorders, eating disorders, post-traumatic stress disorder (PTSD), and attention deficit hyperactivity disorder (ADHD).
Mood and affect
Individuals with BPD exhibit emotional dysregulation, characterized by an inability to flexibly respond to and manage emotional states, resulting in intense and prolonged emotional reactions. Such reactions not only deviate from accepted social norms but also surpass what is generally deemed appropriate or proportional to the environmental stimuli encountered. A core characteristic of BPD is affective instability, which manifests as rapid, frequent shifts in mood of high affect intensity and rapid onset, triggered by environmental stimuli. The return to a stable emotional state is notably delayed, exacerbating the challenge of achieving emotional equilibrium. This instability is further intensified by an acute sensitivity to psychosocial cues, leading to significant challenges in managing emotions effectively. As the first component of emotional dysregulation, individuals with BPD show increased emotional sensitivity, especially towards negative mood states such as fear, anger, sadness, rejection, criticism, isolation, and perceived failure. This increased sensitivity results in an intensified response to environmental cues, including the emotions of others. Studies have identified a negativity bias in those with BPD, showing a predisposition towards recognizing and reacting more strongly to negative emotions in others, along with an attentional bias towards processing negatively valenced stimuli. Without effective coping mechanisms, individuals might resort to self-harm or suicidal behaviors to manage or escape from these intense negative emotions. While conscious of the exaggerated nature of their emotional responses, individuals with BPD face challenges in regulating these emotions. To mitigate further distress, there may be an unconscious suppression of emotional awareness, which paradoxically hinders the recognition of situations requiring intervention.
A second component of emotional dysregulation in BPD is high levels of negative affectivity, stemming directly from the individual's emotional sensitivity to negative emotions. This negative affectivity causes emotional reactions that diverge from socially accepted norms, in ways that are disproportionate to the environmental stimuli presented. Those with BPD are relatively unable to tolerate the distress that is encountered in daily life, and they are prone to engage in maladaptive strategies to try to reduce the distress experienced. Maladaptive coping strategies include rumination, thought suppression, experiential avoidance, emotional isolation, as well as impulsive and self-injurious behaviours. American psychologist Marsha Linehan highlights that while the sensitivity, intensity, and duration of emotional experiences in individuals with BPD can have positive outcomes, such as exceptional enthusiasm, idealism, and capacity for joy and love, it also predisposes them to be overwhelmed by negative emotions. This includes experiencing profound grief instead of mere sadness, intense shame instead of mild embarrassment, rage rather than annoyance, and panic over nervousness. Research indicates that individuals with BPD endure chronic and substantial emotional suffering. Emotional dysregulation is a significant feature of BPD, yet Fitzpatrick et al. (2022) suggest that such dysregulation may also be observed in other disorders, like generalized anxiety disorder (GAD). Nonetheless, their findings imply that individuals with BPD particularly struggle with disengaging from negative emotions and achieving emotional equilibrium. Euphoria, or transient intense joy, can occur in those with BPD, but they are more commonly afflicted by dysphoria (a profound state of unease or dissatisfaction), depression, and pervasive distress. Zanarini et al. identified four types of dysphoria characteristic of BPD: intense emotional states, destructiveness or self-destructiveness, feelings of fragmentation or identity loss, and perceptions of victimization. A diagnosis of BPD is closely linked with experiencing feelings of betrayal, lack of control, and self-harm. Moreover, emotional lability, indicating variability or fluctuations in emotional states, is frequent among those with BPD. Although emotional lability may imply rapid alternations between depression and elation, mood swings in BPD are more commonly between anger and anxiety or depression and anxiety. Interpersonal relationships Interpersonal relationships are significantly impacted in individuals with BPD, characterized by a heightened sensitivity to the behavior and actions of others. Individuals with BPD can be very conscious of and susceptible to their perceived or real treatment by others. Individuals may experience profound happiness and gratitude for perceived kindness, yet feel intense sadness or anger towards perceived criticism or harm. A notable feature of BPD is the tendency to engage in idealization and devaluation of others – that is to idealize and subsequently devalue others – oscillating between extreme admiration and profound mistrust or dislike. This pattern, referred to as "splitting," can significantly influence the dynamics of interpersonal relationships. In addition to this external "splitting," patients with BPD typically have internal splitting, i.e. 
vacillation between considering oneself a good person who has been mistreated (in which case anger predominates) and a bad person whose life has no value (in which case self-destructive or even suicidal behavior may occur). This splitting is also evident in black-and-white or all-or-nothing dichotomous thinking. Despite a strong desire for intimacy, individuals with BPD may exhibit insecure, avoidant, ambivalent, or fearfully preoccupied attachment styles in relationships, complicating their interactions and connections with others. Family members, including parents of adults with BPD, may find themselves in a cycle of being overly involved in the individual's life at times and, at other times, significantly detached, contributing to a sense of alienation within the family unit. Personality disorders, including BPD, are associated with an increased incidence of chronic stress and conflict, reduced satisfaction in romantic partnerships, domestic abuse, and unintended pregnancies. Research indicates variability in relationship patterns among individuals with BPD. A portion of these individuals may transition rapidly between relationships, a pattern metaphorically described as "butterfly-like," characterized by fleeting and transient interactions and "fluttering" in and out of relationships. Conversely, a subgroup, referred to as "attached," tends to establish fewer but more intense and dependent relationships. These connections often form rapidly, evolving into deeply intertwined and tumultuous bonds, indicating a more pronounced dependence on these interpersonal ties compared to those without BPD. Individuals with BPD express higher levels of jealousy towards their partners in romantic relations. Behavior Behavioral patterns associated with BPD frequently involve impulsive actions, which may manifest as substance use disorders, binge eating, unprotected sexual encounters, and self-injury among other self-harming practices. These behaviors are a response to the intense emotional distress experienced by individuals with BPD, serving as an immediate but temporary alleviation of their emotional pain. However, such actions typically result in feelings of shame and guilt, contributing to a recurrent cycle. This cycle typically begins with emotional discomfort, followed by impulsive behavior aimed at mitigating this discomfort, only to lead to shame and guilt, which in turn exacerbates the emotional pain. This escalation of emotional pain then intensifies the compulsion towards impulsive behavior as a form of relief, creating a vicious cycle. Over time, these impulsive responses can become an automatic mechanism for coping with emotional pain. Self-harm and suicide Self-harm and suicidal behaviors are core diagnostic criteria for BPD as outlined in the DSM-5. Between 50% and 80% of individuals diagnosed with BPD engage in self-harm, with cutting being the most common method. Other methods, such as bruising, burning, head banging, or biting, are also prevalent. It is hypothesized that individuals with BPD might experience a sense of emotional relief following acts of self-harm. Estimates of the lifetime risk of death by suicide among individuals with BPD range between 3% and 10%, varying with the method of investigation. There is evidence that a significant proportion of males who die by suicide may have undiagnosed BPD. The motivations behind self-harm and suicide attempts among individuals with BPD are reported to differ. 
Nearly 70% of individuals with BPD engage in self-harm without the intention of ending their lives. Motivations for self-harm include expressing anger, self-punishment, inducing feelings of normality in response to dissociative episodes, and distraction from emotional distress or challenging situations. Conversely, true suicide attempts by individuals with BPD are frequently motivated by the notion that others will be better off in their absence.
Sense of self and self-concept
Individuals diagnosed with BPD frequently experience significant difficulties in maintaining a stable self-concept. This instability manifests as uncertainty in personal values, beliefs, preferences, and interests. They may also express confusion regarding their aspirations and objectives in terms of relationships and career paths. Such indeterminacy leads to feelings of emptiness and a profound sense of disorientation regarding their own identity. Moreover, their self-perception can fluctuate dramatically over short periods, oscillating between positive and negative evaluations. Consequently, individuals with BPD might base their sense of self on their surroundings or the people they interact with, resulting in a chameleon-like adaptation of identity.
Dissociation and cognitive challenges
The heightened emotional states experienced by individuals with BPD can impede their ability to concentrate and function cognitively. Additionally, individuals with BPD may frequently dissociate, which can be regarded as a mild to severe disconnection from physical and emotional experiences. Observers may notice signs of dissociation in individuals with BPD through diminished expressiveness in the face or voice, or an apparent disconnection from and insensitivity to emotional cues or stimuli. Dissociation typically arises in response to distressing occurrences or reminders of past trauma, acting as a psychological defense mechanism by diverting attention from the current stressor or by blocking it out entirely. This process is believed to shield the individual from the anticipated overwhelming negative emotions and undesired impulses that the current emotional situation might provoke, and is rooted in the avoidance of intense emotional pain based on past experiences. While this mechanism may offer temporary emotional respite, it can foster unhealthy coping strategies and inadvertently dull positive emotions, thereby obstructing the individual's access to crucial emotional insights. These insights are essential for informed, healthy decision-making in everyday life.
Psychotic symptoms
BPD is predominantly characterized as a disorder of emotional dysregulation, yet psychotic symptoms frequently occur, with about 20–50% of patients reporting them. These manifestations have historically been labeled as "pseudo-psychotic" or "psychotic-like", implying a differentiation from symptoms observed in primary psychotic disorders. Studies conducted in the 2010s suggest a closer similarity between psychotic symptoms in BPD and those in recognized psychotic disorders than previously understood. The distinction of pseudo-psychosis has faced criticism for its weak construct validity and the potential to diminish the perceived severity of these symptoms, potentially hindering accurate diagnosis and effective treatment.
Consequently, there are suggestions from some in the research community to categorize these symptoms as genuine psychosis, advocating for the abolishment of the distinction between pseudo-psychosis and true psychosis. The DSM-5 identifies transient paranoia, exacerbated by stress, as a symptom of BPD. Research has identified the presence of both hallucinations and delusions in individuals with BPD who do not possess an alternate diagnosis that would better explain these symptoms. Further, phenomenological analysis indicates that auditory verbal hallucinations in BPD patients are indistinguishable from those observed in schizophrenia. This has led to suggestions of a potential shared etiological basis for hallucinations across BPD and other disorders, including psychotic and affective disorders. Disability and employment Individuals diagnosed with BPD often possess the capability to engage in employment, provided they secure positions that align with their skill sets and the severity of their condition remains manageable. In certain cases, BPD may be recognized as a disability within the workplace, particularly if the condition's severity results in behaviors that undermine relationships, involve engagement in risky activities, or manifest as intense anger, thereby inhibiting the individual's ability to perform their job role effectively. The United States Social Security Administration officially recognizes BPD as a form of disability, enabling those significantly affected to apply for disability benefits. Causes The etiology, or causes, of BPD is multifaceted, with no consensus on a singular cause. BPD may share a connection with post-traumatic stress disorder (PTSD). While childhood trauma is a recognized contributing factor, the roles of congenital brain abnormalities, genetics, neurobiology, and non-traumatic environmental factors remain subjects of ongoing investigation. Genetics and heritability Compared to other major psychiatric conditions, the exploration of genetic underpinnings in BPD remains novel. Estimates suggest the heritability of BPD ranges from 37% to 69%, indicating that human genetic variations account for a substantial portion of the risk for BPD within the population. Twin studies, which often form the basis of these estimates, may overestimate the perceived influence of genetics due to the shared environment of twins, potentially skewing results. Despite these methodological considerations, certain studies propose that personality disorders are significantly shaped by genetics, more so than many Axis I disorders, such as depression and eating disorders, and even surpassing the genetic impact on broad personality traits. Notably, BPD ranks as the third most heritable among ten surveyed personality disorders. Research involving twin and sibling studies has shown a genetic component to traits associated with BPD, such as impulsive aggression; with the genetic contribution to behavior from serotonin-related genes appearing to be modest. A study conducted by Trull et al. in the Netherlands, which included 711 sibling pairs and 561 parents, aimed to identify genetic markers associated with BPD. This research identified a linkage to genetic markers on chromosome 9 as relevant to BPD characteristics, underscoring a significant genetic contribution to the variability observed in BPD features. Prior findings from this group indicated that 42% of BPD feature variability could be attributed to genetics, with the remaining 58% owing to environmental factors. 
Among specific genetic variants under scrutiny, the DRD4 7-repeat polymorphism (of the dopamine receptor D4) located on chromosome 11 has been linked to disorganized attachment, and in conjunction with the 10/10-repeat genotype of the dopamine transporter (DAT), it has been associated with impaired inhibitory control, both of which are characteristic of BPD. Additionally, potential links to chromosome 5 are being explored, further emphasizing the complex genetic landscape influencing BPD development and manifestation.
Psychosocial factors
Adverse childhood experiences
Empirical studies have established a strong correlation between adverse childhood experiences, such as child abuse and particularly child sexual abuse, and the onset of BPD later in life. Reports from individuals diagnosed with BPD frequently include narratives of extensive abuse and neglect during early childhood, though causality remains a subject of ongoing investigation. These individuals are significantly more prone to recount experiences of verbal, emotional, physical, or sexual abuse by caregivers, alongside a notable frequency of incest and loss of caregivers in early childhood. Moreover, there have been consistent accounts of caregivers invalidating the individuals' emotions and thoughts, neglecting physical care, failing to provide the necessary protection, and exhibiting emotional withdrawal and inconsistency. Specifically, female individuals with BPD reporting past neglect or abuse by caregivers have a heightened likelihood of encountering sexual abuse from individuals outside their immediate family circle. The enduring impact of chronic maltreatment and difficulties in forming secure attachments during childhood have been hypothesized to contribute to the development of BPD.
Invalidating environment
Marsha Linehan's biosocial developmental theory posits that BPD arises from the interaction between a child's inherent emotional vulnerability and an invalidating environment. Emotional vulnerability is thought to be influenced by biological and genetic factors that shape the child's temperament. Traditional biomedical constructions of BPD often focus solely on biological factors. Though these factors certainly play a role in the development of borderline personality disorder, they do not provide a complete picture. A biosocial approach considers the interplay between genetic predispositions and environmental stressors, such as childhood trauma, invalidating environments, and social relationships, in shaping the course of the disorder. Invalidating environments are characterized by the neglect, ridicule, dismissal, or discouragement of a child's emotions and needs, and may also encompass experiences of trauma and abuse. Invalidation from caregivers, peers, or authority figures can lead individuals with borderline personality disorder to doubt the legitimacy of their feelings and experiences. This can exacerbate their emotional dysregulation and contribute to a cycle of invalidation, distress, and maladaptive coping strategies. When emotions are consistently dismissed or criticized, individuals with BPD may resort to destructive behaviors such as self-harm, substance abuse, or impulsive actions to cope with their distress, further perpetuating the negative stigma attached to those who suffer from borderline personality disorder.
Clinical and cultural perspectives
Anthropologist Rebecca Lester identifies two perspectives from which BPD can be viewed: a clinical perspective, in which BPD is a "dysfunction of personality", and an academic perspective, in which BPD is a "mechanism of social regulation". Lester frames BPD as a disorder of relationships and communication: a person with BPD lacks the communication skills and knowledge to interact effectively with others within their society and culture, given their life experience. Lester uses the metaphor of particle-wave duality in quantum physics to describe the distinction between cultural and clinical perspectives on BPD. Just as asking particle-like questions yields particle-like answers and asking wave-like questions yields wave-like answers, Lester argues the same applies to BPD: asking culturally based questions about the presence of BPD yields culturally based answers, while asking clinical, personality-based questions reinforces personality-based perspectives. Lester holds that both perspectives are valid and should work in tandem to provide a greater understanding of BPD, both culturally and for the individual. In this light, Lester argues that the higher rate of BPD diagnosis in women than in men lends support to feminist claims, since a higher rate of diagnosis in women would be expected in cultures where women are victimized. In this view, BPD is seen as a cultural phenomenon. This is understandable when BPD behaviors are viewed as learned behaviors, acquired as a consequence of surviving environments that reinforce worthlessness and rejection. To Lester, these survival techniques evidence human "resilience, adaptation, creativity". Behaviors associated with BPD are therefore an inherently human response.
Brain and neurobiologic factors
Research employing structural neuroimaging techniques, such as voxel-based morphometry, has reported variations in individuals diagnosed with BPD in specific brain regions that have been associated with the psychopathology of BPD. Notably, reductions in volume have been observed in the hippocampus, orbitofrontal cortex, anterior cingulate cortex, and amygdala, among other regions crucial for emotional self-regulation and stress management. In addition to structural imaging, a subset of studies utilizing magnetic resonance spectroscopy has investigated the neurometabolic profile within these affected regions. These investigations have focused on the concentrations of various neurometabolites, including N-acetylaspartate, creatine, compounds related to glutamate, and compounds containing choline. Such studies aim to identify the biochemical alterations that may underlie the symptomatology observed in BPD, offering insights into its neurobiological basis.
Neurological patterns
Research into BPD has identified that the propensity for experiencing intense negative emotions, a trait known as negative affectivity, serves as a more potent predictor of BPD symptoms than a history of childhood sexual abuse alone. This correlation, alongside observed variations in brain structure and the presence of BPD in individuals without traumatic histories, delineates BPD from disorders such as PTSD that are frequently comorbid with it. Consequently, investigations into BPD encompass both developmental and traumatic origins.
Research has shown changes in two brain circuits implicated in the emotional dysregulation characteristic of BPD: firstly, an escalation in activity within brain circuits associated with experiencing severe emotional pain, and secondly, a decreased activation within circuits tasked with the regulation or suppression of these intense emotions. These dysfunctional activations predominantly occur within the limbic system, though individual variances necessitate further neuroimaging research to explore these patterns in detail. Contrary to earlier findings, individuals with BPD exhibit decreased amygdala activation in response to heightened negative emotional stimuli compared to control groups. John Krystal, the editor of Biological Psychiatry, commented on these findings, suggesting they contribute to understanding the innate neurological predisposition of individuals with BPD to lead emotionally turbulent lives, which are not inherently negative or unproductive. This emotional volatility is consistently linked to disparities in several brain regions, emphasizing the neurobiological underpinnings of BPD. Mediating and moderating factors Executive function and social rejection sensitivity High sensitivity to social rejection is linked to more severe symptoms of BPD, with executive function playing a mediating role. Executive function—encompassing planning, working memory, attentional control, and problem-solving—moderates how rejection sensitivity influences BPD symptoms. Studies demonstrate that individuals with lower executive function exhibit a stronger correlation between rejection sensitivity and BPD symptoms. Conversely, higher executive function may mitigate the impact of rejection sensitivity, potentially offering protection against BPD symptoms. Additionally, deficiencies in working memory are associated with increased impulsivity in individuals with BPD. Diagnosis The clinical diagnosis of BPD can be made through a psychiatric assessment conducted by a mental health professional, ideally a psychiatrist or psychologist. This comprehensive assessment integrates various sources of information to confirm the diagnosis, encompassing the patient's self-reported clinical history, observations made by the clinician during interviews, and corroborative details obtained from family members, friends, and medical records. It is crucial to thoroughly assess patients for co-morbid mental health conditions, substance use disorders, suicidal ideation, and any self-harming behaviors. An effective approach involves presenting the criteria of the disorder to the individual and inquiring if they perceive these criteria as reflective of their experiences. Involving individuals in the diagnostic process may enhance their acceptance of the diagnosis. Despite the stigma associated with BPD and previous notions of its untreatability, disclosing the diagnosis to individuals is generally beneficial. It provides them with validation and directs them to appropriate treatment options. The psychological evaluation for BPD typically explores the onset and intensity of symptoms and their impact on the individual's quality of life. Critical areas of focus include suicidal thoughts, self-harm behaviors, and any thoughts of harming others. The diagnosis relies on both the individual's self-reported symptoms and the clinician's observations. 
To exclude other potential causes of the symptoms, additional assessments may include a physical examination and blood tests, for example to rule out thyroid disorders or substance use disorders. The International Classification of Diseases (ICD-10) categorizes the condition as emotionally unstable personality disorder, with diagnostic criteria similar to those in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), in which the disorder's name remains unchanged from previous editions.
DSM-5 diagnostic criteria
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) has eliminated the multiaxial diagnostic system, integrating all disorders, including personality disorders, into Section II of the manual. For a diagnosis of BPD, an individual must meet five out of nine specified diagnostic criteria. The DSM-5 characterizes BPD as a pervasive pattern of instability in interpersonal relationships, self-image, and affect, together with a significant propensity towards impulsive behavior. Moreover, the DSM-5 introduces alternative diagnostic criteria for BPD in Section III, titled "Alternative DSM-5 Model for Personality Disorders". These criteria are rooted in trait research and necessitate the identification of at least four out of seven maladaptive traits. Marsha Linehan highlights the diagnostic challenges faced by mental health professionals in using the DSM criteria due to the broad range of behaviors they encompass. To mitigate these challenges, Linehan categorizes BPD symptoms into five principal areas of dysregulation: emotions, behavior, interpersonal relationships, sense of self, and cognition.
International Classification of Diseases (ICD) diagnostic criteria
ICD-11 diagnostic criteria
The World Health Organization's ICD-11 completely restructured its personality disorder section. It classifies BPD as a personality disorder with the borderline pattern specifier. The borderline pattern specifier is defined as a personality disturbance marked by instability in interpersonal relationships, self-image, and emotions, as well as impulsivity.
ICD-10 diagnostic criteria
The ICD-10 (version 2019) identified a condition akin to BPD, termed emotionally unstable personality disorder (EUPD). This classification described EUPD as a personality disorder with a marked propensity for impulsive behavior without consideration of potential consequences. Individuals with EUPD have noticeably erratic and fluctuating moods and are prone to sudden emotional outbursts, struggling to regulate these rapid shifts in emotion. Conflict and confrontational behavior are common, especially in situations where impulsive actions are criticized or hindered. The ICD-10 recognizes two subtypes of this disorder: the impulsive type, characterized mainly by emotional dysregulation and impulsivity, and the borderline type, which additionally includes disturbances in self-perception, goals, and personal preferences. Those with the borderline subtype also experience a persistent feeling of emptiness, unstable and chaotic interpersonal relationships, and a predisposition towards self-harming behaviors, encompassing both suicidal ideation and suicide attempts.
Millon's subtypes
Psychologist Theodore Millon proposed four subtypes of BPD; individuals with BPD may exhibit none, one, or multiple subtypes.
Misdiagnosis
Individuals with BPD are subject to misdiagnosis due to various factors, notably the overlap (comorbidity) of BPD symptoms with those of other disorders such as depression, PTSD, and bipolar disorder.
Misdiagnosis of BPD can lead to a range of adverse consequences. Diagnosis plays a crucial role in informing healthcare professionals about the patient's mental health status, guiding treatment strategies, and facilitating accurate reporting of successful interventions. Consequently, misdiagnosis may deprive individuals of access to suitable psychiatric medications or evidence-based psychological interventions tailored to their specific disorders. Critics of the BPD diagnosis contend that, in regression and factor analyses, it is indistinguishable from negative affectivity. They maintain that the diagnosis of BPD does not provide additional insight beyond what is captured by other diagnoses, positing that it may be redundant or potentially misleading.
Adolescence and prodrome
The onset of BPD symptoms typically occurs during adolescence or early adulthood, with possible early signs in childhood. Predictive symptoms in adolescents include body image issues, extreme sensitivity to rejection, behavioral challenges, non-suicidal self-injury, seeking exclusive relationships, and profound shame. Although many adolescents exhibit these symptoms without developing BPD, those who do exhibit them are significantly more likely to develop the disorder and potentially face long-term social challenges. BPD is recognized as a stable and valid diagnosis during adolescence, supported by the DSM-5 and ICD-11. Early detection and treatment of BPD in young individuals are emphasized in national guidelines across various countries, including the US, Australia, the UK, Spain, and Switzerland, highlighting the importance of early intervention. Historically, diagnosing BPD during adolescence was met with caution, due to concerns about the accuracy of diagnosing young individuals, the potential misinterpretation of normal adolescent behaviors, stigma, and the stability of personality during this developmental stage. Despite these challenges, research has confirmed the validity and clinical utility of the BPD diagnosis in adolescents, though misconceptions persist among mental health care professionals, contributing to clinical reluctance to diagnose and forming a key barrier to the provision of effective treatment of BPD in this population. A diagnosis of BPD in adolescence can indicate the persistence of the disorder into adulthood, with outcomes varying among individuals. Some maintain a stable diagnosis over time, while others may not consistently meet the diagnostic criteria. Early diagnosis facilitates the development of effective treatment plans, including family therapy, to support adolescents with BPD.
Differential diagnosis and comorbidity
Lifetime co-occurring (comorbid) conditions are prevalent among individuals diagnosed with BPD. Individuals with BPD exhibit higher rates of comorbidity compared to those diagnosed with other personality disorders. These comorbidities include mood disorders (such as major depressive disorder and bipolar disorder), anxiety disorders (including panic disorder, social anxiety disorder, and post-traumatic stress disorder (PTSD)), other personality disorders (notably schizotypal, antisocial, and dependent personality disorder), substance use disorder, eating disorders (anorexia nervosa and bulimia nervosa), attention deficit hyperactivity disorder (ADHD), somatic symptom disorder, and the dissociative disorders.
It is advised that a personality disorder diagnosis should be made cautiously during untreated mood episodes or disorders unless a comprehensive lifetime history supports the existence of a personality disorder.
Comorbid Axis I disorders
A 2008 study found that 75% of individuals with BPD at some point meet criteria for mood disorders, notably major depression and bipolar I, with a similar percentage for anxiety disorders. The same study found that 73% of individuals with BPD meet criteria for substance use disorders, and about 40% for PTSD. This challenges the notion that BPD and PTSD are identical, as fewer than half of those with BPD exhibit PTSD symptoms in their lifetime. The study also noted significant gender differences in comorbidity among individuals with BPD: a higher proportion of males meet criteria for substance use disorders, whereas females are more likely to have PTSD and eating disorders. Additionally, 38% of individuals with BPD were found to meet criteria for ADHD, and 15% for autism spectrum disorder (ASD), in separate studies, highlighting the risk of misdiagnosis due to "lower expressions" of BPD or a complex pattern of comorbidity that might obscure the underlying personality disorder. This complexity in diagnosis underscores the importance of comprehensive assessment in identifying BPD.
Mood disorders
Seventy-five percent of individuals with BPD concurrently experience mood disorders, notably major depressive disorder (MDD) or bipolar disorder (BD), complicating diagnostic clarity due to overlapping symptoms. Distinguishing BPD from BD is particularly challenging, as behaviors that form part of the diagnostic criteria for both BPD and BD may emerge during depressive or manic episodes in BD. However, these behaviors are likely to subside in BD as mood normalizes to euthymia, whereas they are typically pervasive in BPD. Thus, diagnosis should ideally be deferred until mood has stabilized. Differences between BPD and BD mood swings include their duration, with BD episodes typically lasting for at least two weeks at a time, in contrast to the rapid and transient mood shifts seen in BPD. Additionally, BD mood changes are generally unresponsive to environmental stimuli, whereas BPD moods are; for example, a positive event might alleviate a depressive mood in BPD, a responsiveness not observed in BD. Furthermore, the euphoria in BPD lacks the racing thoughts and reduced need for sleep characteristic of BD, though sleep disturbances have been noted in BPD. An exception is rapid-cycling BD, whose mood shifts can be difficult to distinguish from the affective lability of BPD. Historically, BPD was considered a milder form of BD, or part of the bipolar spectrum. However, distinctions in phenomenology, family history, disease progression, and treatment responses refute a singular underlying mechanism for both conditions. Research indicates only a modest association between BPD and BD, challenging the notion of a close spectrum relationship.
Premenstrual dysphoric disorder
BPD is a psychiatric condition distinguishable from premenstrual dysphoric disorder (PMDD), despite some symptom overlap. BPD affects individuals persistently across all stages of the menstrual cycle, unlike PMDD, which is confined to the luteal phase and ends with menstruation.
While PMDD, affecting 3–8% of women, includes mood swings, irritability, and anxiety tied to the menstrual cycle, BPD presents a broader, constant emotional and behavioral challenge irrespective of hormonal changes.
Comorbid Axis II disorders
Approximately 74% of individuals with BPD also fulfill criteria for another Axis II personality disorder during their lifetime, according to research conducted in 2008. The most prevalent co-occurring disorders are from Cluster A (paranoid, schizoid, and schizotypal personality disorders), affecting about half of those with BPD, with schizotypal personality disorder alone impacting one-third of individuals. As BPD is itself a Cluster B disorder, patients also commonly share characteristics with the other Cluster B disorders (antisocial, histrionic, and narcissistic personality disorders), with nearly half of individuals with BPD showing signs of these conditions, and narcissistic personality disorder affecting roughly one-third. Cluster C disorders (avoidant, dependent, and obsessive-compulsive personality disorders) have the least comorbidity with BPD, with just under a third of individuals with BPD meeting the criteria for a Cluster C disorder.
Management
The main approach to managing BPD is psychotherapy, tailored to the individual's specific needs rather than applying a one-size-fits-all model based on the diagnosis alone. While medications do not directly treat BPD, they are beneficial in managing comorbid conditions such as depression and anxiety. Evidence indicates that short-term hospitalization does not offer advantages over community care in terms of enhancing outcomes or the long-term prevention of suicidal behavior among individuals with BPD.
Psychotherapy
Long-term, consistent psychotherapy is the preferred method for treating BPD, and engagement in any therapeutic approach tends to produce better outcomes than no treatment, particularly in diminishing self-harm impulses. Dialectical behavior therapy (DBT), schema therapy, and psychodynamic therapies have shown efficacy, although improvements may require extensive time, often years of dedicated effort. Available treatments for BPD include dynamic deconstructive psychotherapy (DDP), mentalization-based treatment (MBT), schema therapy, transference-focused psychotherapy, dialectical behavior therapy (DBT), and general psychiatric management. The effectiveness of these therapies does not significantly vary between more intensive and less intensive approaches. Transference-focused psychotherapy is designed to mitigate absolutist thinking by encouraging individuals to express their interpretations of social interactions and their emotions, thereby fostering more nuanced and flexible categorizations. Dialectical behavior therapy (DBT), on the other hand, focuses on developing skills in four main areas: interpersonal communication, distress tolerance, emotional regulation, and mindfulness, aiming to equip individuals with BPD with tools to manage intense emotions and improve interpersonal relationships. Cognitive behavioral therapy (CBT) targets the modification of behaviors and beliefs through problem identification related to BPD, showing efficacy in reducing anxiety, mood symptoms, suicidal ideation, and self-harming actions. Mentalization-based therapy and transference-focused psychotherapy draw from psychodynamic principles, while DBT is rooted in cognitive-behavioral principles and mindfulness.
General psychiatric management integrates key aspects from these treatments and is seen as more accessible and less resource-intensive. Studies suggest DBT and MBT may be particularly effective, with ongoing research into developing abbreviated forms of these therapies to enhance accessibility and reduce both financial and resource burdens on patients and providers. Schema therapy considers early maladaptive schemas, conceptualized as organized patterns that recur throughout life in response to memories, emotions, bodily sensations, and cognitions associated with unmet childhood needs. When activated by events in the patient's life, they manifest as schema modes associated with responses such as feelings of abandonment, anger, impulsivity, self-punitiveness, or avoidance and emptiness. Schema therapy attempts to modify early maladaptive schemas and their modes with a variety of cognitive, experiential, and behavioral techniques such as cognitive restructuring, mental imagery, and behavioral experiments. It also seeks to remove some of the stigma associated with BPD by explaining to clients that most people have maladaptive schemas and modes, but that in BPD, the schemas tend to be more extreme, while the modes shift more frequently. In schema therapy, the therapeutic alliance is based on the concept of limited reparenting: it does not only facilitate treatment, but is an integral part of it as the therapist seeks to model a healthy relationship that counteracts some of the instability, rejection, and deprivation often experienced early in life by BPD patients while helping them develop similarly healthy relationships in their broader personal lives. Additionally, mindfulness meditation has been associated with positive structural changes in the brain and improvements in BPD symptoms, with some participants in mindfulness-based interventions no longer meeting the diagnostic criteria for BPD after treatment. Medications A 2010 Cochrane review found that no medications were effective for the core symptoms of BPD, such as chronic feelings of emptiness, identity disturbances, and fears of abandonment. Some medications might impact isolated symptoms of BPD or those of comorbid conditions. A 2017 systematic review and a 2020 Cochrane review confirmed these findings. This 2020 Cochrane review found that while some medications, like mood stabilizers and second-generation antipsychotics, showed some benefits, SSRIs and SNRIs lacked high-level evidence of effectiveness. The review concluded that stabilizers and second-generation antipsychotics may effectively treat some symptoms and associated psychopathology of BPD, but these drugs are not effective for the overall severity of BPD; as such, pharmacotherapy should target specific symptoms. Specific medications have shown varied effectiveness on BPD symptoms: haloperidol and flupenthixol for anger and suicidal behavior reduction; aripiprazole for decreased impulsivity and interpersonal problems; and olanzapine and quetiapine for reducing affective instability, anger, and anxiety, though olanzapine showed less benefit for suicidal ideation than a placebo. Mood stabilizers like valproate and topiramate showed some improvements in depression, impulsivity, and anger, but the effect of carbamazepine was not significant. Of the antidepressants, amitriptyline may reduce depression, but mianserin, fluoxetine, fluvoxamine, and phenelzine sulfate showed no effect. Omega-3 fatty acid may ameliorate suicidality and improve depression. 
At the time of these reviews, trials with these medications had not been replicated and the effect of long-term use had not been assessed. Lamotrigine and other medications, such as IV ketamine for unresponsive depression, require further research into their effects on BPD. Quetiapine showed some benefits for BPD severity, psychosocial impairment, aggression, and manic symptoms at doses of 150 mg/day to 300 mg/day, but the evidence is mixed. Despite the lack of solid evidence, SSRIs and SNRIs are prescribed off-label for BPD and are typically considered adjunctive to psychotherapy. Given the weak evidence and potential for serious side effects, the UK National Institute for Health and Clinical Excellence (NICE) recommends against using drugs specifically for BPD or its associated behaviors and symptoms. Medications may be considered for treating comorbid conditions within a broader treatment plan. Reviews suggest minimizing the use of medications for BPD to very low doses and short durations, emphasizing the need for careful evaluation and management of drug treatment in BPD.
Health care services
The disparity between those who would benefit from treatment and those who receive it, known as the "treatment gap," arises from several factors. These include reluctance to seek treatment, underdiagnosis by healthcare providers, and limited availability of and access to advanced treatments. Furthermore, establishing clear pathways to services and medical care remains a challenge, complicating access to treatment for individuals with BPD. Despite efforts, many healthcare providers lack the training or resources to address severe BPD effectively, an issue acknowledged by both affected individuals and medical professionals. In the context of psychiatric hospitalizations, individuals with BPD constitute approximately 20% of admissions. While many engage in outpatient treatment consistently over several years, reliance on more restrictive and expensive treatment options, such as inpatient admission, tends to decrease over time. Service experiences vary among individuals with BPD. Assessing suicide risk poses a challenge for clinicians, with patients tending to underestimate the lethality of self-harm behaviors. The suicide risk among people with BPD is significantly higher than that of the general population, characterized by a history of multiple suicide attempts during crises. Notably, about half of all individuals who die by suicide are diagnosed with a personality disorder, with BPD being the most common association. In 2014, following the death by suicide of a patient with BPD, the National Health Service (NHS) in England faced criticism from a coroner for the lack of commissioned services to support individuals with BPD. It was stated that 45% of female patients were diagnosed with BPD, yet there was no provision or prioritization of therapeutic psychological services. At that time, England had only 60 specialized inpatient beds for BPD patients, all located in London or the northeast region.
Prognosis
With treatment, the majority of people with BPD can find relief from distressing symptoms and achieve remission, defined as consistent relief from symptoms for at least two years. A longitudinal study tracking the symptoms of people with BPD found that 34.5% achieved remission within two years from the beginning of the study. Within four years, 49.4% had achieved remission, and within six years, 68.6% had achieved remission. By the end of the study, 73.5% of participants were found to be in remission.
Moreover, of those who achieved recovery from symptoms, only 5.9% experienced recurrences. A later study found that ten years from baseline (during a hospitalization), 86% of patients had sustained a stable recovery from symptoms. Other estimates have indicated an overall remission rate of 50% at 10 years, with 93% of people achieving a 2-year remission, 86% achieving at least a 4-year remission, and a 30% risk of relapse over 10 years (relapse indicating a recurrence of BPD symptoms meeting diagnostic criteria). A meta-analysis which followed people over 5 years reported remission rates of 50–70%. Patient personality can play an important role in the therapeutic process and its clinical outcomes. Research has shown that BPD patients undergoing dialectical behavior therapy (DBT) who are higher in the trait of agreeableness exhibit better clinical outcomes than patients who are either low in agreeableness or not treated with DBT. This association was mediated by the strength of the working alliance between patient and therapist; that is, more agreeable patients developed stronger working alliances with their therapists, which in turn led to better clinical outcomes. In addition to recovering from distressing symptoms, people with BPD can also achieve high levels of psychosocial functioning. A longitudinal study tracking the social and work abilities of participants with BPD found that six years after diagnosis, 56% of participants had good function in work and social environments, compared to 26% of participants when they were first diagnosed. Vocational achievement was generally more limited, even compared to those with other personality disorders. However, those whose symptoms had remitted were significantly more likely to have good relationships with a romantic partner and at least one parent, good performance at work and school, a sustained work and school history, and good psychosocial functioning overall.
Epidemiology
BPD has a point prevalence of 1.6% and a lifetime prevalence of 5.9% of the global population. Within clinical settings, the occurrence of BPD is 6.4% among urban primary care patients, 9.3% among psychiatric outpatients, and approximately 20% among psychiatric inpatients. Despite the high utilization of healthcare resources by individuals with BPD, up to half may show significant improvement over a ten-year period with appropriate treatment. Regarding gender distribution, women are diagnosed with BPD three times more frequently than men in clinical environments. Nonetheless, epidemiological research in the United States indicates no significant gender difference in the lifetime prevalence of BPD within the general population. This finding implies that women with BPD may be more inclined to seek treatment than men. Studies examining BPD patients have found no significant differences between genders in the rates of childhood trauma or levels of current psychosocial functioning. The relationship between BPD and ethnicity continues to be ambiguous, with divergent findings reported in the United States. The overall prevalence of BPD in the U.S. prison population is thought to be 17%. These high numbers may be related to the high frequency of substance use and substance use disorders among people with BPD, estimated at 38%.
History The coexistence of intense, divergent moods within an individual was recognized by Homer, Hippocrates, and Aretaeus, the latter describing the vacillating presence of impulsive anger, melancholia, and mania within a single person. The concept was revived by Swiss physician Théophile Bonet in 1684 who, using the term folie maniaco-mélancolique, described the phenomenon of unstable moods that followed an unpredictable course. Other writers noted the same pattern, including the American psychiatrist Charles H. Hughes in 1884 and J. C. Rosse in 1890, who called the disorder "borderline insanity". In 1921, Emil Kraepelin identified an "excitable personality" that closely parallels the borderline features outlined in the current concept of BPD. The idea that there were forms of disorder that were neither psychotic nor simply neurotic began to be discussed in psychoanalytic circles in the 1930s. The first formal definition of borderline disorder is widely acknowledged to have been written by Adolph Stern in 1938. He described a group of patients who he felt to be on the borderline between neurosis and psychosis, who very often came from family backgrounds marked by trauma. He argued that such patients would often need more active support than that provided by classical psychoanalytic techniques. The 1960s and 1970s saw a shift from thinking of the condition as borderline schizophrenia to thinking of it as a borderline affective disorder (mood disorder), on the fringes of bipolar disorder, cyclothymia, and dysthymia. In the DSM-II, stressing the intensity and variability of moods, it was called cyclothymic personality (affective personality). While the term "borderline" was evolving to refer to a distinct category of disorder, psychoanalysts such as Otto Kernberg were using it to refer to a broad spectrum of issues, describing an intermediate level of personality organization between neurosis and psychosis. After standardized criteria were developed to distinguish it from mood disorders and other Axis I disorders, BPD became a personality disorder diagnosis in 1980 with the publication of the DSM-III. The diagnosis was distinguished from sub-syndromal schizophrenia, which was termed "schizotypal personality disorder". The DSM-IV Axis II Work Group of the American Psychiatric Association finally decided on the name "borderline personality disorder", which is still in use by the DSM-5. However, the term "borderline" has been described as uniquely inadequate for describing the symptoms characteristic of this disorder. Etymology Earlier versions of the DSM—before the multiaxial diagnosis system—classified most people with mental health problems into two categories: the psychotics and the neurotics. Clinicians noted a certain class of neurotics who, when in crisis, appeared to straddle the borderline into psychosis. The term "borderline personality disorder" was coined in American psychiatry in the 1960s. It became the preferred term over a number of competing names, such as "emotionally unstable character disorder" and "borderline schizophrenia" during the 1970s. Borderline personality disorder was included in DSM-III (1980) despite not being universally recognized as a valid diagnosis. Controversies Credibility and validity of testimony The credibility of individuals with personality disorders has been questioned at least since the 1960s. 
Two concerns are the incidence of dissociative episodes among people with BPD and the belief that lying is not uncommon in those diagnosed with the condition.
Dissociation
Researchers disagree about whether dissociation, a sense of detachment from emotional and physical experiences, impacts the ability of people with BPD to recall the specifics of past events. A 1999 study reported that the specificity of autobiographical memory was decreased in BPD patients. The researchers found that decreased ability to recall specifics was correlated with patients' levels of dissociation, which 'may help them to avoid episodic information that would evoke acutely negative affect'.
Gender
In clinical settings, up to 80% of patients diagnosed with BPD are women, but this might not necessarily reflect the gender distribution in the entire population. According to Joel Paris, the primary reason for gender disparities in clinical settings is that women are more likely to develop symptoms that prompt them to seek help. Statistics indicate that twice as many women as men in the community experience depression. Conversely, men more frequently meet criteria for substance use disorder and psychopathy, but tend not to seek treatment as often. Additionally, men and women with similar symptoms may manifest them differently. Men often exhibit behaviors such as increased alcohol consumption and criminal activity, while women may internalize anger, leading to conditions like depression and self-harm, such as cutting or overdosing. This may explain the gender gap observed between antisocial personality disorder and borderline personality disorder, which may share similar underlying pathologies but present different symptoms influenced by gender. In a study examining completed suicides among individuals aged 18 to 35, 30% of the suicides were attributed to people with BPD, with a majority being men and almost none receiving treatment. Similar findings were reported in another study. In short, men are less likely to seek or accept appropriate treatment; more likely to be treated for symptoms of BPD such as substance use rather than for BPD itself (the symptoms of BPD and ASPD possibly deriving from a similar underlying etiology); more likely to end up in the correctional system due to criminal behavior; and more likely to die by suicide prior to diagnosis. Among men diagnosed with BPD there is also evidence of a higher suicide rate: "men are more than twice as likely as women—18 percent versus 8 percent"—to die by suicide. There are also sex differences in personality traits and Axis I and II comorbidity. Men with BPD are more likely to use substances recreationally, have an explosive temper, show high levels of novelty seeking, and have antisocial, narcissistic, passive-aggressive, or sadistic personality traits, with male BPD in particular being characterized by antisocial overtones. Women with BPD are more likely to have eating, mood, anxiety, and post-traumatic stress disorders.
Manipulative behavior
Manipulative behavior to obtain nurturance is considered by the DSM-IV-TR and many mental health professionals to be a defining characteristic of borderline personality disorder. In one study, 88% of therapists reported having experienced manipulation attempts from patients. Marsha Linehan has argued that characterizing such behavior as manipulative relies upon the assumption that people with BPD who communicate intense pain, or who engage in self-harm and suicidal behavior, do so with the intention of influencing the behavior of others.
The impact of such behavior on others—often an intense emotional reaction in concerned friends, family members, and therapists—is thus assumed to have been the person's intention. According to Linehan, their frequent expressions of intense pain, self-harming, or suicidal behavior may instead represent a method of mood regulation or an escape mechanism from situations that feel unbearable, however, making their assumed manipulative behavior an involuntary and unintentional response. One paper identified possible reasons for manipulation in BPD: identifying others' feelings and reactions, a regulatory function due to insecurity, communicating one's emotions and connecting to others, or to feel as if one is in control, or allowing them to be "liberated" from relationships or commitments. Stigma The features of BPD include emotional instability, intense and unstable interpersonal relationships, a need for intimacy, and a fear of rejection. As a result, people with BPD often evoke intense emotions in those around them. Pejorative terms to describe people with BPD, such as "difficult", "treatment resistant", "manipulative", "demanding", and "attention seeking", are often used and may become a self-fulfilling prophecy, as the negative treatment of these individuals may trigger further self-destructive behavior. Since BPD can be a stigmatizing diagnosis even within the mental health community, some survivors of childhood abuse who are diagnosed with BPD are re-traumatized by the negative responses they receive from healthcare providers. One camp argues that it would be better to diagnose these people with post-traumatic stress disorder, as this would acknowledge the impact of abuse on their behavior. Critics of the PTSD diagnosis argue that it medicalizes abuse rather than addressing the root causes in society. Regardless, a diagnosis of PTSD does not encompass all aspects of the disorder (see brain abnormalities and terminology). Physical violence The stigma surrounding borderline personality disorder includes the belief that people with BPD are prone to violence toward others. While movies and visual media often sensationalize people with BPD by portraying them as violent, the majority of researchers agree that people with BPD are unlikely to physically harm others. Although people with BPD often struggle with experiences of intense anger, a defining characteristic of BPD is that they direct it inward toward themselves. One 2020 study found that BPD is individually associated with psychological, physical, and sexual forms of intimate partner violence (IPV), especially amongst men. In terms of the AMPD trait facets, hostility (negative affectivity), suspiciousness (negative affectivity) and risk-taking (disinhibition) were most strongly associated with IPV perpetration for the total sample. In addition, adults with BPD have often experienced abuse in childhood, so many people with BPD adopt a "no-tolerance" policy toward expressions of anger of any kind. Their extreme aversion to violence can cause many people with BPD to overcompensate and experience difficulties being assertive and expressing their needs. This is one reason why people with BPD often choose to harm themselves over potentially causing harm to others. Mental health care providers People with BPD are considered to be among the most challenging groups of patients to work with in therapy, requiring a high level of skill and training for the psychiatrists, therapists, and nurses involved in their treatment. 
A majority of psychiatric staff report finding individuals with BPD moderately to extremely difficult to work with and more difficult than other client groups. This largely negative view of BPD can result in people with BPD being terminated from treatment early, being provided harmful treatment, not being informed of their diagnosis of BPD, or being misdiagnosed. With healthcare providers contributing to the stigma of a BPD diagnosis, seeking treatment can often result in the perpetuation of BPD features. Efforts are ongoing to improve public and staff attitudes toward people with BPD. In psychoanalytic theory, the stigmatization among mental health care providers may be thought to reflect countertransference (when a therapist projects his or her feelings onto a client). This inadvertent countertransference can give rise to inappropriate clinical responses, including excessive use of medication, inappropriate mothering, and punitive use of limit setting and interpretation. Some clients feel the diagnosis is helpful, allowing them to understand that they are not alone and to connect with others with BPD who have developed helpful coping mechanisms. However, others experience the term "borderline personality disorder" as a pejorative label rather than an informative diagnosis. They report concerns that their self-destructive behavior is incorrectly perceived as manipulative and that the stigma surrounding this disorder limits their access to health care. Indeed, mental health professionals frequently refuse to provide services to those who have received a BPD diagnosis. Terminology Because of concerns around stigma, and because of a move away from the original theoretical basis for the term (see history), there is ongoing debate about renaming borderline personality disorder. While some clinicians agree with the current name, others argue that it should be changed, since many who are labelled with borderline personality disorder find the name unhelpful, stigmatizing, or inaccurate. Valerie Porr, president of Treatment and Research Advancement Association for Personality Disorders states that "the name BPD is confusing, imparts no relevant or descriptive information, and reinforces existing stigma". Alternative suggestions for names include emotional regulation disorder or emotional dysregulation disorder. Impulse disorder and interpersonal regulatory disorder are other valid alternatives, according to John G. Gunderson of McLean Hospital in the United States. Another term suggested by psychiatrist Carolyn Quadrio is post-traumatic personality disorganization (PTPD), reflecting the condition's status as (often) both a form of chronic post-traumatic stress disorder (PTSD) as well as a personality disorder. However, although many with BPD do have traumatic histories, some do not report any kind of traumatic event, which suggests that BPD is not necessarily a trauma spectrum disorder. The Treatment and Research Advancements National Association for Personality Disorders (TARA-APD) campaigned unsuccessfully to change the name and designation of BPD in DSM-5, published in May 2013, in which the name "borderline personality disorder" remains unchanged and it is not considered a trauma- and stressor-related disorder. Society and culture Literature In literature, characters believed to exhibit signs of BPD include Catherine in Wuthering Heights (1847), Smerdyakov in The Brothers Karamazov (1880), and Harry Haller in Steppenwolf (1927). 
Film Films have also attempted to portray BPD, with characters in Margot at the Wedding (2007), Mr. Nobody (2009), Cracks (2009), Truth (2013), Wounded (2013), Welcome to Me (2014), and Tamasha (2015) all suggested to show traits of the disorder. The behavior of Theresa Dunn in Looking for Mr. Goodbar (1975) is consistent with BPD, as suggested by Robert O. Friedel. Films like Play Misty for Me (1971) and Girl, Interrupted (1999, based on the memoir of the same name) suggest emotional instability characteristic of BPD, while Single White Female (1992) highlights aspects such as identity disturbance and fear of abandonment. Clementine in Eternal Sunshine of the Spotless Mind (2004) is noted to show classic BPD behavior, and Carey Mulligan's portrayal in Shame (2011) is praised for its accuracy regarding BPD characteristics by psychiatrists. Psychiatrists have even analyzed characters such as Kylo Ren and Anakin Skywalker/Darth Vader from the Star Wars films, noting that they meet several diagnostic criteria for BPD. Television Television series like Crazy Ex-Girlfriend (2015) and the miniseries Maniac (2018) depict characters with BPD. Traits of BPD and narcissistic personality disorders are observed in characters like Cersei and Jaime Lannister from A Song of Ice and Fire (1996) and its TV adaptation Game of Thrones (2011). In The Sopranos (1999), Livia Soprano is diagnosed with BPD, and even the portrayal of Bruce Wayne/Batman in the show Titans (2018) is said to include aspects of the disorder. The animated series Bojack Horseman (2014) also features a main character with symptoms of BPD. Awareness Awareness of BPD has been growing, with the U.S. House of Representatives declaring May as Borderline Personality Disorder Awareness Month in 2008. People with BPD will share their personal experiences of living with the disorder on social media to raise awareness of the condition. Public figures like South Korean singer-songwriter Lee Sun-mi have opened up about their personal experiences with the disorder, bringing further attention to its impact on individuals' lives.
Biology and health sciences
Mental disorders
Health
149326
https://en.wikipedia.org/wiki/Phylogenetic%20tree
Phylogenetic tree
A phylogenetic tree, phylogeny or evolutionary tree is a graphical representation which shows the evolutionary history among a set of species or taxa over a specific period of time. In other words, it is a branching diagram or a tree showing the evolutionary relationships among various biological species or other entities based upon similarities and differences in their physical or genetic characteristics. In evolutionary biology, all life on Earth is theoretically part of a single phylogenetic tree, indicating common ancestry. Phylogenetics is the study of phylogenetic trees. The main challenge is to find a phylogenetic tree representing optimal evolutionary ancestry between a set of species or taxa. Computational phylogenetics (also phylogeny inference) focuses on the algorithms involved in finding the optimal phylogenetic tree in the phylogenetic landscape. Phylogenetic trees may be rooted or unrooted. In a rooted phylogenetic tree, each node with descendants represents the inferred most recent common ancestor of those descendants, and the edge lengths in some trees may be interpreted as time estimates. Each node is called a taxonomic unit. Internal nodes are generally called hypothetical taxonomic units, as they cannot be directly observed. Trees are useful in fields of biology such as bioinformatics, systematics, and phylogenetics. Unrooted trees illustrate only the relatedness of the leaf nodes and do not require the ancestral root to be known or inferred. History The idea of a tree of life arose from ancient notions of a ladder-like progression from lower into higher forms of life (such as in the Great Chain of Being). Early representations of "branching" phylogenetic trees include a "paleontological chart" showing the geological relationships among plants and animals in the book Elementary Geology, by Edward Hitchcock (first edition: 1840). Charles Darwin featured a diagrammatic evolutionary "tree" in his 1859 book On the Origin of Species. Over a century later, evolutionary biologists still use tree diagrams to depict evolution because such diagrams effectively convey the concept that speciation occurs through the adaptive and semirandom splitting of lineages. The term phylogenetic, or phylogeny, derives from the two Ancient Greek words φῦλον (phûlon), meaning "race, lineage", and γένεσις (génesis), meaning "origin, source". Properties Rooted tree A rooted phylogenetic tree is a directed tree with a unique node — the root — corresponding to the (usually imputed) most recent common ancestor of all the entities at the leaves of the tree. The root node does not have a parent node, but is the ancestor of all other nodes in the tree. The root is therefore a node of degree 2, while other internal nodes have a minimum degree of 3 (where "degree" here refers to the total number of incoming and outgoing edges). The most common method for rooting trees is the use of an uncontroversial outgroup—close enough to allow inference from trait data or molecular sequencing, but far enough to be a clear outgroup. Another method is midpoint rooting, or a tree can also be rooted by using a non-stationary substitution model. Unrooted tree Unrooted trees illustrate the relatedness of the leaf nodes without making assumptions about ancestry. They do not require the ancestral root to be known or inferred. Unrooted trees can always be generated from rooted ones by simply omitting the root. By contrast, inferring the root of an unrooted tree requires some means of identifying ancestry. 
This is normally done by including an outgroup in the input data so that the root is necessarily between the outgroup and the rest of the taxa in the tree, or by introducing additional assumptions about the relative rates of evolution on each branch, such as an application of the molecular clock hypothesis. Bifurcating versus multifurcating Both rooted and unrooted trees can be either bifurcating or multifurcating. A rooted bifurcating tree has exactly two descendants arising from each interior node (that is, it forms a binary tree), and an unrooted bifurcating tree takes the form of an unrooted binary tree, a free tree with exactly three neighbors at each internal node. In contrast, a rooted multifurcating tree may have more than two children at some nodes and an unrooted multifurcating tree may have more than three neighbors at some nodes. Labeled versus unlabeled Both rooted and unrooted trees can be either labeled or unlabeled. A labeled tree has specific values assigned to its leaves, while an unlabeled tree, sometimes called a tree shape, defines a topology only. Some sequence-based trees built from a small genomic locus, such as Phylotree, feature internal nodes labeled with inferred ancestral haplotypes. Enumerating trees The number of possible trees for a given number of leaf nodes depends on the specific type of tree, but there are always more labeled than unlabeled trees, more multifurcating than bifurcating trees, and more rooted than unrooted trees. The last distinction is the most biologically relevant; it arises because there are many places on an unrooted tree to put the root. For bifurcating labeled trees, the total number of rooted trees is (2n - 3)!! for n ≥ 2, where n represents the number of leaf nodes. For bifurcating labeled trees, the total number of unrooted trees is (2n - 5)!! for n ≥ 3. Among labeled bifurcating trees, the number of unrooted trees with n leaves is equal to the number of rooted trees with n - 1 leaves. The number of rooted trees grows quickly as a function of the number of tips. For 10 tips, there are more than 34 million possible rooted bifurcating trees ((2 × 10 - 3)!! = 34,459,425), and the number of multifurcating trees rises faster, with ca. 7 times as many of the latter as of the former. Special tree types Dendrogram A dendrogram is a general name for a tree, whether phylogenetic or not, and hence also for the diagrammatic representation of a phylogenetic tree. Cladogram A cladogram only represents a branching pattern; i.e., its branch lengths do not represent time or relative amount of character change, and its internal nodes do not represent ancestors. Phylogram A phylogram is a phylogenetic tree that has branch lengths proportional to the amount of character change. Chronogram A chronogram is a phylogenetic tree that explicitly represents time through its branch lengths. Dahlgrenogram A Dahlgrenogram is a diagram representing a cross section of a phylogenetic tree. Phylogenetic network A phylogenetic network is not strictly speaking a tree, but rather a more general graph, or a directed acyclic graph in the case of rooted networks. They are used to overcome some of the limitations inherent to trees. Spindle diagram A spindle diagram, or bubble diagram, is often called a romerogram, after its popularisation by the American palaeontologist Alfred Romer. It represents taxonomic diversity (horizontal width) against geological time (vertical axis) in order to reflect the variation of abundance of various taxa through time. 
A spindle diagram is not an evolutionary tree: the taxonomic spindles obscure the actual relationships of the parent taxon to the daughter taxon and have the disadvantage of involving the paraphyly of the parental group. This type of diagram is no longer used in the form originally proposed. Coral of life Darwin also mentioned that the coral may be a more suitable metaphor than the tree. Indeed, phylogenetic corals are useful for portraying past and present life, and they have some advantages over trees (anastomoses allowed, etc.). Construction Phylogenetic trees composed with a nontrivial number of input sequences are constructed using computational phylogenetics methods. Distance-matrix methods such as neighbor-joining or UPGMA, which calculate genetic distance from multiple sequence alignments, are simplest to implement, but do not invoke an evolutionary model. Many sequence alignment methods such as ClustalW also create trees by using the simpler algorithms (i.e. those based on distance) of tree construction. Maximum parsimony is another simple method of estimating phylogenetic trees, but implies an implicit model of evolution (i.e. parsimony). More advanced methods use the optimality criterion of maximum likelihood, often within a Bayesian framework, and apply an explicit model of evolution to phylogenetic tree estimation. Identifying the optimal tree using many of these techniques is NP-hard, so heuristic search and optimization methods are used in combination with tree-scoring functions to identify a reasonably good tree that fits the data. Tree-building methods can be assessed on the basis of several criteria: efficiency (how long does it take to compute the answer, how much memory does it need?) power (does it make good use of the data, or is information being wasted?) consistency (will it converge on the same answer repeatedly, if each time given different data for the same model problem?) robustness (does it cope well with violations of the assumptions of the underlying model?) falsifiability (does it alert us when it is not good to use, i.e. when assumptions are violated?) Tree-building techniques have also gained the attention of mathematicians. Trees can also be built using T-theory. File formats Trees can be encoded in a number of different formats, all of which must represent the nested structure of a tree. They may or may not encode branch lengths and other features. Standardized formats are critical for distributing and sharing trees without relying on graphics output that is hard to import into existing software. Commonly used formats are Nexus file format Newick format Limitations of phylogenetic analysis Although phylogenetic trees produced on the basis of sequenced genes or genomic data in different species can provide evolutionary insight, these analyses have important limitations. Most importantly, the trees that they generate are not necessarily correct – they do not necessarily accurately represent the evolutionary history of the included taxa. As with any scientific result, they are subject to falsification by further study (e.g., gathering of additional data, analyzing the existing data with improved methods). The data on which they are based may be noisy; the analysis can be confounded by genetic recombination, horizontal gene transfer, hybridisation between species that were not nearest neighbors on the tree before hybridisation takes place, and conserved sequences. 
Also, there are problems in basing an analysis on a single type of character, such as a single gene or protein or only on morphological analysis, because such trees constructed from another unrelated data source often differ from the first, and therefore great care is needed in inferring phylogenetic relationships among species. This is most true of genetic material that is subject to lateral gene transfer and recombination, where different haplotype blocks can have different histories. In these types of analysis, the output tree of a phylogenetic analysis of a single gene is an estimate of the gene's phylogeny (i.e. a gene tree) and not the phylogeny of the taxa (i.e. species tree) from which these characters were sampled, though ideally, both should be very close. For this reason, serious phylogenetic studies generally use a combination of genes that come from different genomic sources (e.g., from mitochondrial or plastid vs. nuclear genomes), or genes that would be expected to evolve under different selective regimes, so that homoplasy (false homology) would be unlikely to result from natural selection. When extinct species are included as terminal nodes in an analysis (rather than, for example, to constrain internal nodes), they are considered not to represent direct ancestors of any extant species. Extinct species do not typically contain high-quality DNA. The range of useful DNA materials has expanded with advances in extraction and sequencing technologies. Development of technologies able to infer sequences from smaller fragments, or from spatial patterns of DNA degradation products, would further expand the range of DNA considered useful. Phylogenetic trees can also be inferred from a range of other data types, including morphology, the presence or absence of particular types of genes, insertion and deletion events – and any other observation thought to contain an evolutionary signal. Phylogenetic networks are used when bifurcating trees are not suitable, due to these complications which suggest a more reticulate evolutionary history of the organisms sampled.
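The Construction and File formats sections above mention distance-matrix methods such as UPGMA and the Newick format. As a rough, illustrative sketch of how the two fit together (not a recipe from any particular study), the following Python snippet clusters a small invented distance matrix with average-linkage clustering, which is the UPGMA criterion, and prints the resulting topology as a Newick string; the taxon names and all distances are made up for the example.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree
    from scipy.spatial.distance import squareform

    # Hypothetical pairwise distances for four taxa (symmetric, zero diagonal).
    labels = ["A", "B", "C", "D"]
    distances = np.array([
        [0.0, 2.0, 4.0, 6.0],
        [2.0, 0.0, 4.0, 6.0],
        [4.0, 4.0, 0.0, 6.0],
        [6.0, 6.0, 6.0, 0.0],
    ])

    # UPGMA is average-linkage agglomerative clustering on the condensed distance matrix.
    merge_history = linkage(squareform(distances), method="average")
    root = to_tree(merge_history)

    def to_newick(node, names):
        # Recursively write a topology-only Newick string for a SciPy ClusterNode.
        if node.is_leaf():
            return names[node.id]
        return "(" + to_newick(node.left, names) + "," + to_newick(node.right, names) + ")"

    print(to_newick(root, labels) + ";")  # expected to print a tree such as (((A,B),C),D);

Dedicated libraries such as Biopython provide richer tree objects and readers and writers for the Newick and Nexus formats, but the sketch shows the basic flow from a distance matrix to a rooted tree and its text encoding.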
Biology and health sciences
Basics_4
Biology
149353
https://en.wikipedia.org/wiki/Computational%20biology
Computational biology
Computational biology refers to the use of techniques in computer science, data analysis, mathematical modeling and computational simulations to understand biological systems and relationships. An intersection of computer science, biology, and data science, the field also has foundations in applied mathematics, molecular biology, cell biology, chemistry, and genetics. History Bioinformatics, the analysis of informatics processes in biological systems, began in the early 1970s. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data pushed biological researchers to use computers to evaluate and compare large data sets in their own field. By 1982, researchers shared information via punch cards. The amount of data grew exponentially by the end of the 1980s, requiring new computational methods for quickly interpreting relevant information. Perhaps the best-known example of computational biology, the Human Genome Project, officially began in 1990. By 2003, the project had mapped around 85% of the human genome, satisfying its initial goals. Work continued, however, and by 2021 a "complete genome" level was reached, with only 0.3% of the remaining bases covered by potential issues. The missing Y chromosome was added in January 2022. Since the late 1990s, computational biology has become an important part of biology, leading to numerous subfields. Today, the International Society for Computational Biology recognizes 21 different 'Communities of Special Interest', each representing a slice of the larger field. In addition to helping sequence the human genome, computational biology has helped create accurate models of the human brain, map the 3D structure of genomes, and model biological systems. Global contributions Colombia In 2000, despite a lack of initial expertise in programming and data management, Colombia began applying computational biology from an industrial perspective, focusing on plant diseases. This research has contributed to understanding how to counteract diseases in crops like potatoes and studying the genetic diversity of coffee plants. By 2007, concerns about alternative energy sources and global climate change prompted biologists to collaborate with systems and computer engineers. Together, they developed a robust computational network and database to address these challenges. In 2009, in partnership with the University of Los Angeles, Colombia also created a Virtual Learning Environment (VLE) to improve the integration of computational biology and bioinformatics. Poland In Poland, computational biology is closely linked to mathematics and computational science, serving as a foundation for bioinformatics and biological physics. The field is divided into two main areas: one focusing on physics and simulation and the other on biological sequences. The application of statistical models in Poland has advanced techniques for studying proteins and RNA, contributing to global scientific progress. Polish scientists have also been instrumental in evaluating protein prediction methods, significantly enhancing the field of computational biology. Over time, they have expanded their research to cover topics such as protein-coding analysis and hybrid structures, further solidifying Poland's influence on the development of bioinformatics worldwide. Applications Anatomy Computational anatomy is the study of anatomical shape and form at the visible or gross anatomical scale of morphology. 
It involves the development of computational mathematical and data-analytical methods for modeling and simulating biological structures. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging, computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morpheme scale in 3D. The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations. The diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow from one anatomical configuration in to another. It relates with shape statistics and morphometrics, with the distinction that diffeomorphisms are used to map coordinate systems, whose study is known as diffeomorphometry. Data and modeling Mathematical biology is the use of mathematical models of living organisms to examine the systems that govern structure, development, and behavior in biological systems. This entails a more theoretical approach to problems, rather than its more empirically-minded counterpart of experimental biology. Mathematical biology draws on discrete mathematics, topology (also useful for computational modeling), Bayesian statistics, linear algebra and Boolean algebra. These mathematical approaches have enabled the creation of databases and other methods for storing, retrieving, and analyzing biological data, a field known as bioinformatics. Usually, this process involves genetics and analyzing genes. Gathering and analyzing large datasets have made room for growing research fields such as data mining, and computational biomodeling, which refers to building computer models and visual simulations of biological systems. This allows researchers to predict how such systems will react to different environments, which is useful for determining if a system can "maintain their state and functions against external and internal perturbations". While current techniques focus on small biological systems, researchers are working on approaches that will allow for larger networks to be analyzed and modeled. A majority of researchers believe this will be essential in developing modern medical approaches to creating new drugs and gene therapy. A useful modeling approach is to use Petri nets via tools such as esyN. Along similar lines, until recent decades theoretical ecology has largely dealt with analytic models that were detached from the statistical models used by empirical ecologists. However, computational methods have aided in developing ecological theory via simulation of ecological systems, in addition to increasing application of methods from computational statistics in ecological analyses. Systems biology Systems biology consists of computing the interactions between various biological systems ranging from the cellular level to entire populations with the goal of discovering emergent properties. This process usually involves networking cell signaling and metabolic pathways. Systems biology often uses computational techniques from biological modeling and graph theory to study these complex interactions at cellular levels. 
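The modeling passage above mentions Petri nets as one way to build discrete models of biological systems, and the Systems biology section describes simulating networks of interacting species. The short sketch below is a minimal, hand-rolled illustration of that idea rather than a reproduction of any particular tool such as esyN: places hold token counts for molecular species, and a transition fires only when its input tokens are available. The toy reaction scheme (enzyme-substrate binding followed by conversion to product) and all counts are invented for the example.

    # Minimal Petri-net-style simulation of a toy reaction network.
    # Places are molecular species holding token counts; transitions consume and produce tokens.
    marking = {"E": 5, "S": 10, "ES": 0, "P": 0}  # hypothetical initial counts

    transitions = {
        "bind":    {"consume": {"E": 1, "S": 1}, "produce": {"ES": 1}},         # E + S -> ES
        "convert": {"consume": {"ES": 1},        "produce": {"E": 1, "P": 1}},  # ES -> E + P
    }

    def enabled(name):
        # A transition is enabled if every input place holds enough tokens.
        return all(marking[place] >= n for place, n in transitions[name]["consume"].items())

    def fire(name):
        # Firing removes the input tokens and adds the output tokens.
        for place, n in transitions[name]["consume"].items():
            marking[place] -= n
        for place, n in transitions[name]["produce"].items():
            marking[place] += n

    # Fire enabled transitions until the net reaches a dead marking.
    while True:
        runnable = [t for t in transitions if enabled(t)]
        if not runnable:
            break
        fire(runnable[0])

    print(marking)  # all substrate ends up as product: {'E': 5, 'S': 0, 'ES': 0, 'P': 10}

Real biochemical models would add rate constants and stochastic or timed firing rules, but this token game is the core of the Petri-net formalism used for such networks.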
Evolutionary biology Computational biology has assisted evolutionary biology by: Using DNA data to reconstruct the tree of life with computational phylogenetics Fitting population genetics models (either forward time or backward time) to DNA data to make inferences about demographic or selective history Building population genetics models of evolutionary systems from first principles in order to predict what is likely to evolve Genomics Computational genomics is the study of the genomes of cells and organisms. The Human Genome Project is one example of computational genomics. This project looks to sequence the entire human genome into a set of data. Once fully implemented, this could allow for doctors to analyze the genome of an individual patient. This opens the possibility of personalized medicine, prescribing treatments based on an individual's pre-existing genetic patterns. Researchers are looking to sequence the genomes of animals, plants, bacteria, and all other types of life. One of the main ways that genomes are compared is by sequence homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a common ancestor. Research suggests that between 80 and 90% of genes in newly sequenced prokaryotic genomes can be identified this way. Sequence alignment is another process for comparing and detecting similarities between biological sequences or genes. Sequence alignment is useful in a number of bioinformatics applications, such as computing the longest common subsequence of two genes or comparing variants of certain diseases. An untouched project in computational genomics is the analysis of intergenic regions, which comprise roughly 97% of the human genome. Researchers are working to understand the functions of non-coding regions of the human genome through the development of computational and statistical methods and via large consortia projects such as ENCODE and the Roadmap Epigenomics Project. Understanding how individual genes contribute to the biology of an organism at the molecular, cellular, and organism levels is known as gene ontology. The Gene Ontology Consortium's mission is to develop an up-to-date, comprehensive, computational model of biological systems, from the molecular level to larger pathways, cellular, and organism-level systems. The Gene Ontology resource provides a computational representation of current scientific knowledge about the functions of genes (or, more properly, the protein and non-coding RNA molecules produced by genes) from many different organisms, from humans to bacteria. 3D genomics is a subsection in computational biology that focuses on the organization and interaction of genes within a eukaryotic cell. One method used to gather 3D genomic data is through Genome Architecture Mapping (GAM). GAM measures 3D distances of chromatin and DNA in the genome by combining cryosectioning, the process of cutting a strip from the nucleus to examine the DNA, with laser microdissection. A nuclear profile is simply this strip or slice that is taken from the nucleus. Each nuclear profile contains genomic windows, which are certain sequences of nucleotides - the base unit of DNA. GAM captures a genome network of complex, multi enhancer chromatin contacts throughout a cell. Neuroscience Computational neuroscience is the study of brain function in terms of the information processing properties of the nervous system. 
A subset of neuroscience, it looks to model the brain to examine specific aspects of the neurological system. Models of the brain include: Realistic Brain Models: These models look to represent every aspect of the brain, including as much detail at the cellular level as possible. Realistic models provide the most information about the brain, but also have the largest margin for error. More variables in a brain model create the possibility for more error to occur. These models do not account for parts of the cellular structure that scientists do not know about. Realistic brain models are the most computationally heavy and the most expensive to implement. Simplifying Brain Models: These models look to limit the scope of a model in order to assess a specific physical property of the neurological system. This allows for the intensive computational problems to be solved, and reduces the amount of potential error from a realistic brain model. It is the work of computational neuroscientists to improve the algorithms and data structures currently used to increase the speed of such calculations. Computational neuropsychiatry is an emerging field that uses mathematical and computer-assisted modeling of brain mechanisms involved in mental disorders. Several initiatives have demonstrated that computational modeling is an important contribution to understand neuronal circuits that could generate mental functions and dysfunctions. Pharmacology Computational pharmacology is "the study of the effects of genomic data to find links between specific genotypes and diseases and then screening drug data". The pharmaceutical industry requires a shift in methods to analyze drug data. Pharmacologists were able to use Microsoft Excel to compare chemical and genomic data related to the effectiveness of drugs. However, the industry has reached what is referred to as the Excel barricade. This arises from the limited number of cells accessible on a spreadsheet. This development led to the need for computational pharmacology. Scientists and researchers develop computational methods to analyze these massive data sets. This allows for an efficient comparison between the notable data points and allows for more accurate drugs to be developed. Analysts project that if major medications fail due to patents, that computational biology will be necessary to replace current drugs on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take Post-Doctoral positions. This is a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs. Oncology Computational biology plays a crucial role in discovering signs of new, previously unknown living creatures and in cancer research. This field involves large-scale measurements of cellular processes, including RNA, DNA, and proteins, which pose significant computational challenges. To overcome these, biologists rely on computational tools to accurately measure and analyze biological data. In cancer research, computational biology aids in the complex analysis of tumor samples, helping researchers develop new ways to characterize tumors and understand various cellular properties. The use of high-throughput measurements, involving millions of data points from DNA, RNA, and other biological structures, helps in diagnosing cancer at early stages and in understanding the key factors that contribute to cancer development. 
Areas of focus include analyzing molecules that are deterministic in causing cancer and understanding how the human genome relates to tumor causation. Toxicology Computational toxicology is a multidisciplinary area of study, which is employed in the early stages of drug discovery and development to predict the safety and potential toxicity of drug candidates. Techniques Computational biologists use a wide range of software and algorithms to carry out their research. Unsupervised Learning Unsupervised learning is a type of algorithm that finds patterns in unlabeled data. One example is k-means clustering, which aims to partition n data points into k clusters, in which each data point belongs to the cluster with the nearest mean. Another version is the k-medoids algorithm, which, when selecting a cluster center or cluster centroid, will pick one of its data points in the set, and not just an average of the cluster. The algorithm follows these steps: Randomly select k distinct data points. These are the initial clusters. Measure the distance between each point and each of the 'k' clusters. (This is the distance of the points from each point k). Assign each point to the nearest cluster. Find the center of each cluster (medoid). Repeat until the clusters no longer change. Assess the quality of the clustering by adding up the variation within each cluster. Repeat the processes with different values of k. Pick the best value for 'k' by finding the "elbow" in the plot of which k value has the lowest variance. One example of this in biology is used in the 3D mapping of a genome. Information of a mouse's HIST1 region of chromosome 13 is gathered from Gene Expression Omnibus. This information contains data on which nuclear profiles show up in certain genomic regions. With this information, the Jaccard distance can be used to find a normalized distance between all the loci. Graph Analytics Graph analytics, or network analysis, is the study of graphs that represent connections between different objects. Graphs can represent all kinds of networks in biology such as protein-protein interaction networks, regulatory networks, Metabolic and biochemical networks and much more. There are many ways to analyze these networks. One of which is looking at centrality in graphs. Finding centrality in graphs assigns nodes rankings to their popularity or centrality in the graph. This can be useful in finding which nodes are most important. For example, given data on the activity of genes over a time period, degree centrality can be used to see what genes are most active throughout the network, or what genes interact with others the most throughout the network. This contributes to the understanding of the roles certain genes play in the network. There are many ways to calculate centrality in graphs all of which can give different kinds of information on centrality. Finding centralities in biology can be applied in many different circumstances, some of which are gene regulatory, protein interaction and metabolic networks. Supervised Learning Supervised learning is a type of algorithm that learns from labeled data and learns how to assign labels to future data that is unlabeled. In biology supervised learning can be helpful when we have data that we know how to categorize and we would like to categorize more data into those categories. A common supervised learning algorithm is the random forest, which uses numerous decision trees to train a model to classify a dataset. 
Forming the basis of the random forest, a decision tree is a structure which aims to classify, or label, some set of data using certain known features of that data. A practical biological example of this would be taking an individual's genetic data and predicting whether or not that individual is predisposed to develop a certain disease or cancer. At each internal node the algorithm checks the dataset for exactly one feature, a specific gene in the previous example, and then branches left or right based on the result. Then at each leaf node, the decision tree assigns a class label to the dataset. So in practice, the algorithm walks a specific root-to-leaf path based on the input dataset through the decision tree, which results in the classification of that dataset. Commonly, decision trees have target variables that take on discrete values, like yes/no, in which case it is referred to as a classification tree, but if the target variable is continuous then it is called a regression tree. To construct a decision tree, it must first be trained using a training set to identify which features are the best predictors of the target variable. Open source software Open source software provides a platform for computational biology where everyone can access and benefit from software developed in research. PLOS cites four main reasons for the use of open source software: Reproducibility: This allows for researchers to use the exact methods used to calculate the relations between biological data. Faster development: developers and researchers do not have to reinvent existing code for minor tasks. Instead they can use pre-existing programs to save time on the development and implementation of larger projects. Increased quality: Having input from multiple researchers studying the same topic provides a layer of assurance that errors will not be in the code. Long-term availability: Open source programs are not tied to any businesses or patents. This allows for them to be posted to multiple web pages and ensure that they are available in the future. Research There are several large conferences that are concerned with computational biology. Some notable examples are Intelligent Systems for Molecular Biology, European Conference on Computational Biology and Research in Computational Molecular Biology. There are also numerous journals dedicated to computational biology. Some notable examples include Journal of Computational Biology and PLOS Computational Biology, a peer-reviewed open access journal that has many notable research projects in the field of computational biology. They provide reviews on software, tutorials for open source software, and display information on upcoming computational biology conferences. Other journals relevant to this field include Bioinformatics, Computers in Biology and Medicine, BMC Bioinformatics, Nature Methods, Nature Communications, Scientific Reports, PLOS One, etc. Related fields Computational biology, bioinformatics and mathematical biology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science. The NIH describes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data. 
While each field is distinct, there may be significant overlap at their interface, so much so that to many, bioinformatics and computational biology are terms that are used interchangeably. The terms computational biology and evolutionary computation have similar names, but are not to be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data. It instead creates algorithms based on the ideas of evolution across species. Sometimes referred to as genetic algorithms, the research of this field can be applied to computational biology. While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it.
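The Techniques section above lists the steps of k-medoids clustering. As a minimal sketch of those steps (using plain Euclidean distance on invented two-dimensional points; a real analysis might instead use the Jaccard distance mentioned above, and would compare several values of k to find the "elbow"), one possible implementation is:

    import math
    import random

    def k_medoids(points, k, rounds=100, seed=0):
        # Cluster points around k medoids, each of which is itself one of the data points.
        rng = random.Random(seed)
        medoids = rng.sample(points, k)  # step 1: pick k distinct data points as initial medoids

        for _ in range(rounds):
            # Steps 2-3: assign every point to its nearest medoid.
            clusters = {m: [] for m in medoids}
            for p in points:
                nearest = min(medoids, key=lambda m: math.dist(p, m))
                clusters[nearest].append(p)

            # Step 4: within each cluster, move the medoid to the member that
            # minimises the summed distance to the other members.
            new_medoids = [min(members, key=lambda c: sum(math.dist(c, p) for p in members))
                           for members in clusters.values()]

            if set(new_medoids) == set(medoids):  # step 5: stop once the medoids no longer change
                break
            medoids = new_medoids

        # Step 6: total within-cluster variation, the quantity compared across values of k.
        cost = sum(min(math.dist(p, m) for m in medoids) for p in points)
        return medoids, cost

    # Hypothetical two-dimensional points forming two loose groups.
    data = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9), (5.0, 5.1), (5.2, 4.8), (4.9, 5.3)]
    print(k_medoids(data, k=2))

The closely related k-means algorithm (for example, scikit-learn's KMeans) differs mainly in using cluster means rather than actual data points as centers.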
Biology and health sciences
Biology basics
Biology
149354
https://en.wikipedia.org/wiki/Information%20science
Information science
Information science is an academic field which is primarily concerned with analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information. Practitioners within and outside the field study the application and the usage of knowledge in organizations in addition to the interaction between people, organizations, and any existing information systems with the aim of creating, replacing, improving, or understanding the information systems. Historically, information science is associated with informatics, computer science, data science, psychology, technology, documentation science, library science, healthcare, and intelligence agencies. However, information science also incorporates aspects of diverse fields such as archival science, cognitive science, commerce, law, linguistics, museology, management, mathematics, philosophy, public policy, and social sciences. Foundations Scope and approach Information science focuses on understanding problems from the perspective of the stakeholders involved and then applying information and other technologies as needed. In other words, it tackles systemic problems first rather than individual pieces of technology within that system. In this respect, one can see information science as a response to technological determinism, the belief that technology "develops by its own laws, that it realizes its own potential, limited only by the material resources available and the creativity of its developers. It must therefore be regarded as an autonomous system controlling and ultimately permeating all other subsystems of society." Many universities have entire colleges, departments or schools devoted to the study of information science, while numerous information-science scholars work in disciplines such as communication, healthcare, computer science, law, and sociology. Several institutions have formed an I-School Caucus (see List of I-Schools), but numerous others besides these also have comprehensive information foci. Within information science, current issues include: Human–computer interaction for science Groupware The Semantic Web Value sensitive design Iterative design processes The ways people generate, use and find information Definitions The first known usage of the term "information science" was in 1955. An early definition of Information science (going back to 1968, the year when the American Documentation Institute renamed itself as the American Society for Information Science and Technology) states: "Information science is that discipline that investigates the properties and behavior of information, the forces governing the flow of information, and the means of processing information for optimum accessibility and usability. It is concerned with that body of knowledge relating to the origination, collection, organization, storage, retrieval, interpretation, transmission, transformation, and utilization of information. This includes the authenticity of information representations in both natural and artificial systems, the use of codes for efficient message transmission, and the study of information processing devices and techniques such as computers and their programming systems. It is an interdisciplinary science derived from and related to such fields as mathematics, logic, linguistics, psychology, computer technology, operations research, the graphic arts, communications, management, and other similar fields. 
It has both a pure science component, which inquires into the subject without regard to its application, and an applied science component, which develops services and products." Related terms Some authors use informatics as a synonym for information science. This is especially true when related to the concept developed by A. I. Mikhailov and other Soviet authors in the mid-1960s. The Mikhailov school saw informatics as a discipline related to the study of scientific information. Informatics is difficult to precisely define because of the rapidly evolving and interdisciplinary nature of the field. Definitions reliant on the nature of the tools used for deriving meaningful information from data are emerging in Informatics academic programs. Regional differences and international terminology complicate the problem. Some people note that much of what is called "Informatics" today was once called "Information Science" – at least in fields such as Medical Informatics. For example, when library scientists began also to use the phrase "Information Science" to refer to their work, the term "informatics" emerged: in the United States, as a response by computer scientists to distinguish their work from that of library science; and in Britain, as a term for a science of information that studies natural, as well as artificial or engineered, information-processing systems. Another term discussed as a synonym for "information studies" is "information systems". Brian Campbell Vickery's Information Systems (1973) placed information systems within IS. Another study, on the other hand, provided a bibliometric investigation describing the relation between two different fields: "information science" and "information systems". Philosophy of information Philosophy of information studies conceptual issues arising at the intersection of psychology, computer science, information technology, and philosophy. It includes the investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, as well as the elaboration and application of information-theoretic and computational methodologies to its philosophical problems. Robert Hammarberg pointed out that there is no coherent distinction between information and data: "an Information Processing System (IPS) cannot process data except in terms of whatever representational language is inherent to it, [so] data could not even be apprehended by an IPS without becoming representational in nature, and thus losing their status of being raw, brute, facts." Ontology In science and information science, an ontology formally represents knowledge as a set of concepts within a domain, and the relationships between those concepts. It can be used to reason about the entities within that domain and may be used to describe the domain. More specifically, an ontology is a model for describing the world that consists of a set of types, properties, and relationship types. Exactly what is provided around these varies, but they are the essentials of an ontology. There is also generally an expectation that there be a close resemblance between the real world and the features of the model in an ontology (Garshol, 2004).
In theory, an ontology is a "formal, explicit specification of a shared conceptualisation". An ontology renders shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also essential to the definition and use of an enterprise architecture framework. Science or discipline? Authors such as Ingwersen argue that informatology has problems defining its own boundaries with other disciplines. According to Popper, "Information science operates busily on an ocean of commonsense practical applications, which increasingly involve the computer ... and on commonsense views of language, of communication, of knowledge and Information, computer science is in little better state". Other authors, such as Furner, deny that information science is a true science. Careers Information scientist An information scientist is an individual, usually with a relevant subject degree or high level of subject knowledge, who provides focused information to scientific and technical research staff in industry or to subject faculty and students in academia. The industry information specialist/scientist and the academic information subject specialist/librarian have, in general, similar subject background training, but the academic position holder will be required to hold a second advanced degree (MLS/MI/MA in IS, e.g.) in information and library studies in addition to a subject master's. The title also applies to an individual carrying out research in information science. Systems analyst A systems analyst works on creating, designing, and improving information systems for a specific need. Often systems analysts work with one or more businesses to evaluate and implement organizational processes and techniques for accessing information in order to improve efficiency and productivity within the organization(s). Information professional An information professional is an individual who preserves, organizes, and disseminates information. Information professionals are skilled in the organization and retrieval of recorded knowledge. Traditionally, their work has been with print materials, but these skills are being increasingly used with electronic, visual, audio, and digital materials. Information professionals work in a variety of public, private, non-profit, and academic institutions, and can also be found within organisational and industrial contexts, performing roles that include system design and development and systems analysis. History Early beginnings Information science, in studying the collection, classification, manipulation, storage, retrieval and dissemination of information, has origins in the common stock of human knowledge. Information analysis has been carried out by scholars at least as early as the time of the Assyrian Empire with the emergence of cultural depositories, what is today known as libraries and archives. 
Institutionally, information science emerged in the 19th century along with many other social science disciplines. As a science, however, it finds its institutional roots in the history of science, beginning with publication of the first issues of Philosophical Transactions, generally considered the first scientific journal, in 1665 by the Royal Society (London). The institutionalization of science occurred throughout the 18th century. In 1731, Benjamin Franklin established the Library Company of Philadelphia, the first library owned by a group of public citizens, which quickly expanded beyond the realm of books and became a center of scientific experimentation, and which hosted public exhibitions of scientific experiments. Benjamin Franklin invested a town in Massachusetts with a collection of books that the town voted to make available to all free of charge, forming the first public library of the United States. Academie de Chirurgia (Paris) published Memoires pour les Chirurgiens, generally considered to be the first medical journal, in 1736. The American Philosophical Society, patterned on the Royal Society (London), was founded in Philadelphia in 1743. As numerous other scientific journals and societies were founded, Alois Senefelder developed the concept of lithography for use in mass printing work in Germany in 1796. 19th century By the 19th century the first signs of information science emerged as separate and distinct from other sciences and social sciences but in conjunction with communication and computation. In 1801, Joseph Marie Jacquard invented a punched card system to control operations of the cloth weaving loom in France. It was the first use of "memory storage of patterns" system. As chemistry journals emerged throughout the 1820s and 1830s, Charles Babbage developed his "difference engine", the first step towards the modern computer, in 1822 and his "analytical engine" by 1834. By 1843 Richard Hoe developed the rotary press, and in 1844 Samuel Morse sent the first public telegraph message. By 1848 William F. Poole begins the Index to Periodical Literature, the first general periodical literature index in the US. In 1854 George Boole published An Investigation into Laws of Thought..., which lays the foundations for Boolean algebra, which is later used in information retrieval. In 1860 a congress was held at Karlsruhe Technische Hochschule to discuss the feasibility of establishing a systematic and rational nomenclature for chemistry. The congress did not reach any conclusive results, but several key participants returned home with Stanislao Cannizzaro's outline (1858), which ultimately convinces them of the validity of his scheme for calculating atomic weights. By 1865, the Smithsonian Institution began a catalog of current scientific papers, which became the International Catalogue of Scientific Papers in 1902. The following year the Royal Society began publication of its Catalogue of Papers in London. In 1868, Christopher Sholes, Carlos Glidden, and S. W. Soule produced the first practical typewriter. By 1872 Lord Kelvin devised an analogue computer to predict the tides, and by 1875 Frank Stephen Baldwin was granted the first US patent for a practical calculating machine that performs four arithmetic functions. Alexander Graham Bell and Thomas Edison invented the telephone and phonograph in 1876 and 1877 respectively, and the American Library Association was founded in Philadelphia. In 1879 Index Medicus was first issued by the Library of the Surgeon General, U.S. 
Army, with John Shaw Billings as librarian; the library later issued the Index Catalogue, which achieved an international reputation as the most complete catalog of medical literature. European documentation The discipline of documentation science, which marks the earliest theoretical foundations of modern information science, emerged in the late part of the 19th century in Europe together with several more scientific indexes whose purpose was to organize scholarly literature. Many information science historians cite Paul Otlet and Henri La Fontaine as the fathers of information science with the founding of the International Institute of Bibliography (IIB) in 1895. A second generation of European Documentalists emerged after the Second World War, most notably Suzanne Briet. However, "information science" as a term was not popularly used in academia until the latter part of the 20th century. Documentalists emphasized the utilitarian integration of technology and technique toward specific social goals. According to Ronald Day, "As an organized system of techniques and technologies, documentation was understood as a player in the historical development of global organization in modernity – indeed, a major player inasmuch as that organization was dependent on the organization and transmission of information." Otlet and La Fontaine (the latter of whom won the Nobel Peace Prize in 1913) not only envisioned later technical innovations but also projected a global vision for information and information technologies that speaks directly to postwar visions of a global "information society". Otlet and La Fontaine established numerous organizations dedicated to standardization, bibliography, international associations, and consequently, international cooperation. These organizations were fundamental for ensuring international production in commerce, information, communication and modern economic development, and they later found their global form in such institutions as the League of Nations and the United Nations. Otlet designed the Universal Decimal Classification, based on Melvil Dewey's decimal classification system. Although he lived decades before computers and networks emerged, what he discussed prefigured what ultimately became the World Wide Web. His vision of a great network of knowledge focused on documents and included the notions of hyperlinks, search engines, remote access, and social networks. Otlet not only imagined that all the world's knowledge should be interlinked and made available remotely to anyone, but he also proceeded to build a structured document collection. This collection involved standardized paper sheets and cards filed in custom-designed cabinets according to a hierarchical index (which culled information worldwide from diverse sources) and a commercial information retrieval service (which answered written requests by copying relevant information from index cards). Users of this service were even warned if their query was likely to produce more than 50 results per search. By 1937 documentation had formally been institutionalized, as evidenced by the founding of the American Documentation Institute (ADI), later called the American Society for Information Science and Technology. Transition to modern information science With the 1950s came increasing awareness of the potential of automatic devices for literature searching and information storage and retrieval. As these concepts grew in magnitude and potential, so did the variety of information science interests. 
By the 1960s and 70s, there was a move from batch processing to online modes, from mainframe to mini and microcomputers. Additionally, traditional boundaries among disciplines began to fade and many information science scholars joined with other programs. They further made themselves multidisciplinary by incorporating disciplines in the sciences, humanities and social sciences, as well as other professional programs, such as law and medicine, in their curricula. Among the individuals who had distinct opportunities to facilitate interdisciplinary activity targeted at scientific communication was Foster E. Mohrhardt, director of the National Agricultural Library from 1954 to 1968. By the 1980s, large databases, such as Grateful Med at the National Library of Medicine, and user-oriented services such as Dialog and CompuServe, were for the first time accessible by individuals from their personal computers. The 1980s also saw the emergence of numerous special interest groups to respond to the changes. By the end of the decade, special interest groups were available involving non-print media, social sciences, energy and the environment, and community information systems. Today, information science largely examines technical bases, social consequences, and theoretical understanding of online databases, widespread use of databases in government, industry, and education, and the development of the Internet and World Wide Web. Information dissemination in the 21st century Changing definition Dissemination has historically been interpreted as unilateral communication of information. With the advent of the internet, and the explosion in popularity of online communities, social media has changed the information landscape in many respects. It creates both new modes of communication and new types of information, changing the interpretation of the definition of dissemination. The nature of social networks allows for faster diffusion of information than through organizational sources. The internet has changed the way we view, use, create, and store information; now it is time to re-evaluate the way we share and spread it. Impact of social media on people and industry Social media networks provide an open information environment for the mass of people who have limited time or access to traditional outlets of information diffusion; this is an "increasingly mobile and social world [that] demands...new types of information skills". Social media integration as an access point is a very useful and mutually beneficial tool for users and providers. All major news providers have visibility and an access point through networks such as Facebook and Twitter, maximizing their breadth of audience. Through social media people are directed to, or provided with, information by people they know. The ability to "share, like, and comment on...content" extends the reach of information farther and wider than traditional methods. People like to interact with information; they enjoy including the people they know in their circle of knowledge. Sharing through social media has become so influential that publishers must "play nice" if they desire to succeed. It is, however, often mutually beneficial for publishers and Facebook to "share, promote and uncover new content" to improve the experience of both user bases. The impact of popular opinion can spread in unimaginable ways. 
Social media allows interaction through tools that are simple to learn and access; The Wall Street Journal offers an app through Facebook, and The Washington Post goes a step further and offers an independent social app that was downloaded by 19.5 million users in six months, demonstrating how interested people are in this new way of receiving information. Social media's power to facilitate topics The connections and networks sustained through social media help information providers learn what is important to people. The connections people have throughout the world enable the exchange of information at an unprecedented rate. It is for this reason that these networks have come to be recognized for the potential they provide. "Most news media monitor Twitter for breaking news", and news anchors frequently ask their audiences to tweet pictures of events. The users and viewers of the shared information have earned "opinion-making and agenda-setting power". This channel has been recognized for the usefulness of providing targeted information based on public demand. Research sectors and applications The following areas are some of those that information science investigates and develops. Information access Information access is an area of research at the intersection of informatics, information science, information security, language technology, and computer science. The objectives of information access research are to automate the processing of large and unwieldy amounts of information and to simplify users' access to it. A related concern is the assignment of privileges and the restriction of access by unauthorized users; the extent of access should be defined by the level of clearance granted for the information. Applicable technologies include information retrieval, text mining, text editing, machine translation, and text categorisation. In discussion, information access is often framed as concerning the assurance of free, closed, or public access to information, and is brought up in discussions of copyright, patent law, and the public domain. Public libraries need resources to provide knowledge of information assurance. Information architecture Information architecture (IA) is the art and science of organizing and labelling websites, intranets, online communities and software to support usability. It is an emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape. Typically it involves a model or concept of information which is used and applied to activities that require explicit details of complex information systems. These activities include library systems and database development. Information management Information management (IM) is the collection and management of information from one or more sources and the distribution of that information to one or more audiences. This sometimes involves those who have a stake in, or a right to, that information. Management means the organization of and control over the structure, processing and delivery of information. Throughout the 1970s this was largely limited to files, file maintenance, and the life cycle management of paper-based files, other media and records. With the proliferation of information technology starting in the 1970s, the job of information management took on a new light and also began to include the field of data maintenance. 
Information retrieval Information retrieval (IR) is the area of study concerned with searching for documents, for information within documents, and for metadata about documents, as well as that of searching structured storage, relational databases, and the World Wide Web. Automated information retrieval systems are used to reduce what has been called "information overload". Many universities and public libraries use IR systems to provide access to books, journals and other documents. Web search engines are the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevancy. An object is an entity that is represented by information in a database. User queries are matched against the database information. Depending on the application the data objects may be, for example, text documents, images, audio, mind maps or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata. Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query. Information seeking Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Information seeking is related to, but different from, information retrieval (IR). Much library and information science (LIS) research has focused on the information-seeking practices of practitioners within various fields of professional work. Studies have been carried out into the information-seeking behaviors of librarians, academics, medical professionals, engineers and lawyers (among others). Much of this research has drawn on the work done by Leckie, Pettigrew (now Fisher) and Sylvain, who in 1996 conducted an extensive review of the LIS literature (as well as the literature of other academic fields) on professionals' information seeking. The authors proposed an analytic model of professionals' information seeking behaviour, intended to be generalizable across the professions, thus providing a platform for future research in the area. The model was intended to "prompt new insights... and give rise to more refined and applicable theories of information seeking". The model has since been adapted, for example into a model of the information seeking of lawyers. Recent studies on this topic address the concept of information-gathering that "provides a broader perspective that adheres better to professionals' work-related reality and desired skills." Information society An information society is a society where the creation, distribution, diffusion, use, integration and manipulation of information is a significant economic, political, and cultural activity. The aim of an information society is to gain competitive advantage internationally, through using IT in a creative and productive way. The knowledge economy is its economic counterpart, whereby wealth is created through the economic exploitation of understanding. 
People who have the means to partake in this form of society are sometimes called digital citizens. In essence, an information society is a means of getting information from one place to another. As technology has advanced, so too have the ways in which we share this information with each other. Information society theory discusses the role of information and information technology in society, the question of which key concepts should be used for characterizing contemporary society, and how to define such concepts. It has become a specific branch of contemporary sociology. Knowledge representation and reasoning Knowledge representation (KR) is an area of artificial intelligence research aimed at representing knowledge in symbols to facilitate inferencing from those knowledge elements, creating new elements of knowledge. A KR can be made independent of the underlying knowledge model or knowledge base system (KBS), such as a semantic network. KR research involves the analysis of how to reason accurately and effectively and of how best to use a set of symbols to represent a set of facts within a knowledge domain. A symbol vocabulary and a system of logic are combined to enable inferences about elements in the KR to create new KR sentences. Logic is used to supply formal semantics for how reasoning functions should be applied to the symbols in the KR system. Logic is also used to define how operators can process and reshape the knowledge. Examples of operators and operations include negation, conjunction, adverbs, adjectives, quantifiers and modal operators. The logic supplies the interpretation theory. These elements—symbols, operators, and interpretation theory—are what give sequences of symbols meaning within a KR.
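As a rough illustration of these ideas, the sketch below encodes a handful of facts as symbols and applies rules with a simple forward-chaining loop to derive new knowledge elements. It is a minimal toy, not a description of any particular KR system; the symbols and rules are invented for the example.

```python
# A toy knowledge representation: facts are symbols, and each rule pairs a set
# of premise symbols with a conclusion symbol. Forward chaining repeatedly
# applies any rule whose premises are already known, adding new knowledge
# elements until nothing further can be derived.
# The vocabulary below is invented purely for illustration.

facts = {"tweety_is_a_bird"}

rules = [
    ({"tweety_is_a_bird"}, "tweety_has_wings"),
    ({"tweety_has_wings"}, "tweety_can_fly"),
]

def forward_chain(known, inference_rules):
    """Derive every conclusion reachable from the known facts."""
    derived = set(known)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in inference_rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['tweety_can_fly', 'tweety_has_wings', 'tweety_is_a_bird']
```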
Physical sciences
Basics
null
149422
https://en.wikipedia.org/wiki/Hunger
Hunger
In politics, humanitarian aid, and the social sciences, hunger is defined as a condition in which a person does not have the physical or financial capability to eat sufficient food to meet basic nutritional needs for a sustained period. In the field of hunger relief, the term hunger is used in a sense that goes beyond the common desire for food that all humans experience, also known as an appetite. The most extreme form of hunger, when malnutrition is widespread, and when people have started dying of starvation through lack of access to sufficient, nutritious food, leads to a declaration of famine. Throughout history, portions of the world's population have often suffered sustained periods of hunger. In many cases, hunger resulted from food supply disruptions caused by war, plagues, or adverse weather. In the decades following World War II, technological progress and enhanced political cooperation suggested it might be possible to substantially reduce the number of people suffering from hunger. While progress was uneven, by 2015, the threat of extreme hunger had receded for a large portion of the world's population. According to the FAO's 2023 The State of Food Security and Nutrition in the World report, this positive trend had reversed from about 2017, when a gradual rise in number of people suffering from chronic hunger became discernible. In 2020 and 2021, due to the COVID-19 pandemic, there was an increase in the number of people suffering from undernourishment. A recovery occurred in 2022 along with the economic rebound, though the impact on global food markets caused by the invasion of Ukraine meant the reduction in world hunger was limited. While most of the world's people continue to live in Asia, much of the increase in hunger since 2017 occurred in Africa and South America. The FAO's 2017 report discussed three principal reasons for the recent increase in hunger: climate, conflict, and economic slowdowns. The 2018 edition focused on extreme weather as a primary driver of the increase in hunger, finding rising rates to be especially severe in countries where agricultural systems were most sensitive to extreme weather variations. The 2019 SOFI report found a strong correlation between increases in hunger and countries that had suffered an economic slowdown. The 2020 edition instead looked at the prospects of achieving the hunger related Sustainable Development Goal (SDG). It warned that if nothing was done to counter the adverse trends of the past six years, the number of people suffering from chronic hunger could rise by over 150 million by 2030. The 2023 report reported a sharp jump in hunger caused by the COVID-19 pandemic, which leveled off in 2022. Many thousands of organizations are engaged in the field of hunger relief, operating at local, national, regional, or international levels. Some of these organizations are dedicated to hunger relief, while others may work in several different fields. The organizations range from multilateral institutions to national governments, to small local initiatives such as independent soup kitchens. Many participate in umbrella networks that connect thousands of different hunger relief organizations. At the global level, much of the world's hunger relief efforts are coordinated by the UN and geared towards achieving SDG 2 of Zero Hunger by 2030. Definition and related terms There is one globally recognized approach for defining and measuring hunger generally used by those studying or working to relieve hunger as a social problem. 
This is the United Nations' FAO measurement, which is typically referred to as chronic undernourishment (or in older publications, as 'food deprivation,' 'chronic hunger,' or just plain 'hunger.') For the FAO: Hunger or chronic undernourishment exists when "caloric intake is below the minimum dietary energy requirement (MDER). The MDER is the amount of energy needed to perform light activity and to maintain a minimum acceptable weight for attained height." The FAO use different MDER thresholds for different countries, due to variations in climate and cultural factors. Typically a yearly "balance sheet" approach is used, with the minimum dietary energy requirement tallied against the estimated total calories consumed over the year. The FAO definitions differentiate hunger from malnutrition and food insecurity: Malnutrition results from "deficiencies, excesses or imbalances in the consumption of macro- and/or micro-nutrients." In the FAO definition, all hungry people suffer from malnutrition, but people who are malnourished may not be hungry. They may get sufficient raw calories to avoid hunger but lack essential micronutrients, or they may even consume an excess of raw calories and hence suffer from obesity. Food insecurity occurs when people are at risk of, or worried about, not being able to meet their preferences for food, including in terms of raw calories and nutritional value. In the FAO definition, all hungry people are food insecure, but not all food-insecure people are hungry (though there is a very strong overlap between hunger and severe food insecurity). The FAO have reported that food insecurity quite often results in simultaneous stunted growth for children and obesity for adults. For hunger relief actors operating at the global or regional level, an increasingly commonly used metric for food insecurity is the IPC scale. Acute hunger is typically used to denote famine-like hunger, though the phrase lacks a widely accepted formal definition. In the context of hunger relief, people experiencing 'acute hunger' may also suffer from 'chronic hunger'. The term is used mainly to denote severity, not long-term duration. Not all of the organizations in the hunger relief field use the FAO definition of hunger. Some use a broader definition that overlaps more fully with malnutrition. The alternative definitions do, however, tend to go beyond the commonly understood meaning of hunger as a painful or uncomfortable motivational condition; the desire for food is something that all humans frequently experience, even the most affluent, and is not in itself a social problem. Very low food supply can be described as "food insecure with hunger." A change in description was made in 2006 at the recommendation of the Committee on National Statistics (National Research Council, 2006) in order to distinguish the physiological state of hunger from indicators of food availability. A household is food insecure when the food intake of one or more household members is reduced and their eating patterns are disrupted at times during the year because the household lacks money and other resources for food. Food security statistics are gathered using survey data, based on household responses to items about whether the household was able to obtain enough food to meet its needs. World statistics The United Nations publishes an annual report on the state of food security and nutrition across the world. Led by the FAO, the report was jointly authored with four other UN agencies: the WFP, IFAD, WHO and UNICEF. 
The theme of the 2024 report is how efforts to meet SDG targets 2.1 and 2.2 can be financed. The FAO's yearly report provides a statistical overview of the prevalence of hunger around the world, and is widely considered the main global reference for tracking hunger. No simple set of statistics can ever fully capture the multidimensional nature of hunger, however. First, the FAO's key metric for hunger, "undernourishment", is defined solely in terms of dietary energy availability, disregarding micronutrients such as vitamins or minerals. Second, the FAO uses the energy requirements for minimum activity levels as a benchmark; many people would not count as hungry by the FAO's measure yet still be eating too little to undertake hard manual labour, which might be the only sort of work available to them. Third, the FAO statistics do not always reflect short-term undernourishment. An alternative measure of hunger across the world is the Global Hunger Index (GHI). Unlike the FAO's measure, the GHI defines hunger in a way that goes beyond raw calorie intake, to include, for example, the ingestion of micronutrients. The GHI is a multidimensional statistical tool used to describe the state of countries' hunger situations. The GHI measures progress and failures in the global fight against hunger. The GHI is updated once a year. The data from the 2015 report showed that hunger levels had dropped 27% since 2000, while fifty-two countries remained at serious or alarming levels. The 2019 GHI report expresses concern about the increase in hunger since 2015. In addition to the latest statistics on hunger and food security, the GHI also features different special topics each year. The 2019 report includes an essay on hunger and climate change, with evidence suggesting that areas most vulnerable to climate change have suffered much of the recent increases in hunger. The fight against hunger Pre-World War II Throughout history, the need to aid those suffering from hunger has been commonly, though not universally, recognized. The philosopher Simone Weil wrote that feeding the hungry when you have resources to do so is the most obvious of all human obligations. She noted that as far back as Ancient Egypt, many believed that people had to show they had helped the hungry in order to justify themselves in the afterlife. Weil wrote that social progress is commonly held to be first of all, "...a transition to a state of human society in which people will not suffer from hunger." Social historian Karl Polanyi wrote that before markets became the world's dominant form of economic organization in the 19th century, most human societies would either starve all together or not at all, because communities would invariably share their food. While some of the principles for avoiding famines had been laid out in the first book of the Bible, they were not always understood. Historical hunger relief efforts were often largely left to religious organizations and individual kindness. Even up to early modern times, political leaders often reacted to famine with bewilderment and confusion. From the first age of globalization, which began in the 19th century, it became more common for the elite to consider problems like hunger in global terms. However, as early globalization largely coincided with the high peak of influence for classical liberalism, there was relatively little call for politicians to address world hunger. 
In the late nineteenth and early twentieth century, the view that politicians ought not to intervene against hunger was increasingly challenged by campaigning journalists. There were also more frequent calls for large-scale intervention against world hunger from academics and politicians, such as U.S. President Woodrow Wilson. Funded both by the government and private donations, the U.S. was able to dispatch millions of tons of food aid to European countries during and in the years immediately after WWI, organized by agencies such as the American Relief Administration. Hunger as an academic and social topic came to further prominence in the U.S. thanks to mass media coverage of the issue as a domestic problem during the Great Depression. Efforts after World War II While there had been increasing attention to hunger relief from the late 19th century, Dr David Grigg has summarised that prior to the end of World War II, world hunger still received relatively little academic or political attention; whereas after 1945 there was an explosion of interest in the topic. After World War II, a new international politico-economic order came into being, which was later described as embedded liberalism. For at least the first decade after the war, the United States, then by far the period's most dominant national actor, was strongly supportive of efforts to tackle world hunger and to promote international development. It heavily funded the United Nations' development programmes, and later the efforts of other multilateral organizations like the International Monetary Fund (IMF) and the World Bank (WB). The newly established United Nations became a leading player in co-ordinating the global fight against hunger. The UN has three agencies that work to promote food security and agricultural development: the Food and Agriculture Organization (FAO), the World Food Programme (WFP) and the International Fund for Agricultural Development (IFAD). FAO is the world's agricultural knowledge agency, providing policy and technical assistance to developing countries to promote food security, nutrition and sustainable agricultural production, particularly in rural areas. WFP's key mission is to deliver food into the hands of the hungry poor. The agency steps in during emergencies and uses food to aid recovery after emergencies. Its longer-term approaches to hunger help the transition from recovery to development. IFAD, with its knowledge of rural poverty and exclusive focus on poor rural people, designs and implements programmes to help those people access the assets, services and opportunities they need to overcome poverty. Following successful post-WWII reconstruction of Germany and Japan, the IMF and WB began to turn their attention to the developing world. A great many civil society actors were also active in trying to combat hunger, especially after the late 1970s when global media began to bring the plight of starving people in places like Ethiopia to wider attention. Most significant of all, especially in the late 1960s and 70s, the Green Revolution helped improved agricultural technology propagate throughout the world. The United States began to change its approach to the problem of world hunger from about the mid 1950s. Influential members of the administration became less enthusiastic about methods they saw as promoting an over-reliance on the state, as they feared that might assist the spread of communism. 
By the 1980s, the previous consensus in favour of moderate government intervention had been displaced across the Western world. The IMF and World Bank in particular began to promote market-based solutions. In cases where countries became dependent on it, the IMF sometimes forced national governments to prioritize debt repayments and sharply cut public services. This sometimes had a negative effect on efforts to combat hunger. Organizations such as Food First raised the issue of food sovereignty and claimed that every country on earth (with the possible minor exceptions of some city-states) has sufficient agricultural capacity to feed its own people, but that the "free trade" economic order, which from the late 1970s to about 2008 had been associated with such institutions as the IMF and World Bank, had prevented this from happening. The World Bank itself claimed it was part of the solution to hunger, asserting that the best way for countries to break the cycle of poverty and hunger was to build export-led economies that provide the financial means to buy foodstuffs on the world market. However, in the early 21st century the World Bank and IMF became less dogmatic about promoting free market reforms. They increasingly returned to the view that government intervention does have a role to play, and that it can be advisable for governments to support food security with policies favourable to domestic agriculture, even for countries that do not have a comparative advantage in that area. As of 2012, the World Bank remains active in helping governments to intervene against hunger. Until at least the 1980s—and, to an extent, the 1990s—the dominant academic view concerning world hunger was that it was a problem of demand exceeding supply. Proposed solutions often focused on boosting food production, and sometimes on birth control. There were exceptions to this: even as early as the 1940s, Lord Boyd-Orr, the first head of the UN's FAO, had perceived hunger as largely a problem of distribution, and drew up comprehensive plans to correct this. Few agreed with him at the time, however, and he resigned after failing to secure support for his plans from the US and Great Britain. In 1998, Amartya Sen won a Nobel Prize in part for demonstrating that hunger in modern times is not typically the product of a lack of food. Rather, hunger usually arises from food distribution problems, or from governmental policies in the developed and developing world. It has since been broadly accepted that world hunger results from issues with the distribution as well as the production of food. Sen's 1981 essay Poverty and Famines: An Essay on Entitlement and Deprivation played a prominent part in forging the new consensus. In 2007 and 2008, rapidly increasing food prices caused a global food crisis. Food riots erupted in several dozen countries; in at least two cases, Haiti and Madagascar, this led to the toppling of governments. A second global food crisis unfolded due to the spike in food prices of late 2010 and early 2011. Fewer food riots occurred, due in part to greater availability of food stockpiles for relief. However, several analysts argue the food crisis was one of the causes of the Arab Spring. Efforts since the global 2008 crisis In the early 21st century, the attention paid to the problem of hunger by the leaders of advanced nations such as those that form the G8 had somewhat subsided. 
Prior to 2009, large-scale efforts to fight hunger were mainly undertaken by governments of the worst-affected countries, by civil society actors, and by multilateral and regional organizations. In 2009, Pope Benedict published his third encyclical, Caritas in Veritate, which emphasised the importance of fighting against hunger. The encyclical was intentionally published immediately before the July 2009 G8 Summit to maximise its influence on that event. At the Summit, which took place at L'Aquila in central Italy, the L'Aquila Food Security Initiative was launched, with a total of US$22 billion committed to combat hunger. Food prices fell sharply in 2009 and early 2010, though analysts credit this much more to farmers increasing production in response to the 2008 spike in prices than to the fruits of enhanced government action. However, since the 2009 G8 summit, the fight against hunger has become a high-profile issue among the leaders of the world's major nations and was a prominent part of the agenda for the 2012 G-20 summit. In April 2012, the Food Assistance Convention was signed, the world's first legally binding international agreement on food aid. The May 2012 Copenhagen Consensus recommended that efforts to combat hunger and malnutrition should be the first priority for politicians and private sector philanthropists looking to maximize the effectiveness of aid spending. They put this ahead of other priorities, like the fight against malaria and AIDS. Also in May 2012, U.S. President Barack Obama launched a "new alliance for food security and nutrition"—a broad partnership between private sector, governmental and civil society actors—that aimed to "...achieve sustained and inclusive agricultural growth and raise 50 million people out of poverty over the next 10 years." The UK's prime minister David Cameron held a hunger summit on 12 August, the last day of the 2012 Summer Olympics. The fight against hunger has also been joined by a growing number of ordinary people. While people throughout the world had long contributed to efforts to alleviate hunger in the developing world, there has recently been a rapid increase in the numbers involved in tackling domestic hunger even within the economically advanced nations of the Global North. This had happened much earlier in North America than it did in Europe. In the US, the Reagan administration scaled back welfare in the early 1980s, leading to a vast increase in charity-sector efforts to help Americans unable to buy enough to eat. According to a 1992 survey of 1000 randomly selected US voters, 77% of Americans had contributed to efforts to feed the hungry, either by volunteering for various hunger relief agencies such as food banks and soup kitchens, or by donating cash or food. Europe, with its more generous welfare systems, had little awareness of domestic hunger until the food price inflation that began in late 2006, and especially as austerity-imposed welfare cuts began to take effect in 2010. Various surveys reported that upwards of 10% of Europe's population had begun to suffer from food insecurity. Especially since 2011, there has been a substantial increase in grassroots efforts to help the hungry by means of food banks, both in the UK and in continental Europe. By July 2012, the 2012 US drought had already caused a rapid increase in the price of grain and soy, with a knock-on effect on the price of meat. 
As well as affecting hungry people in the US, this caused prices to rise on the global markets; the US is the world's biggest exporter of food. This led to much talk of a possible third 21st-century global food crisis. The Financial Times reported that the BRICS may not be as badly affected as they were in the earlier crises of 2008 and 2011. However, smaller developing countries that must import a substantial portion of their food could be hard hit. The UN and G20 have begun contingency planning so as to be ready to intervene if a third global crisis breaks out. By August 2013, however, concerns had been allayed, with above-average grain harvests expected from major exporters, including Japan, Brazil, Ukraine and the US. 2014 also saw a good worldwide harvest, leading to speculation that grain prices could soon begin to fall. At an April 2013 summit held in Dublin concerning Hunger, Nutrition, Climate Justice, and the post-2015 MDG framework for global justice, Ireland's President Higgins said that only 10% of deaths from hunger are due to armed conflict and natural disasters, with ongoing hunger being both the "greatest ethical failure of the current global system" and the "greatest ethical challenge facing the global community." $4.15 billion of new commitments were made to tackle hunger at a June 2013 Hunger Summit held in London, hosted by the governments of Britain and Brazil, together with The Children's Investment Fund Foundation. Despite the hardship caused by the 2007–2009 financial crisis and global increases in food prices that occurred around the same time, the UN's global statistics show it was followed by close to year-on-year reductions in the numbers suffering from hunger around the world. By 2019, however, evidence had mounted that this progress had gone into reverse over the previous four years. The numbers suffering from hunger had risen in absolute terms, and even, very slightly, as a percentage of the world's population. In 2019, the FAO published its annual edition of The State of Food and Agriculture, which asserted that food loss and waste have potential effects on food security and nutrition through changes in the four dimensions of food security: food availability, access, utilization and stability. However, the links between food loss and waste reduction and food security are complex, and positive outcomes are not always certain. Reaching acceptable levels of food security and nutrition inevitably implies certain levels of food loss and waste. Maintaining buffers to ensure food stability requires a certain amount of food to be lost or wasted. At the same time, ensuring food safety involves discarding unsafe food, which then is counted as lost or wasted, while higher-quality diets tend to include more highly perishable foods. How the impacts on the different dimensions of food security play out and affect the food security of different population groups depends on where in the food supply chain the reduction in losses or waste takes place, as well as on where nutritionally vulnerable and food-insecure people are located geographically. In April and May 2020, concerns were expressed that the COVID-19 pandemic could result in a doubling of global hunger unless world leaders acted to prevent this. Agencies such as the WFP warned that this could include the number of people facing acute hunger rising from 135 million to about 265 million by the end of 2020. 
Indications of extreme hunger were seen in various cities, such as fatal stampedes when word spread that emergency food aid was being handed out. Letters calling for co-ordinated action to offset the effects of the COVID-19 pandemic were written to the G20 and G7, by various actors including NGOs, UN staff, corporations, academics and former national leaders. The FAO found that 122 million more people experienced hunger in 2022 compared to 2019. Following the 2022 invasion of Ukraine, concerns have been raised over hunger resulting from rising food prices. This is forecast to risk civil unrest even in many middle income countries, where government capability to protect their populations was largely exhausted by the Covid pandemic, and has not yet recovered. Hunger relief organisations Many thousands of hunger relief organisations exist across the world. Some but not all are entirely dedicated to fighting hunger. They range from independent soup kitchens that serve only one locality, to global organisations. Organisations working at the global and regional level will often focus much of their efforts on helping hungry communities to better feed themselves, for example by sharing agricultural technology. With some exceptions, organisations that work just on the local level tend to focus more on providing food directly to hungry people. Many of the entities are connected by a web of national, regional and global alliances that help them share resources, knowledge, and coordinate efforts. Global The United Nations is central to global efforts to relieve hunger, most especially through the FAO, and also via other agencies: such as WFP, IFAD, WHO and UNICEF. After the Millennium Development Goals expired in 2015, the Sustainable Development Goals (SDGs) became key objectives to shape the world's response to development challenges such as hunger. In particular Goal 2: Zero Hunger sets globally agreed targets to end hunger, achieve food security and improved nutrition and promote sustainable agriculture. Aside from the UN agencies themselves, hundreds of other actors address the problem of hunger on the global level, often involving participation in large umbrella organisations. These include national governments, religious groups, international charities and in some cases international corporations. Though except perhaps in the cases of dedicated charities, the priority these organisations assign to hunger relief may vary from year to year. In many cases the organisations partner with the UN agencies, though often they pursue independent goals. For example, as consensus began to form for the SDG zero hunger goal to aim to end hunger by 2030, a number of organizations formed initiatives with the more ambitious target to achieve this outcome early, by 2025: In 2013 Caritas International started a Caritas-wide initiative aimed at ending systemic hunger by 2025. The One human family, food for all campaign focuses on awareness raising, improving the impact of Caritas programs and advocating the implementation of the right to food. The partnership Compact2025, led by IFPRI with the involvement of UN organisations, NGOs and private foundations develops and disseminates evidence-based advice to politicians and other decision-makers aimed at ending hunger and undernutrition in the coming 10 years, by 2025. 
It bases its claim that hunger can be ended by 2025 on a report by Shenggen Fan and Paul Polman that analyzed the experiences from Russia, China, Vietnam, Brazil and Thailand and concludes that eliminating hunger and undernutrition was possible by 2025. In June 2015, the European Union and the Bill & Melinda Gates Foundation launched a partnership to combat undernutrition especially in children. The program would initially be implemented in Bangladesh, Burundi, Ethiopia, Kenya, Laos and Niger and will help these countries to improve information and analysis about nutrition so they can develop effective national nutrition policies. Sustainable Development Goal 2 (SDG 2 or Goal 2) The objective of SDG 2 is to "end hunger, achieve food security and improved nutrition and promote sustainable agriculture" by 2030. SDG2 recognizes that dealing with hunger is not only based on increasing food production but also on proper markets, access to land and technology and increased and efficient incomes for farmers. A report by the International Food Policy Research Institute (IFPRI) of 2013 argued that the emphasis of the SDGs should be on eliminating hunger and under-nutrition, rather than on poverty, and that attempts should be made to do so by 2025 rather than 2030. The argument is based on an analysis of experiences in Russia, China, Vietnam, Brazil, and Thailand and the fact that people suffering from severe hunger face extra impediments to improving their lives, whether it be by education or work. Three pathways to achieve this were identified: 1) agriculture-led; 2) social protection- and nutrition- intervention-led; or 3) a combination of both of these approaches. Regional Much of the world's regional alliances are located in Africa. For example, the Alliance for Food Sovereignty in Africa or the Alliance for a Green Revolution in Africa. The Food and Agriculture Organization of the UN has created a partnership that will act through the African Union's CAADP framework aiming to end hunger in Africa by 2025. It includes different interventions including support for improved food production, a strengthening of social protection and integration of the right to food into national legislation. National Examples of hunger relief organisations that operate on the national level include The Trussell Trust in the United Kingdom, the Nalabothu Foundation in India, and Feeding America in the United States. Local Food bank A food bank (or foodbank) is a non-profit, charitable organization that aids in the distribution of food to those who have difficulty purchasing enough to avoid hunger. Food banks tend to run on different operating models depending on where they are located. In the U.S., Australia, and to some extent in Canada, foodbanks tend to perform a warehouse type function, storing and delivering food to front line food orgs, but not giving it directly to hungry peoples themselves. In much of Europe and elsewhere, food banks operate on the front line model, where they hand out parcels of uncooked food direct to the hungry, typically giving them enough for several meals which they can eat in their homes. In the U.S and Australia, establishments that hand out uncooked food to individual people are instead called food pantries, food shelves or food closets'. In Less Developed Countries, there are charity-run food banks that operate on a semi-commercial system that differs from both the more common "warehouse" and "frontline" models. 
In some rural LDCs such as Malawi, food is often relatively cheap and plentiful for the first few months after the harvest, but then becomes more and more expensive. Food banks in those areas can buy large amounts of food shortly after the harvest, and then, as food prices start to rise, sell it back to local people throughout the year at well below market prices. Such food banks will sometimes also act as centers to provide smallholders and subsistence farmers with various forms of support. Soup kitchen A soup kitchen, meal center, or food kitchen is a place where food is offered to the hungry for free or at a below-market price. Frequently located in lower-income neighborhoods, they are often staffed by volunteer organizations, such as church or community groups. Soup kitchens sometimes obtain food from a food bank for free or at a low price, because they are considered a charity, which makes it easier for them to feed the many people who require their services. Others Local establishments calling themselves "food banks" or "soup kitchens" are often run either by Christian churches or, less frequently, by secular civil society groups. Other religions carry out similar hunger relief efforts, though sometimes with slightly different methods. For example, in the Sikh tradition of Langar, food is served to the hungry directly from Sikh temples. There are exceptions to this; for example, in the UK Sikhs run some of the food banks, as well as giving out food directly from their Gurdwaras. Hunger and gender World Bank studies consistently find that about 60% of those who are hungry are female. Globally, women typically face greater economic barriers compared to men and have access to fewer resources, creating greater obstacles to food security. In both developing and advanced countries, parents sometimes go without food so they can feed their children. Women, however, seem more likely to make this sacrifice than men. Older sources sometimes claim this phenomenon is unique to developing countries, due to greater sexual inequality. More recent findings suggested that mothers often miss meals in advanced economies too. For example, a 2012 study undertaken by Netmums in the UK found that one in five mothers sometimes misses out on food to save their children from hunger. Single-parent households are especially vulnerable to food insecurity and highlight a gender disparity in food security. In the U.S., households with children raised by single mothers are more likely to be food insecure than households headed by single fathers. Differences in time allocation between paid work and unpaid work may also be an explanation for increased food disparity in women-led households, as women tend to dedicate comparatively more time to unpaid work. In several periods and regions, gender has also been an important factor determining whether or not victims of hunger would make suitable examples for generating enthusiasm for hunger relief efforts. James Vernon, in his Hunger: A Modern History, wrote that in Britain before the twentieth century, it was generally only women and children suffering from hunger who could arouse compassion. Men who failed to provide for themselves and their families were often regarded with contempt. This changed after World War I, when thousands of men who had proved their manliness in combat found themselves unable to secure employment. 
Similarly, female gender could be advantageous for those wishing to advocate for hunger relief, with Vernon writing that being a woman helped Emily Hobhouse draw the plight of hungry people to wider attention during the Second Boer War. Hunger and age United States The elderly have an increased risk of going hungry as well as of suffering greater negative effects from hunger. In the US, the number of seniors experiencing hunger rose 88% between 2001 and 2011. This age group suffers the most from chronic conditions, including heart disease, diabetes, and respiratory diseases. Eighty percent of this group have a minimum of one chronic condition, and almost 70% have two or more. These illnesses are exacerbated by, and more likely to develop under, the added strain of hunger. A report from 2017 shows that seniors facing this issue are 60% more likely to experience depression than seniors who are not hungry, and 40% more likely to develop congestive heart failure. The added stress of inconsistent and inadequate feeding makes these conditions much more dangerous. Fixed incomes often limit the elderly's ability to freely purchase food necessities. Medical costs and housing may take priority over quality foods. Limited mobility makes it difficult for these individuals to physically leave their homes, especially in areas lacking public transportation or transportation accessible to people with disabilities. The COVID-19 pandemic made things more difficult: older people statistically suffer worse outcomes, and so could be reluctant to venture out for food. The Supplemental Nutrition Assistance Program (SNAP) provides aid to low-income seniors in relation to food security. This is an opportunity for seniors who receive benefits to allocate money in their budgets for other needs, such as medical or housing bills. However, participation is extremely low. Fewer than half of eligible seniors are enrolled and receive benefits; three out of five seniors are qualified but not enrolled.
Biology and health sciences
Health and fitness
null
149426
https://en.wikipedia.org/wiki/Subnet
Subnet
A subnetwork, or subnet, is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting. Computers that belong to the same subnet share an identical group of the most-significant bits of their IP addresses. This results in the logical division of an IP address into two fields: the network number or routing prefix, and the rest field or host identifier. The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed as the first address of a network, written in Classless Inter-Domain Routing (CIDR) notation, followed by a slash character (/), and ending with the bit-length of the prefix. For example, is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range to belong to this network, with as the subnet broadcast address. The IPv6 address specification is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix. For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an IP address. For example, the prefix would have the subnet mask . Traffic is exchanged between subnets through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets. The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, efficient allocation of address space is necessary. Subnetting may also enhance routing efficiency, or have advantages in network management when subnets are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure, or other structures, such as meshes. Network addressing and routing Computers participating in an IP network have at least one network address. Usually, this address is unique to each device and can either be configured automatically by a network service with the Dynamic Host Configuration Protocol (DHCP), manually by an administrator, or automatically by the operating system with stateless address autoconfiguration. An address fulfills the functions of identifying the host and locating it on the network in destination routing. The most common network addressing architecture is Internet Protocol version 4 (IPv4), but its successor, IPv6, has been increasingly deployed since approximately 2006. An IPv4 address consists of 32 bits. An IPv6 address consists of 128 bits. In both architectures, an IP address is divided into two logical parts, the network prefix and the host identifier. All hosts on a subnet have the same network prefix. This prefix occupies the most-significant bits of the address. The number of bits allocated within a network to the prefix may vary between subnets, depending on the network architecture. The host identifier is a unique local identification and is either a host number on the local network or an interface identifier. 
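Because the article's own numeric examples did not survive formatting, the short sketch below uses Python's standard ipaddress module with the documentation prefix 192.0.2.0/24 as an assumed stand-in value, to show how a CIDR prefix, its dot-decimal subnet mask, and the network's first and broadcast addresses relate to the prefix and host-identifier split just described.

```python
# Relationship between CIDR notation and the subnet mask, shown with Python's
# standard-library ipaddress module. The prefix 192.0.2.0/24 is an assumed
# example (a documentation range), standing in for the article's missing
# sample network.
import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")

print(net.with_prefixlen)     # 192.0.2.0/24   (CIDR notation: prefix + bit-length)
print(net.netmask)            # 255.255.255.0  (dot-decimal subnet mask, 24 one-bits)
print(net.network_address)    # 192.0.2.0      (first address / routing prefix)
print(net.broadcast_address)  # 192.0.2.255    (subnet broadcast address)
print(net.num_addresses)      # 256            (8 host bits give 2**8 addresses)
```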
This addressing structure permits the selective routing of IP packets across multiple networks via special gateway computers, called routers, to a destination host if the network prefixes of origination and destination hosts differ, or sent directly to a target host on the local network if they are the same. Routers constitute logical or physical borders between the subnets, and manage traffic between them. Each subnet is served by a designated default router but may consist internally of multiple physical Ethernet segments interconnected by network switches. The routing prefix of an address is identified by the subnet mask, written in the same form used for IP addresses. For example, the subnet mask for a routing prefix that is composed of the most-significant 24 bits of an IPv4 address is written as . The modern standard form of specification of the network prefix is CIDR notation, used for both IPv4 and IPv6. It counts the number of bits in the prefix and appends that number to the address after a slash (/) character separator. This notation was introduced with Classless Inter-Domain Routing (CIDR). In IPv6 this is the only standards-based form to denote network or routing prefixes. For example, the IPv4 network with the subnet mask is written as , and the IPv6 notation designates the address and its network prefix consisting of the most significant 32 bits. In classful networking in IPv4, before the introduction of CIDR, the network prefix could be directly obtained from the IP address, based on its highest-order bit sequence. This determined the class (A, B, C) of the address and therefore the subnet mask. Since the introduction of CIDR, however, the assignment of an IP address to a network interface requires two parameters, the address and a subnet mask. Given an IPv4 source address, its associated subnet mask, and the destination address, a router can determine whether the destination is on a locally connected network or a remote network. The subnet mask of the destination is not needed, and is generally not known to a router. For IPv6, however, on-link determination is different in detail and requires the Neighbor Discovery Protocol (NDP). IPv6 address assignment to an interface carries no requirement of a matching on-link prefix and vice versa, with the exception of link-local addresses. Since each locally connected subnet must be represented by a separate entry in the routing tables of each connected router, subnetting increases routing complexity. However, by careful design of the network, routes to collections of more distant subnets within the branches of a tree hierarchy can be aggregated into a supernetwork and represented by single routes. Internet Protocol version 4 Determining the network prefix An IPv4 subnet mask consists of 32 bits; it is a sequence of ones (1) followed by a block of zeros (0). The ones indicate bits in the address used for the network prefix and the trailing block of zeros designates that part as being the host identifier. The following example shows the separation of the network prefix and the host identifier from an address () and its associated subnet mask (). The operation is visualized in a table using binary address formats. The result of the bitwise AND operation of IP address and the subnet mask is the network prefix . The host part, which is , is derived by the bitwise AND operation of the address and the ones' complement of the subnet mask. 
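The prefix and host separation just described can be reproduced directly on 32-bit integers. The address and mask below are assumed example values; the code performs the bitwise AND with the mask to obtain the network prefix, and the AND with the ones' complement of the mask to obtain the host part.

```python
# Separating the network prefix and the host identifier with bitwise AND,
# mirroring the operation described above. The address 192.0.2.130 and the
# /24 mask 255.255.255.0 are assumed example values.
import ipaddress

address = int(ipaddress.ip_address("192.0.2.130"))
netmask = int(ipaddress.ip_address("255.255.255.0"))

network_prefix = address & netmask              # AND with the mask
host_part = address & (~netmask & 0xFFFFFFFF)   # AND with the ones' complement

print(ipaddress.ip_address(network_prefix))  # 192.0.2.0
print(ipaddress.ip_address(host_part))       # 0.0.0.130
```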
Subnetting Subnetting is the process of designating some high-order bits from the host part as part of the network prefix and adjusting the subnet mask appropriately. This divides a network into smaller subnets. The following diagram modifies the above example by moving 2 bits from the host part to the network prefix to form four smaller subnets, each one quarter of the previous size. Special addresses and subnets IPv4 uses specially designated address formats to facilitate recognition of special address functionality. The first and the last subnets obtained by subnetting a larger network have traditionally had a special designation and, early on, special usage implications. In addition, IPv4 uses the all-ones host address, i.e. the last address within a network, for broadcast transmission to all hosts on the link. The first subnet obtained from subnetting a larger network has all bits in the subnet bit group set to zero. It is therefore called subnet zero. The last subnet obtained from subnetting a larger network has all bits in the subnet bit group set to one. It is therefore called the all-ones subnet. The IETF originally discouraged the production use of these two subnets. When the prefix length is not available, the larger network and the first subnet have the same address, which may lead to confusion. Similar confusion is possible with the broadcast address at the end of the last subnet. Therefore, reserving the subnet values consisting of all zeros and all ones on the public Internet was recommended, reducing the number of available subnets by two for each subnetting. This inefficiency was removed, and the practice was declared obsolete in 1995; it is only relevant when dealing with legacy equipment. Although the all-zeros and the all-ones host values are reserved for the network address of the subnet and its broadcast address, respectively, in systems using CIDR all subnets are available in a subdivided network. For example, a network can be divided into sixteen usable networks. Each broadcast address, i.e. , , …, , reduces only the host count in each subnet. Subnet host count The number of subnets available and the number of possible hosts in a network may be readily calculated. For instance, the network may be subdivided into the following four subnets. The two address bits taken from the host part become part of the network number in this process. The remaining bits after the subnet bits are used for addressing hosts within the subnet. In the above example, the subnet mask consists of 26 bits, making it 255.255.255.192, leaving 6 bits for the host identifier. This allows for 62 host combinations (2^6 − 2). In general, the number of available hosts on a subnet is 2^h − 2, where h is the number of bits used for the host portion of the address. The number of available subnets is 2^n, where n is the number of bits used for the network portion of the address. There is an exception to this rule for 31-bit subnet masks, which means the host identifier is only one bit long for two permissible addresses. In such networks, usually point-to-point links, only two hosts (the endpoints) may be connected and a specification of network and broadcast addresses is not necessary. Internet Protocol version 6 The design of the IPv6 address space differs significantly from IPv4. The primary reason for subnetting in IPv4 is to improve efficiency in the utilization of the relatively small address space available, particularly to enterprises. 
No such limitations exist in IPv6, as the large address space available, even to end-users, is not a limiting factor. As in IPv4, subnetting in IPv6 is based on the concepts of variable-length subnet masking (VLSM) and the Classless Inter-Domain Routing methodology. It is used to route traffic between the global allocation spaces and within customer networks between subnets and the Internet at large. A compliant IPv6 subnet always uses addresses with 64 bits in the host identifier. Given the address size of 128 bits, it therefore has a /64 routing prefix. Although it is technically possible to use smaller subnets, they are impractical for local area networks based on Ethernet technology, because 64 bits are required for stateless address autoconfiguration. The Internet Engineering Task Force recommends the use of subnets for point-to-point links, which have only two hosts. IPv6 does not implement special address formats for broadcast traffic or network numbers, and thus all addresses in a subnet are acceptable for host addressing. The all-zeroes address is reserved as the subnet-router anycast address. The subnet-router anycast address is the lowest address in the subnet, so it looks like the “network address”. If a router has multiple subnets on the same link, then it has multiple subnet-router anycast addresses on that link. The first and last addresses in any network or subnet are not allowed to be assigned to any individual host. In the past, the recommended allocation for an IPv6 customer site was an address space with a 48-bit () prefix. However, this recommendation was revised to encourage smaller blocks, for example using 56-bit prefixes. Another common allocation size for residential customer networks has a 64-bit prefix.
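To make the allocation sizes above concrete, here is a minimal sketch of how many /64 subnets fit in the customer allocations mentioned; it is plain arithmetic, and the function name is an illustrative assumption rather than anything from the text.

```python
def ipv6_subnet_count(allocation_prefix_len: int, subnet_prefix_len: int = 64) -> int:
    """Number of /64 subnets available within an allocation of the given prefix length."""
    return 2 ** (subnet_prefix_len - allocation_prefix_len)

print(ipv6_subnet_count(48))  # 65536 /64 subnets in a 48-bit allocation
print(ipv6_subnet_count(56))  # 256 /64 subnets in a 56-bit allocation
print(ipv6_subnet_count(64))  # 1 /64 subnet in a 64-bit allocation
```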
Technology
Networks
null
149463
https://en.wikipedia.org/wiki/Insecticide
Insecticide
Insecticides are pesticides used to kill insects. They include ovicides and larvicides used against insect eggs and larvae, respectively. The major use of insecticides is in agriculture, but they are also used in home and garden settings, industrial buildings, for vector control, and control of insect parasites of animals and humans. Acaricides, which kill mites and ticks, are not strictly insecticides, but are usually classified together with insecticides. Some insecticides (including common bug sprays) are effective against other non-insect arthropods as well, such as scorpions, spiders, etc. Insecticides are distinct from insect repellents, which repel but do not kill. Sales In 2016 insecticides were estimated to account for 18% of worldwide pesticide sales. Worldwide sales of insecticides in 2018 were estimated as $18.4 billion, of which 25% were neonicotinoids, 17% were pyrethroids, 13% were diamides, and the rest were many other classes which sold for less than 10% each of the market. Synthetic insecticides Insecticides are most usefully categorised according to their modes of action. The Insecticide Resistance Action Committee (IRAC) lists 30 modes of action plus unknowns. There can be several chemical classes of insecticide with the same mode of action. IRAC lists 56 chemical classes plus unknowns. The mode of action describes how the insecticide kills or inactivates a pest. Development Insecticides with systemic activity against sucking pests, which are safe for pollinators, are sought after, particularly in view of the partial bans on neonicotinoids. Revised 2023 guidance by registration authorities describes the bee testing that is required for new insecticides to be approved for commercial use. Systemicity and Translocation Insecticides may be systemic or non-systemic (contact insecticides). Systemic insecticides penetrate into the plant and move (translocate) inside the plant. Translocation may be upward in the xylem, or downward in the phloem, or both. Systemicity is a prerequisite for the pesticide to be used as a seed treatment. Contact insecticides (non-systemic insecticides) remain on the leaf surface and act through direct contact with the insect. Insects feed from various compartments in the plant. Most of the major pests are either chewing insects or sucking insects. Chewing insects, such as caterpillars, eat whole pieces of leaf. Sucking insects use feeding tubes to feed from phloem (e.g. aphids, leafhoppers, scales and whiteflies), or to suck cell contents (e.g. thrips and mites). An insecticide is more effective if it is in the compartment the insect feeds from. The physicochemical properties of the insecticide determine how it is distributed throughout the plant. Organochlorides The insecticidal properties of the best known organochloride, DDT, were discovered by the Swiss scientist Paul Müller, who was awarded the 1948 Nobel Prize in Physiology or Medicine for this discovery. DDT was introduced in 1944. It functions by opening sodium channels in the insect's nerve cells. The contemporaneous rise of the chemical industry facilitated large-scale production of chlorinated hydrocarbons including various cyclodiene and hexachlorocyclohexane compounds. Although commonly used in the past, many older chemicals have been removed from the market due to their health and environmental effects (e.g. DDT, chlordane, and toxaphene). Organophosphates Organophosphates are another large class of contact insecticides. These also target the insect's nervous system. 
Organophosphates interfere with the enzymes acetylcholinesterase and other cholinesterases, causing an increase in synaptic acetylcholine and overstimulation of the parasympathetic nervous system, killing or disabling the insect. Organophosphate insecticides and chemical warfare nerve agents (such as sarin, tabun, soman, and VX) have the same mechanism of action. Organophosphates have a cumulative toxic effect on wildlife, so multiple exposures to the chemicals amplify the toxicity. In the US, organophosphate use declined with the rise of substitutes. Many of these insecticides, first developed in the mid 20th century, are very poisonous. Many organophosphates do not persist in the environment. Carbamates Carbamate insecticides have similar mechanisms to organophosphates, but have a much shorter duration of action and are somewhat less toxic. Pyrethroids Pyrethroid insecticides mimic the insecticidal activity of the natural compound pyrethrin, the biopesticide found in Pyrethrum (now Chrysanthemum and Tanacetum) species. They have been modified to increase their stability in the environment. These compounds are nonpersistent sodium channel modulators and are less toxic than organophosphates and carbamates. Compounds in this group are often applied against household pests. Some synthetic pyrethroids are toxic to the nervous system. Neonicotinoids Neonicotinoids are a class of neuro-active insecticides chemically similar to nicotine, but with much lower acute mammalian toxicity and greater field persistence. These chemicals are acetylcholine receptor agonists. They are broad-spectrum systemic insecticides, with rapid action (minutes to hours). They are applied as sprays, drenches, and seed and soil treatments. Treated insects exhibit leg tremors, rapid wing motion, stylet withdrawal (aphids), disoriented movement, paralysis and death. Imidacloprid, of the neonicotinoid family, is the most widely used insecticide in the world. In the late 1990s neonicotinoids came under increasing scrutiny over their environmental impact and were linked in a range of studies to adverse ecological effects, including honey-bee colony collapse disorder (CCD), loss of birds due to a reduction in insect populations, and increased susceptibility of rice to planthopper attacks. In 2013, the European Union and a few non-EU countries restricted the use of certain neonicotinoids. Phenylpyrazoles Phenylpyrazole insecticides, such as fipronil, are a class of synthetic insecticides that operate by interfering with GABA receptors. Butenolides Butenolide pesticides are a novel group of chemicals, similar to neonicotinoids in their mode of action, that have so far only one representative: flupyradifurone. They are acetylcholine receptor agonists, like neonicotinoids, but with a different pharmacophore. They are broad-spectrum systemic insecticides, applied as sprays, drenches, and seed and soil treatments. Although the classic risk assessment considered this insecticide group (and flupyradifurone specifically) safe for bees, novel research has raised concern about their lethal and sublethal effects, alone or in combination with other chemicals or environmental factors. Diamides Diamides selectively activate insect ryanodine receptors (RyR), which are large calcium release channels present in cardiac and skeletal muscle, leading to the loss of calcium crucial for biological processes. This causes insects to become lethargic, stop feeding, and eventually die. 
The first insecticide from this class to be registered was flubendiamide. Insect growth regulators Insect growth regulator (IGR) is a term coined to include insect hormone mimics and an earlier class of chemicals, the benzoylphenyl ureas, which inhibit chitin (exoskeleton) biosynthesis in insects. Diflubenzuron is a member of the latter class, used primarily to control caterpillars that are pests. Of these, methoprene is the most widely used. It has no observable acute toxicity in rats and is approved by the World Health Organization (WHO) for use in drinking water cisterns to combat malaria. Most of its uses are to combat insects where the adult is the pest, including mosquitoes, several fly species, and fleas. Two very similar products, hydroprene and kinoprene, are used for controlling species such as cockroaches and white flies. Methoprene was registered with the EPA in 1975. Virtually no reports of resistance have been filed. A more recent type of IGR is the ecdysone agonist tebufenozide (MIMIC), which is used in forestry and other applications for control of caterpillars, which are far more sensitive to its hormonal effects than other insect orders. Biological pesticides Definition The EU defines biopesticides as "a form of pesticide based on micro-organisms or natural products". The US EPA defines biopesticides as “certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals”. Microorganisms that control pests may also be categorised as biological pest control agents, together with larger organisms such as parasitic insects, entomopathic nematodes, etc. Natural products may also be categorised as chemical insecticides. The US EPA describes three types of biopesticide. Biochemical pesticides (meaning bio-derived chemicals), which are naturally occurring substances that control pests by non-toxic mechanisms. Microbial pesticides consisting of a microorganism (e.g., a bacterium, fungus, virus or protozoan) as the active ingredient. Plant-Incorporated-Protectants (PIPs) are pesticidal substances that plants produce from genetic material that has been added to the plant (thus producing transgenic crops). Market The global bio-insecticide market was estimated to be less than 10% of the total insecticide market. The bio-insecticide market is dominated by microbials. The bio-insecticide market is growing by more than 10% yearly, a higher growth rate than that of the total insecticide market, mainly due to the increase in organic farming and IPM, and also due to favourable government policies. Biopesticides are regarded by the US and European authorities as posing fewer risks of environmental and mammalian toxicity. Biopesticides are more than 10× (often 100×) cheaper and 3× faster to register than synthetic pesticides. Advantages and disadvantages There is a wide variety of biological insecticides with differing attributes, but in general the following have been described. They are easier, faster and cheaper to register, usually with lower mammalian toxicity. They are more specific, and thus preserve beneficial insects and biodiversity in general. This makes them compatible with IPM regimes. They degrade rapidly and cause less impact on the environment. They have a shorter withholding period. The spectrum of control is narrow. They are less effective and more susceptible to adverse ambient conditions. They degrade rapidly and are thus less persistent. They are slower to act. They are more expensive, have a shorter shelf-life, and are more difficult to source. 
They require more specialised knowledge to use. Plant Extracts Most or all plants produce chemical insecticides to deter insects from eating them. Extracts and purified chemicals from thousands of plants have been shown to be insecticidal; however, only a few are used in agriculture. In the USA 13 are registered for use, in the EU 6. In Korea, where it is easier to register botanical pesticides, 38 are used. The most used are neem oil, chenopodium, pyrethrins, and azadirachtin. Many botanical insecticides used in past decades (e.g. rotenone, nicotine, ryanodine) have been banned because of their toxicity. Genetically modified crops The first transgenic crop, which incorporated an insecticidal PIP, contained a gene for the CRY toxin from Bacillus thuringiensis (B.t.) and was introduced in 1997. For approximately the next 25 years the only insecticidal agents used in GMOs were the CRY and VIP toxins from various strains of B.t., which control a wide range of insect types. These are widely used, with more than 100 million hectares planted with B.t.-modified crops in 2019. Since 2020 several novel agents have been engineered into plants and approved. ipd072Aa from Pseudomonas chlororaphis, ipd079Ea from Ophioglossum pendulum, and mpp75Aa1.1 from Brevibacillus laterosporus code for protein toxins. The trait dvsnf7 is an RNAi agent consisting of a double-stranded RNA transcript containing a 240 bp fragment of the WCR Snf7 gene of the western corn rootworm (Diabrotica virgifera virgifera). RNA interference RNA interference (RNAi) uses segments of RNA to fatally silence crucial insect genes. As of 2024, two uses of RNAi have been registered by the authorities: genetic modification of a crop to introduce a gene coding for an RNAi fragment, and spraying double-stranded RNA fragments onto a field. Monsanto introduced the trait DvSnf7, which expresses a double-stranded RNA transcript containing a 240 bp fragment of the WCR Snf7 gene of the Western Corn Rootworm. GreenLight Biosciences introduced Ledprona, a formulation of double-stranded RNA as a spray for potato fields. It targets the essential gene for proteasome subunit beta type-5 (PSMB5) in the Colorado potato beetle. Spider toxins Spider venoms contain many, often hundreds, of insecticidally active toxins. Many are proteins that attack the nervous system of the insect. Vestaron introduced for agricultural use a spray formulation of GS-omega/kappa-Hxtx-Hv1a (HXTX), derived from the venom of the Australian blue mountain funnel web spider (Hadronyche versuta). HXTX acts by allosterically (site II) modifying the nicotinic acetylcholine receptor (IRAC group 32). Entomopathic bacteria Entomopathic bacteria can be mass-produced. The most widely used is Bacillus thuringiensis (B.t.), which has been used for decades. There are several strains in use, with different applications against lepidoptera, coleoptera and diptera. Also used are Lysinibacillus sphaericus, Burkholderia spp., and Wolbachia pipientis. Avermectins and spinosyns are bacterial metabolites, mass-produced by fermentation and used as insecticides. The toxins from B.t. have been incorporated into plants through genetic engineering. Entomopathic fungi Entomopathic fungi have been used in agriculture since 1965. Hundreds of strains are now in use. They often kill a broad range of insect species. Most strains are from Beauveria, Metarhizium, Cordyceps and Akanthomyces species. Entomopathic viruses Of the many types of entomopathic viruses, only baculoviruses are used commercially, and each is specific for its target insect. 
They have to be grown on insects, so their production is labour-intensive. Environmental toxicity Effects on nontarget species Some insecticides kill or harm other creatures in addition to those they are intended to kill. For example, birds may be poisoned when they eat food that was recently sprayed with insecticides or when they mistake an insecticide granule on the ground for food and eat it. Sprayed insecticide may drift from the area to which it is applied and into wildlife areas, especially when it is sprayed aerially. Persistence in the environment and accumulation in the food chain DDT was the first synthetic organic insecticide. It was introduced during WW2, and was widely used. One use was vector control, and it was sprayed on open water. It degrades slowly in the environment, and it is lipophilic (fat soluble). It became the first global pollutant, and the first pollutant to accumulate and magnify in the food chain. During the 1950s and 1960s these very undesirable side effects were recognized, and after some often contentious discussion, DDT was banned in many countries in the 1960s and 1970s. Finally, in 2001, DDT and all other persistent insecticides were banned via the Stockholm Convention. For many decades, authorities have required new insecticides to degrade in the environment and not to bioaccumulate. Runoff and percolation Solid bait and liquid insecticides, especially if improperly applied in a location, are moved by water flow. Often, this happens through nonpoint sources where runoff carries insecticides into larger bodies of water. As snow melts and rainfall moves over and through the ground, the water picks up applied insecticides and deposits them into larger bodies of water, rivers, wetlands, and underground sources of previously potable water, and percolates into watersheds. This runoff and percolation of insecticides can affect the quality of water sources, harming the natural ecology and thus indirectly affecting human populations through biomagnification and bioaccumulation. Insect decline Both the number of insects and the number of insect species have declined dramatically and continuously over past decades, causing much concern. Many causes are proposed to contribute to this decline; the most agreed upon are loss of habitat, intensification of farming practices, and insecticide usage. Domestic bees were declining some years ago, but populations and the number of colonies have since risen both in the USA and worldwide. Wild species of bees are still declining. Bird decline Besides the effects of direct consumption of insecticides, populations of insectivorous birds decline due to the collapse of their prey populations. Spraying of wheat and corn in Europe, in particular, is believed to have caused an 80 per cent decline in flying insects, which in turn has reduced local bird populations by one to two thirds. Alternatives Instead of using chemical insecticides to avoid crop damage caused by insects, there are many alternative options available now that can protect farmers from major economic losses. Some of them are: Breeding crops resistant, or at least less susceptible, to pest attacks. Releasing predators, parasitoids, or pathogens to control pest populations as a form of biological control. Chemical control, such as releasing pheromones into the field to confuse insects so that they cannot find mates and reproduce. Integrated Pest Management: using multiple techniques in tandem to achieve optimal results. 
Push-pull technique: intercropping with a "push" crop that repels the pest, and planting a "pull" crop on the boundary that attracts and traps it. Examples Source: Organochlorides Aldrin Chlordane Chlordecone DDT Dieldrin Endosulfan Endrin Heptachlor Hexachlorobenzene Lindane (gamma-hexachlorocyclohexane) Methoxychlor Mirex Pentachlorophenol TDE Organophosphates Acephate Azinphos-methyl Bensulide Chlorethoxyfos Chlorpyrifos Chlorpyriphos-methyl Diazinon Dichlorvos (DDVP) Dicrotophos Dimethoate Disulfoton Ethoprop Fenamiphos Fenitrothion Fenthion Fosthiazate Malathion Methamidophos Methidathion Mevinphos Monocrotophos Naled Omethoate Oxydemeton-methyl Parathion Parathion-methyl Phorate Phosalone Phosmet Phostebupirim Phoxim Pirimiphos-methyl Profenofos Terbufos Tetrachlorvinphos Tribufos Trichlorfon Carbamates Aldicarb Bendiocarb Carbofuran Carbaryl Dioxacarb Fenobucarb Fenoxycarb Isoprocarb Methomyl Oxamyl Propoxur 2-(1-Methylpropyl)phenyl methylcarbamate Pyrethroids Allethrin Bifenthrin Cyhalothrin, Lambda-cyhalothrin Cypermethrin Cyfluthrin Deltamethrin Etofenprox Fenvalerate Permethrin Phenothrin Prallethrin Resmethrin Tetramethrin Tralomethrin Transfluthrin Neonicotinoids Acetamiprid Clothianidin Dinotefuran Imidacloprid Nithiazine Thiacloprid Thiamethoxam Ryanoids Chlorantraniliprole Cyantraniliprole Flubendiamide Insect growth regulators Benzoylureas Diflubenzuron Flufenoxuron Cyromazine Methoprene Hydroprene Tebufenozide Derived from plants or microbes Anabasine Anethole (mosquito larvae) Annonin Asimina (pawpaw tree seeds) for lice Azadirachtin Caffeine Carapa Cinnamaldehyde (very effective for killing mosquito larvae) Cinnamon leaf oil (very effective for killing mosquito larvae) Cinnamyl acetate (kills mosquito larvae) Citral Citronellol Deguelin Derris (active ingredient is rotenone) Desmodium caudatum (leaves and roots) Eucalyptol Eugenol (mosquito larvae) Hinokitiol Ivermectin Limonene Linalool Menthol Myristicin Neem (Azadirachtin) Nicotine Nootkatone Peganum harmala, seeds (smoke from), root Oregano oil kills Rhyzopertha dominica (bug found in stored cereal) Pyrethrum Quassia (South American plant genus) Ryanodine Spinosad AKA Spinosyn A Spinosyn D Tetranortriterpenoid Thymol (controls varroa mites in bee colonies) Biologicals Bacillus sphaericus Bacillus thuringiensis Bacillus thuringiensis aizawi Bacillus thuringiensis israelensis Bacillus thuringiensis kurstaki Bacillus thuringiensis tenebrionis Nuclear Polyhedrosis virus Granulovirus Lecanicillium lecanii Inorganic/mineral derived insecticides Diatomaceous earth Borax Boric Acid
Technology
Pest and disease control
null
149467
https://en.wikipedia.org/wiki/Myalgia
Myalgia
Myalgia or muscle pain is a painful sensation arising from muscle tissue. It is a symptom of many diseases. The most common cause of acute myalgia is the overuse of a muscle or group of muscles; another likely cause is viral infection, especially when there has been no injury. Long-lasting myalgia can be caused by metabolic myopathy, some nutritional deficiencies, ME/CFS, fibromyalgia, and amplified musculoskeletal pain syndrome. Causes The most common causes of myalgia are overuse, injury, and strain. Myalgia might also be caused by allergies, diseases, or medications, or occur as a response to a vaccination. Dehydration at times results in muscle pain as well, especially for people involved in extensive physical activity such as working out. Muscle pain is also a common symptom in a variety of diseases, including infectious diseases, such as influenza, muscle abscesses, Lyme disease, malaria, trichinosis or poliomyelitis; autoimmune diseases, such as celiac disease, systemic lupus erythematosus, Sjögren's syndrome or polymyositis; gastrointestinal diseases, such as non-celiac gluten sensitivity (which can also occur without digestive symptoms) and inflammatory bowel disease (including Crohn's disease and ulcerative colitis). The most common causes are: Overuse Overuse of a muscle is using it too much, too soon or too often. One example is repetitive strain injury.
Biology and health sciences
Symptoms and signs
Health
149612
https://en.wikipedia.org/wiki/Sertraline
Sertraline
Sertraline, sold under the brand name Zoloft among others, is an antidepressant medication of the selective serotonin reuptake inhibitor (SSRI) class used to treat major depressive disorder, generalized anxiety disorder, social anxiety disorder, obsessive–compulsive disorder (OCD), panic disorder, and premenstrual dysphoric disorder. Although it is also approved for post-traumatic stress disorder (PTSD), findings indicate that it leads to only modest improvements in symptoms associated with this condition. The drug shares the common side effects and contraindications of other SSRIs, with high rates of nausea, diarrhea, headache, insomnia, mild sedation, dry mouth, and sexual dysfunction, but it appears not to lead to much weight gain, and its effects on cognitive performance are mild. Similar to other antidepressants, the use of sertraline for depression may be associated with a mildly elevated rate of suicidal thoughts in people under the age of 25. It should not be used together with monoamine oxidase inhibitors (MAOIs): this combination may cause serotonin syndrome, which can be life-threatening in some cases. Sertraline taken during pregnancy is associated with an increase in congenital heart defects in newborns. Sertraline was developed by scientists at Pfizer and approved for medical use in the United States in 1991. It is on the World Health Organization's List of Essential Medicines and available as a generic medication. In 2016, sertraline was the most commonly prescribed psychotropic medication in the United States. It was also the eleventh most commonly prescribed medication in the United States, with more than 39 million prescriptions in 2022, and it ranked among the top 10 most prescribed medications in Australia between 2017 and 2023. For alleviating the symptoms of depression and anxiety, the drug is usually second in potency to another SSRI, escitalopram. Sertraline's effectiveness is similar to that of other antidepressants in its class, such as fluoxetine and paroxetine, which are also considered first-line treatments and are better tolerated than the older tricyclic antidepressants. Medical uses Sertraline has been approved for major depressive disorder, obsessive–compulsive disorder (OCD), post-traumatic stress disorder (PTSD), premenstrual dysphoric disorder, panic disorder, social anxiety disorder (SAD), and generalized anxiety disorder (GAD). Sertraline is approved for use in children with OCD. Depression In meta-analyses, sertraline displays similar efficacy to other SSRI antidepressants, with an odds ratio for response in clinical depression of between 1.44 and 1.67. However, as with other antidepressants, the nature and clinical significance of this effect remain disputed. A major study of sertraline in a broad primary care population found improvements in general mental health, quality of life, and anxiety. However, it failed to find significant effects on depression in either the mildly or severely depressed, and the clinical relevance and accuracy of the positive effects found have been questioned. In several double-blind studies, sertraline was consistently more effective than placebo for dysthymia, a more chronic variety of depression, and comparable to imipramine in that respect. Sertraline also improves the functional impairments of dysthymia to a similar degree whether or not it is combined with group cognitive-behavioral therapy. 
Limited pediatric data also demonstrate a reduction in depressive symptoms in the pediatric population, though sertraline remains a second-line therapy after fluoxetine. Comparison with other antidepressants In general, sertraline efficacy is similar to that of other antidepressants. For example, a meta-analysis of 12 new-generation antidepressants showed that sertraline and escitalopram are the best in terms of efficacy and acceptability in the acute-phase treatment of adults with depression. Comparative clinical trials demonstrated that sertraline is similar in efficacy against depression to moclobemide, nefazodone, escitalopram, bupropion, citalopram, fluvoxamine, paroxetine, venlafaxine, and mirtazapine. Sertraline may be more efficacious for the treatment of depression in the acute phase (first four weeks) than fluoxetine. There are differences between sertraline and some other antidepressants in their efficacy in the treatment of different subtypes of depression and in their adverse effects. For severe depression, sertraline is as good as clomipramine but is better tolerated. Sertraline appears to work better in melancholic depression than fluoxetine, paroxetine, and mianserin, and is similar to tricyclic antidepressants such as amitriptyline and clomipramine. In the treatment of depression accompanied by OCD, sertraline performs significantly better than desipramine on the measures of both OCD and depression. Sertraline is equivalent to imipramine for the treatment of depression with co-morbid panic disorder, but it is better tolerated. Compared with amitriptyline, sertraline offered a greater overall improvement in the quality of life of depressed patients. Depression in the elderly Sertraline used for the treatment of depression in elderly (older than 60) patients is superior to placebo and comparable to another SSRI, fluoxetine, and to the tricyclic antidepressants (TCAs) amitriptyline, nortriptyline and imipramine. Sertraline has much lower rates of adverse effects than these TCAs, with the exception of nausea, which occurs more frequently with sertraline. In addition, sertraline appears to be more effective than fluoxetine or nortriptyline in the older-than-70 subgroup. Accordingly, a meta-analysis of antidepressants in older adults found that sertraline, paroxetine and duloxetine were better than placebo. However, in a 2003 trial the effect size was modest, and there was no improvement in quality of life as compared to placebo. With depression in dementia, there is no benefit of sertraline treatment compared to either placebo or mirtazapine. Obsessive–compulsive disorder Sertraline is effective for the treatment of OCD in adults, adolescents and children. It was better tolerated and, based on intention-to-treat analysis, performed better than the gold standard of OCD treatment, clomipramine. Continuing sertraline treatment helps prevent relapses of OCD, with long-term data supporting its use for up to 24 months. The sertraline dosages necessary for the effective treatment of OCD are higher than the usual dosage for depression. The onset of action is also slower for OCD than for depression. The treatment recommendation is to start treatment with half of the maximal recommended dose for at least two months. After that, the dose can be raised to the maximal recommended dose in cases of unsatisfactory response. Cognitive behavioral therapy alone is not more effective than sertraline in adolescents and children; however, a combination of these treatments is effective. 
Panic disorder Sertraline is superior to placebo for the treatment of panic disorder. The response rate was independent of the dose. In addition to decreasing the frequency of panic attacks by about 80% (vs. 45% for placebo) and decreasing general anxiety, sertraline resulted in an improvement in quality of life on most parameters. The patients rated as "improved" on sertraline reported a better quality of life than the ones who "improved" on placebo. The authors of the study argued that the improvement achieved with sertraline is different and of a better quality than the improvement achieved with a placebo. Sertraline is equally effective for men and women, and for patients with or without agoraphobia. Previous unsuccessful treatment with benzodiazepines does not diminish its efficacy. However, the response rate was lower for the patients with more severe panic. Starting treatment simultaneously with sertraline and clonazepam, with subsequent gradual discontinuation of clonazepam, may accelerate the response. Double-blind comparative studies found sertraline to have the same effect on panic disorder as paroxetine or imipramine. While imprecise, comparison of the results of trials of sertraline with separate trials of other anti-panic agents (clomipramine, imipramine, clonazepam, alprazolam, and fluvoxamine) indicates approximate equivalence of these medications. Other anxiety disorders Sertraline has been successfully used for the treatment of social anxiety disorder. All three major domains of the disorder (fear, avoidance, and physiological symptoms) respond to sertraline. Maintenance treatment, after the response is achieved, prevents the return of the symptoms. The improvement is greater among the patients with later, adult onset of the disorder. In a comparison trial, sertraline was superior to exposure therapy, but patients treated with the psychological intervention continued to improve during a year-long follow-up, while those treated with sertraline deteriorated after treatment termination. The combination of sertraline and cognitive behavioral therapy appears to be more effective in children and young people than either treatment alone. Sertraline has not been approved for the treatment of generalized anxiety disorder; however, several guidelines recommend it as a first-line medication, referring to good-quality controlled clinical trials. Premenstrual dysphoric disorder Sertraline is effective in alleviating the symptoms of premenstrual dysphoric disorder, a severe form of premenstrual syndrome. Significant improvement was observed in 50–60% of cases treated with sertraline vs. 20–30% of cases on placebo. The improvement began during the first week of treatment, and in addition to mood, irritability, and anxiety, improvement was reflected in better family functioning, social activity, and general quality of life. Work functioning and physical symptoms, such as swelling, bloating, and breast tenderness, were less responsive to sertraline. Taking sertraline only during the luteal phase, that is, the 12–14 days before menses, is not as effective as continuous treatment. Continuous treatment with sub-therapeutic doses of sertraline (25 mg vs. the usual 50–100 mg) is also effective. Other indications Sertraline is approved for the treatment of post-traumatic stress disorder (PTSD). The National Institute for Clinical Excellence recommends it for patients who prefer drug treatment to a psychological one. 
Other guidelines also suggest sertraline as a first-line option for pharmacological therapy. When necessary, long-term pharmacotherapy can be beneficial. There are both negative and positive clinical trial results for sertraline, which may be explained by the types of psychological traumas, symptoms, and comorbidities included in the various studies. Positive results were obtained in trials that included predominantly women (75%) with a majority (60%) having physical or sexual assault as the traumatic event. Somewhat contrary to the above suggestions, a meta-analysis of sertraline clinical trials for PTSD found it to be statistically superior to placebo in the reduction of PTSD symptoms but the effect size was small. Another meta-analysis relegated sertraline to the second line, proposing trauma focused psychotherapy as a first-line intervention. The authors noted that Pfizer had declined to submit the results of a negative trial for inclusion in the meta-analysis making the results unreliable. Sertraline, when taken daily, can be useful for the treatment of premature ejaculation. A disadvantage of sertraline is that it requires continuous daily treatment to delay ejaculation significantly. A 2019 systematic review suggested that sertraline may be a good way to control anger, irritability, and hostility in depressed patients and patients with other comorbidities. Contraindications Sertraline is contraindicated in individuals taking monoamine oxidase inhibitors or the antipsychotic pimozide. Sertraline concentrate contains ethanol and is therefore contraindicated with disulfiram. The prescribing information recommends that treatment of the elderly and patients with liver impairment "must be approached with caution". Due to the slower elimination of sertraline in these groups, their exposure to sertraline may be as high as three times the average exposure for the same dose. Side effects Nausea, ejaculation failure, insomnia, diarrhea, dry mouth, somnolence, dizziness, tremor, headache, excessive sweating, fatigue, restless legs syndrome and decreased libido are the common adverse effects associated with sertraline with the greatest difference from placebo. Those that most often result in interruption of the treatment are nausea, diarrhea, and insomnia. The incidence of diarrhea is higher with sertraline – especially when prescribed at higher doses – in comparison with other SSRIs. Over more than six months of sertraline therapy for depression, people showed no significant weight increase. A 30-month-long treatment with sertraline for OCD also resulted in no significant weight gain. Although the difference did not reach statistical significance, the average weight gain was lower for fluoxetine (1%) but higher for citalopram, fluvoxamine and paroxetine (2.5%). Of the sertraline group, 4.5% gained a large amount of weight (defined as more than 7% gain). This result compares favorably with placebo, where, according to the literature, 3–6% of patients gained more than 7% of their initial weight. The large weight gain was observed only among female members of the sertraline group; the significance of this finding is unclear because of the small size of the group. Over a two-week treatment of healthy volunteers, sertraline slightly improved verbal fluency but did not affect word learning, short-term memory, vigilance, flicker fusion time, choice reaction time, memory span, or psychomotor coordination. 
Despite lower subjective ratings, that is, participants feeling that they had performed worse, no clinically relevant differences in objective cognitive performance were observed in a group of people treated for depression with sertraline for 1.5 years as compared to healthy controls. In children and adolescents taking sertraline for six weeks for anxiety disorders, 18 out of 20 measures of memory, attention, and alertness stayed unchanged. Divided attention was improved and verbal memory under interference conditions decreased marginally. Because of the large number of measures taken, it is possible that these changes were still due to chance. The unique effect of sertraline on dopaminergic neurotransmission may be related to these effects on cognition and vigilance. Sertraline has a low level of exposure of an infant through the breast milk and is recommended as the preferred option for antidepressant therapy of breast-feeding mothers. There is a 29–42% increase in congenital heart defects among children whose mothers were prescribed sertraline during pregnancy, with sertraline use in the first trimester associated with a 2.7-fold increase in septal heart defects. Abrupt interruption of sertraline treatment may result in withdrawal or discontinuation syndrome. Dizziness, insomnia, anxiety, agitation, and irritability are common symptoms. It typically occurs within a few days of drug discontinuation and lasts a few weeks. The withdrawal symptoms for sertraline are less severe and frequent than for paroxetine, and more frequent than for fluoxetine. In most cases symptoms are mild, short-lived, and resolve without treatment. More severe cases are often successfully treated by temporary reintroduction of the drug with a slower tapering-off rate. Sertraline and SSRI antidepressants in general may be associated with bruxism and other movement disorders. Sertraline appears to be associated with microscopic colitis, a rare condition of unknown etiology. Sexual Like other SSRIs, sertraline is associated with sexual side effects, including sexual arousal disorder, erectile dysfunction and difficulty achieving orgasm. While nefazodone and bupropion do not have negative effects on sexual functioning, 67% of men on sertraline experienced ejaculation difficulties versus 18% before the treatment. Sexual arousal disorder, defined as "inadequate lubrication and swelling for women and erectile difficulties for men", occurred in 12% of people on sertraline as compared with 1% of patients on placebo. The mood improvement resulting from treatment with sertraline sometimes counteracted these side effects, so that sexual desire and overall satisfaction with sex stayed the same as before the sertraline treatment. However, with placebo, desire and satisfaction improved slightly. Some people continue experiencing sexual side effects after they stop taking SSRIs. Suicide The US Food and Drug Administration (FDA) requires all antidepressants, including sertraline, to carry a boxed warning stating that antidepressants increase the risk of suicide in persons younger than 25 years. This warning is based on statistical analyses conducted by two independent groups of FDA experts that found a 100% increase in suicidal thoughts and behavior in children and adolescents, and a 50% increase in the 18–24 age group. Suicidal ideation and behavior in clinical trials are rare. 
For the above analysis, the FDA combined the results of 295 trials of 11 antidepressants for psychiatric indications to obtain statistically significant results. Considered separately, sertraline use in adults decreased the odds of suicidal behavior with a marginal statistical significance of 37% or 50% depending on the statistical technique used. The authors of the FDA analysis note that "given the large number of comparisons made in this review, chance is a very plausible explanation for this difference". The more complete data submitted later by the sertraline manufacturer Pfizer indicated increased suicidal behavior. Similarly, the analysis conducted by the UK MHRA found a 50% increase of odds of suicide-related events, not reaching statistical significance, in the patients on sertraline as compared to the ones on placebo. Overdose Acute overdosage is often manifested by emesis, lethargy, ataxia, tachycardia and seizures. Plasma, serum or blood concentrations of sertraline and norsertraline, its major active metabolite, may be measured to confirm a diagnosis of poisoning in hospitalized patients or to aid in the medicolegal investigation of fatalities. As with most other SSRIs its toxicity in overdose is considered relatively low. Interactions As with other SSRIs, sertraline may increase the risk of bleeding with NSAIDs (ibuprofen, naproxen, mefenamic acid), antiplatelet drugs, anticoagulants, omega-3 fatty acids, vitamin E, and garlic supplements due to sertraline's inhibitory effects on platelet aggregation via blocking serotonin transporters on platelets. Sertraline, in particular, may potentially diminish the efficacy of levothyroxine. Sertraline is a moderate inhibitor of CYP2D6 and CYP2B6 in vitro. Accordingly, in human trials it caused increased blood levels of CYP2D6 substrates such as metoprolol, dextromethorphan, desipramine, imipramine and nortriptyline, as well as the CYP3A4/CYP2D6 substrate haloperidol. This effect is dose-dependent; for example, co-administration with 50 mg of sertraline resulted in 20% greater exposure to desipramine, while 150 mg of sertraline led to a 70% increase. In a placebo-controlled study, the concomitant administration of sertraline and methadone caused a 40% increase in blood levels of the latter, which is primarily metabolized by CYP2B6. Bupropion is metabolized by CYP2B6, which is inhibited by sertraline, and this may result in an interaction between sertraline and bupropion. Sertraline had a slight inhibitory effect on the metabolism of diazepam, tolbutamide, and warfarin, which are CYP2C9 or CYP2C19 substrates; the clinical relevance of this effect was unclear. As expected from in vitro data, sertraline did not alter the human metabolism of the CYP3A4 substrates erythromycin, alprazolam, carbamazepine, clonazepam, and terfenadine; neither did it affect metabolism of the CYP1A2 substrate clozapine. Sertraline did not affect the actions of digoxin and atenolol, which are not metabolized in the liver. Case reports suggest that taking sertraline with phenytoin or zolpidem may induce sertraline metabolism and decrease its efficacy, and that taking sertraline with lamotrigine may increase the blood level of lamotrigine, possibly by inhibition of glucuronidation. CYP2C19 inhibitor esomeprazole increased sertraline concentrations in blood plasma by approximately 40%. Clinical reports indicate that interaction between sertraline and the MAOIs isocarboxazid and tranylcypromine may cause serotonin syndrome. 
In a placebo-controlled study in which sertraline was co-administered with lithium, 35% of the subjects experienced tremors, while none of those taking placebo did. Pharmacology Pharmacodynamics Sertraline is a selective serotonin reuptake inhibitor (SSRI). By binding to the serotonin transporter (SERT), it inhibits neuronal reuptake of serotonin and potentiates serotonergic activity in the central nervous system. Over time, this leads to a downregulation of pre-synaptic 5-HT1A receptors, which is associated with an improvement in passive stress tolerance, and a delayed downstream increase in expression of brain-derived neurotrophic factor (BDNF), which may contribute to a reduction in negative affective biases. It does not significantly affect histamine, acetylcholine, GABA or benzodiazepine receptors. Sertraline also shows relatively high activity as an inhibitor of the dopamine transporter (DAT), occupying ~20% of DAT at doses of 200 mg and above, and as an antagonist of the sigma-1 (σ1) receptor (but not the σ2 receptor). However, sertraline's affinity for its main target (SERT) is much greater than its affinity for the σ1 receptor and DAT. Although there could be a role for the σ1 receptor in the pharmacology of sertraline, the significance of this receptor in its actions is unclear. Similarly, the clinical relevance of sertraline's blockade of the dopamine transporter is uncertain. Pharmacokinetics Absorption Following a single oral dose of sertraline, mean peak blood levels of sertraline occur between 4.5 and 8.4 hours. Bioavailability is likely linear and dose-proportional over a dose range of 150 to 200 mg. Concomitant intake of sertraline with food slightly increases sertraline peak levels and total exposure. There is an approximate 2-fold accumulation of sertraline with continuous administration, and steady-state levels are reached within one week. Distribution Sertraline is highly plasma protein bound (98.5%) across a concentration range of 20 to 500 ng/mL. Despite the high plasma protein binding, sertraline and its metabolite desmethylsertraline at respective tested concentrations of 300 ng/mL and 200 ng/mL were found not to interfere with the plasma protein binding of warfarin and propranolol, two other highly plasma protein-bound drugs. Metabolism Sertraline is subject to extensive first-pass metabolism, as indicated by a small study of radiolabeled sertraline in which less than 5% of plasma radioactivity was unchanged sertraline in two males. The principal metabolic pathway for sertraline is N-demethylation into desmethylsertraline (N-desmethylsertraline), mainly by CYP2B6. Reduction, hydroxylation, and glucuronide conjugation of both sertraline and desmethylsertraline also occur. Desmethylsertraline, while pharmacologically active, is substantially (50-fold) weaker than sertraline as a serotonin reuptake inhibitor and its influence on the clinical effects of sertraline is thought to be negligible. Based on in vitro studies, sertraline is metabolized by multiple cytochrome P450 isoforms; however, it appears that in the human body CYP2C19 plays the most important role, followed by CYP2B6. In addition to the cytochrome P450 system, sertraline can be oxidatively deaminated in vitro by monoamine oxidases; however, this metabolic pathway has never been studied in vivo. Elimination The elimination half-life of sertraline is on average 26 hours, with a range of 13 to 45 hours. The elimination half-life of desmethylsertraline is 62 to 104 hours. 
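The accumulation and time-to-steady-state figures quoted above follow from first-order elimination kinetics. A minimal sketch, assuming once-daily dosing (the 24-hour interval is an assumption for illustration, not a figure from the text) and the mean 26-hour half-life:

```python
import math

def accumulation_ratio(half_life_h: float, dosing_interval_h: float) -> float:
    """Steady-state accumulation ratio for repeated dosing with first-order elimination."""
    k = math.log(2) / half_life_h                  # elimination rate constant (1/h)
    return 1 / (1 - math.exp(-k * dosing_interval_h))

def time_to_near_steady_state(half_life_h: float, fraction: float = 0.97) -> float:
    """Hours until the stated fraction of the steady-state level is reached."""
    k = math.log(2) / half_life_h
    return -math.log(1 - fraction) / k

t_half = 26.0   # mean sertraline elimination half-life (hours)
tau = 24.0      # assumed once-daily dosing interval (hours)
print(f"accumulation ratio: {accumulation_ratio(t_half, tau):.1f}x")                  # ~2.1x
print(f"time to ~97% of steady state: {time_to_near_steady_state(t_half)/24:.1f} d")  # ~5.5 days
```

With these inputs the model reproduces the roughly two-fold accumulation and the steady state reached within about one week that are stated above.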
In a small study of two males, sertraline was excreted to similar degrees in urine and feces (40 to 45% each within 9 days). Unchanged sertraline was not detectable in urine, whereas 12 to 14% of unchanged sertraline was present in feces. Pharmacogenomics CYP2C19 and CYP2B6 are thought to be the key cytochrome P450 enzymes involved in the metabolism of sertraline. Relative to CYP2C19 normal (extensive) metabolizers, poor metabolizers have 2.7-fold higher levels of sertraline and intermediate metabolizers have 1.4-fold higher levels. In contrast, CYP2B6 poor metabolizers have 1.6-fold higher levels of sertraline and intermediate metabolizers have 1.2-fold higher levels. History The history of sertraline dates back to the early 1970s, when Pfizer chemist Reinhard Sarges invented a novel series of psychoactive compounds, including lometraline, based on the structures of the neuroleptics thiothixene and pinoxepin. Further work on these compounds led to tametraline, a norepinephrine and weaker dopamine reuptake inhibitor. Development of tametraline was soon stopped because of undesired stimulant effects observed in animals. A few years later, in 1977, pharmacologist Kenneth Koe, after comparing the structural features of a variety of reuptake inhibitors, became interested in the tametraline series. He asked another Pfizer chemist, Willard Welch, to synthesize some previously unexplored tametraline derivatives. Welch generated several potent norepinephrine and triple reuptake inhibitors, but to the surprise of the scientists, one representative of the generally inactive cis-analogs was a serotonin reuptake inhibitor. Welch then prepared stereoisomers of this compound, which were tested in vivo by animal behavioral scientist Albert Weissman. The most potent and selective (+)-isomer was taken into further development and eventually named sertraline. Weissman and Koe recalled that the group did not set out to produce an antidepressant of the SSRI type—in that sense their inquiry was not "very goal driven", and the invention of the sertraline molecule was serendipitous. According to Welch, they worked outside the mainstream at Pfizer, and even "did not have a formal project team". The group had to overcome initial bureaucratic reluctance to pursue sertraline development, as Pfizer was considering licensing an antidepressant candidate from another company. Sertraline was approved by the US Food and Drug Administration (FDA) in 1991 based on the recommendation of the Psychopharmacological Drugs Advisory Committee; it had already become available in the United Kingdom the previous year. The FDA committee achieved a consensus that sertraline was safe and effective for the treatment of major depression. During the discussion, Paul Leber, the director of the FDA Division of Neuropharmacological Drug Products, noted that granting approval was a "tough decision", since the treatment effect on outpatients with depression had been "modest to minimal". Other experts emphasized that the drug's effect on inpatients had not differed from placebo and criticized the poor design of the clinical trials by Pfizer. For example, 40% of participants dropped out of the trials, significantly decreasing their validity. Until 2002, sertraline was only approved for use in adults ages 18 and over; that year, it was approved by the FDA for use in treating children aged 6 or older with severe OCD. 
In 2003, the UK Medicines and Healthcare products Regulatory Agency issued guidance that, apart from fluoxetine (Prozac), SSRIs are not suitable for the treatment of depression in patients under 18. However, sertraline can still be used in the UK for the treatment of OCD in children and adolescents. In 2005, the FDA added a boxed warning concerning pediatric suicidal behavior to all antidepressants, including sertraline. In 2007, labeling was again changed to add a warning regarding suicidal behavior in young adults ages 18 to 24. Society and culture Generic availability The US patent for Zoloft expired in 2006, and sertraline is available in generic form and is marketed under many brand names worldwide. Brand names In the US, Zoloft is marketed by Viatris after Upjohn was spun off from Pfizer. Interest during COVID-19 pandemic Sertraline has been the most sought-after antidepressant worldwide before, during, and after the COVID-19 pandemic, according to Google Trends data. The pandemic has led to an increase in searches for antidepressants, with sertraline, fluoxetine, duloxetine, and venlafaxine showing the highest search volumes, whereas searches of citalopram decreased during the pandemic. Other uses Sertraline may be useful to treat murine Zaire ebolavirus (murine EBOV). The World Health Organization (WHO) considers this a promising area of research. Lass-Flörl et al., 2003 finds it significantly inhibits phospholipase B in the fungal genus Candida, reducing virulence. Sertraline is also a very effective leishmanicide. Specifically, Palit & Ali 2008 find that sertraline kills almost all promastigotes of Leishmania donovani. Sertraline is strongly antibacterial against some species. It is also known to act as a photosensitizer of bacterial surfaces. In combination with antibacterials its photosensitization effect reverses antibacterial resistance. As such sertraline shows promise for food preservation. Lass-Flörl et al., 2003 finds this compound acts as a fungicide against Candida parapsilosis. Its anti-Cp effect is indeed due to its serotonergic activity and not its other effects. Sertraline is a promising trypanocide. It acts at several different life stages and against several strains. Sertraline's trypanocidal mechanism of action is by way of interference with bioenergetics.
Biology and health sciences
Psychiatric drugs
Health
149646
https://en.wikipedia.org/wiki/Hamiltonian%20path%20problem
Hamiltonian path problem
The Hamiltonian path problem is a topic discussed in the fields of complexity theory and graph theory. It asks whether a directed or undirected graph, G, contains a Hamiltonian path, a path that visits every vertex in the graph exactly once. The problem may specify the start and end of the path, in which case the starting vertex s and ending vertex t must be identified. The Hamiltonian cycle problem is similar to the Hamiltonian path problem, except it asks if a given graph contains a Hamiltonian cycle. This problem may also specify the start of the cycle. The Hamiltonian cycle problem is a special case of the travelling salesman problem, obtained by setting the distance between two cities to one if they are adjacent and two otherwise, and verifying that the total distance travelled is equal to n. If so, the route is a Hamiltonian cycle. The Hamiltonian path problem and the Hamiltonian cycle problem belong to the class of NP-complete problems, as shown in Michael Garey and David S. Johnson's book Computers and Intractability: A Guide to the Theory of NP-Completeness and Richard Karp's list of 21 NP-complete problems. Reductions Reduction from the path problem to the cycle problem The problems of finding a Hamiltonian path and a Hamiltonian cycle can be related as follows: In one direction, the Hamiltonian path problem for graph G can be related to the Hamiltonian cycle problem in a graph H obtained from G by adding a new universal vertex x, connecting x to all vertices of G. Thus, finding a Hamiltonian path cannot be significantly slower (in the worst case, as a function of the number of vertices) than finding a Hamiltonian cycle. In the other direction, the Hamiltonian cycle problem for a graph G is equivalent to the Hamiltonian path problem in the graph H obtained by adding terminal (degree-one) vertices s and t attached respectively to a vertex v of G and to v', a cleaved copy of v which gives v' the same neighbourhood as v. The Hamiltonian path in H running through the vertices s, v, ..., v', t then corresponds to the Hamiltonian cycle in G running through v, ..., v' (which closes into a cycle, since v' is a copy of v). Algorithms Brute force To decide if a graph has a Hamiltonian path, one would have to check each possible path in the input graph G. There are n! different sequences of vertices that might be Hamiltonian paths in a given n-vertex graph (and are, in a complete graph), so a brute force search algorithm that tests all possible sequences would be very slow. Partial paths An early exact algorithm for finding a Hamiltonian cycle on a directed graph was the enumerative algorithm of Martello. A search procedure by Frank Rubin divides the edges of the graph into three classes: those that must be in the path, those that cannot be in the path, and undecided. As the search proceeds, a set of decision rules classifies the undecided edges, and determines whether to halt or continue the search. Edges that cannot be in the path can be eliminated, so the search gets continually smaller. The algorithm also divides the graph into components that can be solved separately, greatly reducing the search size. In practice, this algorithm is still the fastest. Dynamic programming Also, a dynamic programming algorithm of Bellman, Held, and Karp can be used to solve the problem in time O(n² · 2ⁿ). In this method, one determines, for each set S of vertices and each vertex v in S, whether there is a path that covers exactly the vertices in S and ends at v.
For each choice of S and v, a path exists for (S, v) if and only if v has a neighbor w such that a path exists for (S − {v}, w), which can be looked up from already-computed information in the dynamic program (a short code sketch of this recurrence is given below). Monte Carlo Andreas Björklund provided an alternative approach using the inclusion–exclusion principle to reduce the problem of counting the number of Hamiltonian cycles to a simpler counting problem, of counting cycle covers, which can be solved by computing certain matrix determinants. Using this method, he showed how to solve the Hamiltonian cycle problem in arbitrary n-vertex graphs by a Monte Carlo algorithm in time O(1.657ⁿ); for bipartite graphs this algorithm can be further improved to time O(1.415ⁿ). Backtracking For graphs of maximum degree three, a careful backtracking search can find a Hamiltonian cycle (if one exists) in time O(1.251ⁿ). Boolean satisfiability Hamiltonian paths can be found using a SAT solver. The Hamiltonian path problem is NP-complete, so it can be reduced in polynomial time (many-one reduced) to the 3-SAT problem and vice versa. As a result, an instance of the Hamiltonian path problem can be solved by encoding it as a 3-SAT formula and handing that formula to a SAT solver. Unconventional methods Because of the difficulty of solving the Hamiltonian path and cycle problems on conventional computers, they have also been studied in unconventional models of computing. For instance, Leonard Adleman showed that the Hamiltonian path problem may be solved using a DNA computer. Exploiting the parallelism inherent in chemical reactions, the problem may be solved using a number of chemical reaction steps linear in the number of vertices of the graph; however, it requires a factorial number of DNA molecules to participate in the reaction. An optical solution to the Hamiltonian problem has been proposed as well. The idea is to create a graph-like structure made from optical cables and beam splitters which are traversed by light in order to construct a solution for the problem. The weak point of this approach is the required amount of energy which is exponential in the number of nodes. Complexity The problem of finding a Hamiltonian cycle or path is in FNP; the analogous decision problem is to test whether a Hamiltonian cycle or path exists. The directed and undirected Hamiltonian cycle problems were two of Karp's 21 NP-complete problems. They remain NP-complete even for special kinds of graphs, such as: bipartite graphs, undirected planar graphs of maximum degree three, directed planar graphs with indegree and outdegree at most two, bridgeless undirected planar 3-regular bipartite graphs, 3-connected 3-regular bipartite graphs, subgraphs of the square grid graph, cubic subgraphs of the square grid graph. However, for some special classes of graphs, the problem can be solved in polynomial time: 4-connected planar graphs are always Hamiltonian by a result due to Tutte, and the computational task of finding a Hamiltonian cycle in these graphs can be carried out in linear time by computing a so-called Tutte path. Tutte proved this result by showing that every 2-connected planar graph contains a Tutte path. Tutte paths in turn can be computed in quadratic time even for 2-connected planar graphs, which may be used to find Hamiltonian cycles and long cycles in generalizations of planar graphs. Putting all of these conditions together, it remains open whether 3-connected 3-regular bipartite planar graphs must always contain a Hamiltonian cycle, in which case the problem restricted to those graphs could not be NP-complete; see Barnette's conjecture.
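To make the dynamic programming approach concrete, the following is a minimal Python sketch of the Bellman–Held–Karp recurrence described above; the function name, the adjacency-set input format, and the example graph are illustrative assumptions rather than a standard interface. It marks dp[mask][v] as true when some path visits exactly the vertices in the bitmask mask and ends at v, giving the stated O(n² · 2ⁿ) running time.

```python
def has_hamiltonian_path(n, adj):
    """Return True if the graph on vertices 0..n-1 has a Hamiltonian path.

    `adj[v]` is the set of neighbours of vertex v.  dp[mask][v] records
    whether some path covers exactly the vertices in `mask` and ends at v.
    """
    full = (1 << n) - 1
    dp = [[False] * n for _ in range(1 << n)]
    for v in range(n):                        # paths consisting of a single vertex
        dp[1 << v][v] = True
    for mask in range(1 << n):                # masks are processed in increasing order
        for v in range(n):
            if not dp[mask][v]:
                continue
            for w in adj[v]:                  # try to extend the path ending at v by w
                if not mask & (1 << w):
                    dp[mask | (1 << w)][w] = True
    return any(dp[full][v] for v in range(n))

# Example: the path graph 0-1-2-3 has a Hamiltonian path, so this prints True.
example = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(has_hamiltonian_path(4, example))
```

The same table can answer the fixed-endpoint version of the problem by checking only dp[full][t], or be adapted to Hamiltonian cycles by additionally requiring an edge from the final vertex back to the chosen start.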
In graphs in which all vertices have odd degree, an argument related to the handshaking lemma shows that the number of Hamiltonian cycles through any fixed edge is always even, so if one Hamiltonian cycle is given, then a second one must also exist. However, finding this second cycle does not seem to be an easy computational task. Papadimitriou defined the complexity class PPA to encapsulate problems such as this one. Polynomial time verifier The Hamiltonian path problem belongs to the class NP, meaning that a proposed solution can be verified in polynomial time. A verifier algorithm for Hamiltonian path will take as input a graph G, starting vertex s, and ending vertex t. Additionally, verifiers require a potential solution known as a certificate, c. For the Hamiltonian path problem, c would consist of a string of vertices where the first vertex is the start of the proposed path and the last is the end. The algorithm will determine if c is a valid Hamiltonian path in G and if so, accept. To decide this, the algorithm first verifies that all of the vertices in G appear exactly once in c. If this check passes, next, the algorithm will ensure that the first vertex in c is equal to s and the last vertex is equal to t. Lastly, to verify that c is a valid path, the algorithm must check that every edge between consecutive vertices in c is indeed an edge in G. If any of these checks fail, the algorithm will reject. Otherwise, it will accept. The algorithm can check in polynomial time if the vertices in G appear once in c. Additionally, it takes polynomial time to check the start and end vertices, as well as the edges between vertices. Therefore, the algorithm is a polynomial time verifier for the Hamiltonian path problem (a minimal sketch of such a verifier is given after the applications below). Applications Networks on chip Networks on chip (NoC) are used in computer systems and processors, serving as communication networks for on-chip components. The performance of NoC is determined by the method it uses to move packets of data across a network. The Hamiltonian path problem can be implemented as a path-based method in multicast routing. Path-based multicast algorithms will determine if there is a Hamiltonian path from the start node to each end node and send packets across the corresponding path. Utilizing this strategy guarantees deadlock- and livelock-free routing, increasing the efficiency of the NoC. Computer graphics Rendering engines are a form of software used in computer graphics to generate images or models from input data. In three-dimensional graphics rendering, a common input to the engine is a polygon mesh. The time it takes to render the object is dependent on the rate at which the input is received, meaning the larger the input the longer the rendering time. For triangle meshes, however, the rendering time can be decreased by up to a factor of three. This is done through "ordering the triangles so that consecutive triangles share a face." That way, only one vertex changes between each consecutive triangle. This ordering exists if the dual graph of the triangular mesh contains a Hamiltonian path.
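As a rough illustration of the verifier described above, the sketch below checks a certificate c (an ordered list of vertices) against an undirected graph; the function name and the edge-set representation are assumptions made for the example, not part of any standard library.

```python
def verify_hamiltonian_path(vertices, edges, s, t, c):
    """Polynomial-time check that the certificate c is a Hamiltonian path
    from s to t in the undirected graph (vertices, edges).

    `edges` is a set of frozensets, one per undirected edge.
    """
    # Every vertex of G must appear exactly once in c.
    if sorted(c) != sorted(vertices):
        return False
    # The path must start at s and end at t.
    if c[0] != s or c[-1] != t:
        return False
    # Every consecutive pair of vertices in c must be an edge of G.
    return all(frozenset((u, v)) in edges for u, v in zip(c, c[1:]))

# Example: in the 4-cycle 0-1-2-3-0, the sequence 0,1,2,3 is a Hamiltonian path.
V = [0, 1, 2, 3]
E = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(verify_hamiltonian_path(V, E, 0, 3, [0, 1, 2, 3]))   # True
print(verify_hamiltonian_path(V, E, 0, 3, [0, 2, 1, 3]))   # False: 0-2 is not an edge
```

Each of the three checks runs in time polynomial in the size of the graph, which is what places the problem in NP.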
Mathematics
Graph theory
null
149697
https://en.wikipedia.org/wiki/Boeing%20737
Boeing 737
The Boeing 737 is an American narrow-body airliner produced by Boeing at its Renton factory in Washington. Developed to supplement the Boeing 727 on short and thin routes, the twinjet retained the 707 fuselage width and six abreast seating but with two underwing Pratt & Whitney JT8D low-bypass turbofan engines. Envisioned in 1964, the initial 737-100 made its first flight in April 1967 and entered service in February 1968 with Lufthansa. The lengthened 737-200 entered service in April 1968, and evolved through four generations, offering several variants for 85 to 215 passengers. The First Generation 737-100/200 variants were powered by Pratt & Whitney JT8D low-bypass turbofan engines and offered seating for 85 to 130 passengers. Launched in 1980 and introduced in 1984, the Second Generation 737 Classic -300/400/500 variants were upgraded with more fuel-efficient CFM56-3 high-bypass turbofans and offered 110 to 168 seats. Introduced in 1997, the Third Generation 737 Next Generation (NG) -600/700/800/900 variants have updated CFM56-7 high-bypass turbofans, a larger wing and an upgraded glass cockpit, and seat 108 to 215 passengers. The latest Fourth Generation 737 MAX -7/8/9/10 variants, powered by improved CFM LEAP-1B high-bypass turbofans and accommodating 138 to 204 people, entered service in 2017. Boeing Business Jet versions have been produced since the 737NG, as well as military models. To date, 16,703 Boeing 737s have been ordered and 11,925 delivered. It was the highest-selling commercial airliner until being surpassed by the competing Airbus A320 family in October 2019, but maintains the record in total deliveries. Initially, its main competitor was the McDonnell Douglas DC-9, followed by its MD-80/MD-90 derivatives. In 2013, the global 737 fleet had completed more than 184 million flights over 264 million block hours since its entry into service. The 737 MAX, designed to compete with the A320neo, was grounded worldwide between March 2019 and November 2020 following two fatal crashes. Development Initial design Boeing had been studying short-haul jet aircraft designs, and saw a need for a new aircraft to supplement the 727 on short and thin routes. Preliminary design work began on May 11, 1964, based on research that indicated a market for a fifty to sixty passenger airliner flying routes of . The initial concept featured podded engines on the aft fuselage, a T-tail as with the 727, and five-abreast seating. Engineer Joe Sutter relocated the engines to the wings, which lightened the structure and simplified the accommodation of six-abreast seating in the fuselage. The engine nacelles were mounted directly to the underside of the wings, without pylons, allowing the landing gear to be shortened, thus lowering the fuselage to improve baggage and passenger access. Relocating the engines from the aft fuselage also allowed the horizontal stabilizer to be attached to the aft fuselage instead of as a T-tail. Many designs for the engine attachment strut were tested in the wind tunnel and the optimal shape for high speed was found to be one which was relatively thick, filling the narrow channels formed between the wing and the top of the nacelle, particularly on the outboard side. At the time, Boeing was far behind its competitors; the SE 210 Caravelle had been in service since 1959, and the BAC One-Eleven (BAC-111), Douglas DC-9, and Fokker F28 were already into flight certification.
To expedite development, Boeing used 60% of the structure and systems of the existing 727, particularly the fuselage, which differs in length only. This 148-inch (3.76 m) wide fuselage cross-section permitted six-abreast seating compared to the rivals' five-abreast. The 727's fuselage was derived from the 707. The proposed wing airfoil sections were based on those of the 707 and 727, but somewhat thicker; altering these sections near the nacelles achieved a substantial drag reduction at high Mach numbers. The engine chosen was the Pratt & Whitney JT8D-1 low-bypass ratio turbofan engine, delivering of thrust. The concept design was presented in October 1964 at the Air Transport Association maintenance and engineering conference by chief project engineer Jack Steiner, where its elaborate high-lift devices raised concerns about maintenance costs and dispatch reliability. Major design developments The original 737 continued to be developed into thirteen passenger, cargo, corporate and military variants. These were later divided into what has become known as the four generations of the Boeing 737 family: The first generation "Original" series: the 737-100 and -200, also the military T-43 and CT-43, launched February 1965. The second generation "Classic" series: 737-300, -400 and -500, launched in 1979. The third generation "Next Generation" series: 737-600, -700, -800 and -900, also the military C-40 and P-8, launched late 1993. The fourth generation 737 MAX series: 737-7, -8, -9 and -10, launched August 2011. Launch The launch decision for the $150 million development was made by the board on February 1, 1965. The sales pitch was big-jet comfort on short-haul routes. Lufthansa became the launch customer on February 19, 1965, with an order for 21 aircraft, worth $67 million, after the airline had been assured by Boeing that the 737 project would not be canceled. Consultation with Lufthansa over the previous winter had resulted in the seating capacity being increased to 100. On April 5, 1965, Boeing announced an order by United Airlines for 40 737s. United wanted a slightly larger capacity than the 737-100, so the fuselage was stretched ahead of and behind the wing. The longer version was designated the 737-200, with the original short-body aircraft becoming the 737-100. Detailed design work continued on both variants simultaneously. Introduction The first -100 was rolled out on January 17, 1967, and took its maiden flight on April 9, 1967, piloted by Brien Wygle and Lew Wallick. After several test flights the Federal Aviation Administration (FAA) issued Type Certificate A16WE certifying the 737-100 for commercial flight on December 15, 1967. It was the first aircraft to have, as part of its initial certification, approval for Category II approaches, which refers to a precision instrument approach and landing with a decision height between . Lufthansa received its first aircraft on December 28, 1967, and on February 10, 1968, became the first non-American airline to launch a new Boeing aircraft. Lufthansa was the only significant customer to purchase the 737-100 and only 30 aircraft were produced. The -200 was rolled out on June 29, 1967, and had its maiden flight on August 8, 1967. It was then certified by the FAA on December 21, 1967. The inaugural flight for United Airlines took place on April 28, 1968, from Chicago to Grand Rapids, Michigan. The lengthened -200 was widely preferred over the -100 by airlines.
The improved version, the 737-200 Advanced, was introduced into service by All Nippon Airways on May 20, 1971. The 737 original model with its variants, known later as the Boeing 737 Original, initially competed with the SE 210 Caravelle and BAC-111 due to their earlier entry into service and later primarily with the McDonnell Douglas DC-9, then its MD-80 derivatives, as the three European short-haul single aisles slowly withdrew from the competition. Sales were low in the early 1970s and, after a peak of 114 deliveries in 1969, only 22 737s were shipped in 1972 with 19 in backlog. The US Air Force saved the program by ordering T-43s, which were modified Boeing 737-200s. African airline orders kept the production running until the 1978 US Airline Deregulation Act, which improved demand for six-abreast narrow-body aircraft. Demand further increased after the aircraft was re-engined with the CFM56. The 737 went on to become the highest-selling commercial aircraft in terms of orders until surpassed by the competing Airbus A320 family in October 2019, but maintains the record in total deliveries. The fuselage is manufactured in Wichita, Kansas, by Boeing spin-off company Spirit AeroSystems, before being moved by rail to Renton. The Renton factory has three assembly lines for the 737 MAX; a fourth is planned to open at the Everett factory in 2024. Generations and variants 737 Original (first generation) The Boeing 737 Original is the name given to the -100/200 and -200 Advanced series of the Boeing 737 family. 737-100 The initial model was the 737-100, the smallest variant of the 737 aircraft family, which was launched in February 1965 and entered service with Lufthansa in February 1968. In 1968, its unit cost was US$3.7M. A total of 30 737-100s were ordered: 22 by Lufthansa, 5 by Malaysia–Singapore Airlines (MSA) and 2 by Avianca, with the final commercial aircraft delivered to MSA on October 31, 1969. This variant was largely overshadowed by its bigger 737-200 sibling, which entered service two months later. The original engine nacelles incorporated thrust reversers taken from the 727 outboard nacelles. They proved to be relatively ineffective and tended to lift the aircraft up off the runway when deployed. This reduced the downforce on the main wheels, thereby reducing the effectiveness of the wheel brakes. In 1968, an improvement to the thrust reversal system was introduced. A 48-inch tailpipe extension was added and new target-style thrust reversers were incorporated. The thrust reverser doors were set 35 degrees away from the vertical to allow the exhaust to be deflected inboard and over the wings and outboard and under the wings. The improvement became standard on all aircraft after March 1969, and a retrofit was provided for active aircraft. Longer nacelle/wing fairings were introduced, and the airflow over the flaps and slats was improved. The production line also introduced an improvement to the flap system, allowing increased use during takeoff and landing. All these changes gave the aircraft a boost to payload and range, and improved short-field performance. Both the first and the last 737-100s built also ended up being the last 737-100s in service. The first aircraft, used by Boeing as a prototype under registration N73700, was later ordered by and delivered to NASA on July 26, 1973; NASA operated it under registration N515NA and retired it after 30 years on September 27, 2003.
The last 737-100 built and also the last operating was originally sold to Malaysia–Singapore Airlines: it was transferred to Air Florida before being used as a VIP aircraft by the Mexican Air Force for 23 years under registration TP-03. TP-03 was broken up in 2006. The first 737-100, NASA 515, is on static display in the Museum of Flight in Seattle and is the last surviving example of the type. 737-200 The 737-200 was a 737-100 with an extended fuselage, launched by an order from United Airlines in 1965 and entered service with the launch customer in April 1968. Its unit cost was US$4.0M in 1968; by 1972 the -200's unit cost was US$5.2M. The 737-200 Advanced is an improved version of the -200, introduced into service by All Nippon Airways on May 20, 1971. After aircraft #135, the 737-200 Advanced has improved aerodynamics, automatic wheel brakes, more powerful engines, more fuel capacity, and hence a 15% increase in payload and range over the original -200s and -100s respectively. The 737-200 Advanced became the production standard in June 1971. Boeing also provided the 737-200C (Combi), which allowed for conversion between passenger and cargo use, and the 737-200QC (Quick Change), which facilitated a rapid conversion between roles. The 1,114th and last delivery of a -200 series aircraft was in August 1988 to Xiamen Airlines. Nineteen 737-200s, designated T-43, were used to train aircraft navigators for the U.S. Air Force. Some were modified into CT-43s, which are used to transport passengers, and one was modified as the NT-43A Radar Test Bed. The first was delivered on July 31, 1973, and the last on July 19, 1974. The Indonesian Air Force ordered three modified 737-200s, designated Boeing 737-2X9 Surveiller. They were used as maritime patrol aircraft (MPA)/transport aircraft, fitted with SLAMMAR (Side-looking Multi-mission Airborne Radar). The aircraft were delivered between May 1982 and October 1983. After 40 years, in March 2008, the final 737-200 aircraft in the U.S. flying scheduled passenger service were phased out, with the last flights of Aloha Airlines. As of 2018, the variant still saw regular service through North American charter operators such as Sierra Pacific Airlines. The short-field capabilities of the 737-200 led Boeing to offer the "Unpaved Strip Kit". This option reduced foreign object damage when operated on remote, unimproved or unpaved runways that competing jetliners could not use safely. The kit included a gravel deflector on the nose gear and a vortex dissipator extending from the front of the engine. Alaska Airlines used the gravel kit for some of its combi aircraft rural operations in Alaska until retiring its -200 fleet in 2007. Air Inuit, Nolinor Aviation and Chrono Aviation still use the gravel kit in Northern Canada. Canadian North also operated a gravel-kitted 737-200 Combi, but this was due to be retired in early 2023. A relatively high number of 737-200s remain in service compared to other early jet airliners, with fifty examples actively flying for thirty carriers. During the 737 MAX groundings, older 737s, including the 200 and Classic series, were in demand for leasing. C-GNLK, one of Nolinor's 737-200s, is the oldest jet airliner in commercial service as of 2024, having entered service 50 years prior in 1974. 737 Classic (second generation) The Boeing 737 Classic is the name given to the 737-300/400/500 series after the introduction of the -600/700/800/900 series of the Boeing 737 family.
Produced from 1984 to 2000, a total of 1,988 Classic series were delivered. Close to the next major upgrade of single aisle aircraft at Airbus and Boeing, the price of jet fuel reached a peak in 2008, when airlines devoted 40% of the retail price of an air ticket to pay for fuel, versus 15% in 2000. Consequently, in that year carriers retired Boeing 737 Classic aircraft to reduce fuel consumption; replacements consisted of more efficient 737 Next Generation or A320 family aircraft. On June 4, 2008, United Airlines announced it would retire all 94 of its Classic 737 aircraft (64 737-300 and 30 737-500 aircraft), replacing them with A320 family jets taken from its Ted subsidiary, which has been shut down. This intensified the competition between the two giant aircraft manufacturers, which has since become a duopoly competition. An optional upgrade with winglets became available for the Classic and NG series. The 737-300 and 737-500 can be retrofitted with Aviation Partners Boeing winglets, and the 737-300 retrofitted with winglets is designated the -300SP (Special Performance). WestJet was to launch the 737-600 with winglets, but dropped them in 2006. 737-300 Development began in 1979 for the 737's first major revision, which was originally introduced as the 'new generation' of the 737. Boeing wanted to increase capacity and range, incorporating improvements to upgrade the aircraft to modern specifications, while also retaining commonality with previous 737 variants. In 1980, preliminary aircraft specifications of the variant, dubbed 737-300, were released at the Farnborough Airshow. This first major upgrade series was later renamed 737 Classic. It competed primarily with the MD-80, its later derivative the MD-90, and the newcomer Airbus A320 family. Boeing engineer Mark Gregoire led a design team, which cooperated with CFM International to select, modify and deploy a new engine and nacelle that would make the 737-300 into a viable aircraft. They chose the CFM56-3B-1 high-bypass turbofan engine to power the aircraft, which yielded significant gains in fuel economy and a reduction in noise, but also posed an engineering challenge, given the low ground clearance of the 737 and the larger diameter of the engine over the original Pratt & Whitney engines. Gregoire's team and CFM solved the problem by reducing the size of the fan (which made the engine slightly less efficient than it had been forecast to be), placing the engine ahead of the wing, and by moving engine accessories to the sides of the engine pod, giving the engine a distinctive non-circular "hamster pouch" air intake. Earlier customers for the CFM56 included the U.S. Air Force with its program to re-engine KC-135 tankers. The passenger capacity of the aircraft was increased to 149 by extending the fuselage around the wing by . The wing incorporated several changes for improved aerodynamics. The wingtip was extended , and the wingspan by . The leading-edge slats and trailing-edge flaps were adjusted. The tailfin was redesigned, the flight deck was improved with the optional EFIS (Electronic Flight Instrumentation System), and the passenger cabin incorporated improvements similar to those developed on the Boeing 757. The prototype -300, the 1,001st 737 built, first flew on February 24, 1984, with pilot Jim McRoberts. It and two production aircraft flew a nine-month-long certification program. The 737-300 retrofitted with Aviation Partners' winglets was designated the -300SP (Special Performance). 
The 737-300 was replaced by the 737-700 of the Next Generation series. 737-400 The 737-400 was launched in 1985 to fill the gap between the 737-300 and the 757-200. In June 1986, Boeing announced the development of the 737-400, which stretched the fuselage a further , increasing the capacity to 188 passengers, and requiring a tail bumper to prevent tailstrikes during take-off and a strengthened wing spar. The -400's first flight was on February 19, 1988, and, after a seven-month/500-hour flight-testing run, it entered service with Piedmont Airlines that October. The last two -400s, which were also the last of the 737 Classic series, were delivered to CSA Czech Airlines on February 28, 2000. The 737-400 was replaced by the 737-800 of the Next Generation series. The 737-400SF was a 737-400 converted to freighter, though it was not a model delivered by Boeing, hence the nickname Special Freighter (SF). Alaska Airlines was the first to convert one of its -400s from regular service to an aircraft with the ability to handle 10 pallets. The airline had also converted five more into fixed combi aircraft for half passenger and freight. These 737-400 Combi aircraft were retired in 2017 and replaced by the 737-700F of the Next Generation series. 737-500 The 737-500 was offered as a modern and direct replacement of the 737-200. It was launched in 1987 by Southwest Airlines, with an order for 20 aircraft, and it flew for the first time on June 30, 1989. A single prototype flew 375 hours for the certification process, and on February 28, 1990, Southwest Airlines received the first delivery. The -500 incorporated the improvements of the 737 Classic series, allowing longer routes with fewer passengers to be more economical than with the 737-300. The fuselage of the 737-500 is longer than that of the 737-200, accommodating up to 140 passengers. Both glass and older-style mechanical cockpit arrangements were available. Using the CFM56-3 engine also gave a 25 percent increase in fuel efficiency over the P&W engines of the older 737-200. The 737-500 has faced accelerated retirement due to its smaller size, after 21 years in service compared to 24 years for the -300. While a few 737-300s were slated for freighter conversion, no demand at all existed for a -500 freighter conversion. The 737-500 was replaced by the 737-600 of the Next Generation series, though the -600 was not as successful in total orders as the -500. 737 NG (third generation) The Boeing 737 Next Generation, abbreviated as 737 Next Gen or 737NG, is the name given to the main models 737-600/700/800/900 series and the extended range -700ER/900ER variants of the Boeing 737 family. It has been produced since 1996 and introduced in 1997, with a total order of 7,097 aircraft, of which 7,031 have been delivered. The primary goal was to re-engine the 737 with the high bypass ratio CFM56-7. By the early 1990s, as the MD-80 slowly withdrew from the competition following the introduction of the MD-90, it had become clear that the new A320 family was a serious threat to Boeing's market share. Airbus won previously loyal 737 customers, such as Lufthansa and United Airlines. In November 1993, to stay in the single aisle competition, Boeing's board of directors authorized the Next Generation program to mainly upgrade the 737 Classic series. In late 1993, after engineering trade studies and discussions with major customers, Boeing proceeded to launch a second derivative of the Boeing 737, the 737 Next Generation (NG) -600/700/800/900 series.
It featured a redesigned wing with a wider wingspan and larger area, greater fuel capacity, longer range and higher MTOWs. It was equipped with CFM56-7 high pressure ratio engines, a glass cockpit, and upgraded interior configurations. The four main models of the series can accommodate seating for 108 to 215 passengers. It was further developed into additional versions such as the corporate Boeing Business Jet (BBJ) and military P-8 Poseidon aircraft. Following the merger between Boeing and McDonnell Douglas in 1997, the primary competitor for the 737NG series remained only the A320 family. 737-600 The 737-600 was the smallest of the Next-Generation models, replacing the 737-500. It had no winglets and was similar in size to the Airbus A318. Launch customer Scandinavian Airlines (SAS) placed its order in March 1995 and took the first delivery in September 1998. A total of 69 aircraft were produced, with the last one delivered to WestJet in 2006. 737-700 The 737-700, the first variant of the Next-Generation, was launched in November 1993 with an order of 63 aircraft. The -700 seats 126 passengers in a two-class or 149 passengers in a one-class layout. Launch customer Southwest Airlines took the first delivery in December 1997. The 737-700 replaced the 737-300 and competes with the Airbus A319. The 737-700C is a convertible version where the seats can be removed to carry cargo instead. There is a large door on the left side of the aircraft. The United States Navy was the launch customer for the 737-700C under the military designation C-40 Clipper. The 737-700ER (Extended Range) was launched on January 31, 2006, and featured the fuselage of the 737-700 and the wings and landing gear of the 737-800. A 737-700ER can typically accommodate 126 passengers in two classes with a range similar to the Airbus A319LR. 737-800 The 737-800 was a stretched version of the 737-700 launched on September 5, 1994, and first flew on July 31, 1997. The -800 seats 162 passengers in a two-class or 189 passengers in a high-density, one-class layout. Launch customer Hapag-Lloyd Flug (now TUIfly) received the first one in April 1998. The 737-800 directly replaced the -400 and aging 727-200 of US airlines. It also filled the gap left by Boeing's decision to discontinue the MD-80 and MD-90 aircraft, following Boeing's merger with McDonnell Douglas. The 737-800 is the most widely used narrowbody aircraft and competes primarily with the Airbus A320. 737-900 The 737-900 was launched in November 1997 and took its first flight on August 3, 2000. It is longer than the -800, but retains the MTOW, fuel capacity, and exit configuration of the -800, essentially trading range for capacity. The exit configuration limits its seat capacity to approximately 177 in a two-class and 189 in a high-density, one-class layout. Launch customer Alaska Airlines received the first delivery in May 2001. The 737-900ER (Extended Range), the newest and largest variant of the 737NG generation, was launched in July 2005, first flew in September 2006, and was first delivered to launch customer Lion Air in April 2007. An additional pair of exit doors and a flat rear pressure bulkhead increased its seating capacity to 180 passengers in a two-class and up to 220 passengers in a one-class configuration. The -900ER partly closed the gap left by the discontinuation of the Boeing 757-200, and directly competes with the Airbus A321.
737 MAX (fourth generation) The Boeing 737 MAX is the name given to the main models 737 MAX 7/8/9/10 series and the higher-density MAX 200 variant of the Boeing 737 family. It is offered in four main variants, typically offering 138 to 230 seats and a range of . The 737 MAX 7, MAX 8 (including the denser, 200-seat MAX 200), and MAX 9 replace the 737-700, -800, and -900 respectively. The further stretched 737 MAX 10 has also been added to the series. The aim was to re-engine the 737NG family using CFM LEAP-1B engines with a very high bypass ratio, to compete with the Airbus A320neo family. On July 20, 2011, Boeing announced plans for a third major upgrade of the 737 series, which would become its fourth generation, to be powered by the CFM LEAP-1B engine, with American Airlines intending to order 100 of these aircraft. On August 30, 2011, Boeing confirmed the launch of the 737 new engine variant, to be called the Boeing 737 MAX. It was based on earlier 737 designs with more efficient LEAP-1B power plants, aerodynamic improvements (most notably split-tip winglets), and airframe modifications. It competes with the Airbus A320neo family that was launched in December 2010 and reached 1,029 orders by June 2011, breaking Boeing's monopoly with American Airlines, which had an order for 130 A320neos that July. The 737 MAX had its first flight on January 29, 2016, and gained FAA certification on March 8, 2017. The first delivery was a MAX 8 on May 6, 2017, to Lion Air's subsidiary Malindo Air, which put it into service on May 22, 2017. To date, the series has received 5,011 firm orders. In March 2019, civil aviation authorities around the world grounded the 737 MAX following two hull loss crashes which caused 346 deaths. On December 16, 2019, Boeing announced that it would suspend production of the 737 MAX from January 2020; production resumed in May 2020. In mid-2020, the FAA and Boeing conducted a series of recertification test flights. On November 18, 2020, the FAA cleared the MAX to return to service. Before each aircraft could fly again, repairs had to be implemented and airlines' training programs approved; passenger flights in the U.S. were expected to resume before the end of 2020. Worldwide, the first airline to resume passenger service was Brazilian low-cost Gol, on December 9, 2020. 737 MAX 7 The 737 MAX 7, a shortened variant of the MAX 8, was originally based on the 737-700, flying farther and accommodating two more seat rows at 18% lower fuel costs per seat. The redesign uses the 737-8 wing and landing gear; a pair of over-wing exits rather than the single-door configuration; a lengthened aft fuselage and a longer forward fuselage; structural re-gauging and strengthening; and systems and interior modifications to accommodate the longer length. Entry into service with launch operator Southwest Airlines was originally expected in January 2019, but certification delays have pushed this back, with Boeing CEO David Calhoun saying certification was possible in the first half of 2025. The 737 MAX 7 replaced the 737-700 and was predicted to carry 12 more passengers and fly farther than the competing Airbus A319neo with 7% lower operating costs per seat.
The 737 MAX 200, a high-density version of the 737 MAX 8, was launched in September 2014 and named for its seating for up to 200 passengers in a single-class layout with slimline seats, requiring an extra pair of exit doors. The MAX 200 would be 20% more cost-efficient per seat, including 5% lower operating costs than the MAX 8, and would be the most efficient narrow-body on the market when entering service. In mid-November 2018, the first MAX 200 of the 135 ordered by Ryanair rolled out, in a 197-seat configuration. It was first flown from Renton on January 13, 2019, and was due to enter service in April 2019. 737 MAX 9 The 737 MAX 9, the stretched variant of the MAX 8, was launched with an order of 201 aircraft in February 2012. It made its roll-out on March 7, 2017, and first flight on April 13, 2017; it was certified by February 2018. The launch customer, Lion Air Group, took the first MAX 9 on March 21, 2018, before entering service with Thai Lion Air. The 737 MAX 9 replaced the 737-900 and competes with the Airbus A321neo. 737 MAX 10 The 737 MAX 10 was proposed as a stretched MAX 9 in mid-2016, enabling seating for 230 in a single class or 189 in a two-class layout, compared to 193 in two-class seating for the A321neo. The modest stretch of fuselage enables the MAX 10 to retain the existing wing and CFM LEAP-1B engine from the MAX 9 with a trailing-link main landing gear as the only major change. The MAX 10 was launched on June 19, 2017, with 240 orders and commitments from more than ten customers. The variant configuration with a predicted 5% lower trip cost and seat cost compared to the A321neo was firmed up by February 2018, and by mid-2018, the critical design review was completed. The MAX 10 has a similar capacity to the A321XLR, but shorter range and much poorer field performance in smaller airports. It was unveiled in Boeing's Renton factory on November 22, 2019, and first flew on June 18, 2021. The MAX 10 is still awaiting certification, with Boeing CEO David Calhoun saying in July 2024 that the MAX 10 could be certified in the first half of 2025. In the late 2010s, Boeing worked on a medium-range Boeing New Midsize Airplane (NMA) with two variants seating 225 or 275 passengers and targeting the same market segment as the 737 MAX 10 and the Airbus A321neo. A Future Small Airplane (FSA) was also touted during this period. The NMA project was set aside in January 2020, as Boeing focused on returning the 737 MAX to service and announced that it would be taking a new approach to future projects.
The eyebrow windows were sometimes removed and plugged, usually during maintenance overhauls, and can be distinguished by the metal plug which differs from the smooth metal in later aircraft that were not originally fitted with the windows. The 737 was designed to sit relatively low to the ground to accommodate the design of smaller airports in the late 1960s, which often lacked jetbridges or motorized belt loaders. The low fuselage allowed passengers to easily board from a mobile stairway or airstairs (which are still available as an option on the 737 MAX) and for luggage to be hand-lifted into the cargo holds. However, the design has proved to be an issue as the 737 has been modernized with larger and more fuel efficient engines. The 737's main landing gear, under the wings at mid-cabin, rotates into wheel wells in the aircraft's belly. The legs are covered by partial doors, and "brush-like" seals aerodynamically smooth (or "fair") the wheels in the wells. The sides of the tires are exposed to the air in flight. "Hub caps" complete the aerodynamic profile of the wheels. It is forbidden to operate without the caps, because they are linked to the ground speed sensor that interfaces with the anti-skid brake system. The dark circles of the tires are clearly visible when a 737 takes off, or is at low altitude. From July 2008, the steel landing gear brakes on new NGs were replaced by Messier-Bugatti carbon brakes, achieving weight savings depending on whether standard or high-capacity brakes were equipped. On a 737-800 this gives a 0.5% improvement in fuel efficiency. 737s are not equipped with fuel dump systems. The original design was too small to require this, and adding a fuel dump system to the later, larger variants would have incurred a large weight penalty. Boeing instead demonstrated an "equivalent level of safety". Depending on the nature of the emergency, 737s either circle to burn off fuel or land overweight. If the latter is the case, the aircraft is inspected by maintenance personnel for damage and then returned to service if none is found. Engines Engines on the 737 Classic series (-300, -400, -500) and Next-Generation series (-600, -700, -800, -900) do not have circular inlets like most aircraft but rather inlets flattened on the lower side, a shape dictated largely by the need to accommodate ever larger engine diameters. The 737 Classic series featured CFM56 high bypass turbofan engines, which were 25% more efficient than, and significantly quieter than, the JT8D low bypass engines used on the 737 Original series (-100 and -200), but also posed an engineering challenge given the low ground clearance of the Boeing 737 family. Boeing and engine supplier CFM International (CFMI) solved the problem by placing the engine ahead of (rather than below) the wing, and by moving engine accessories to the sides (rather than the bottom) of the engine pod, giving the 737 Classic and later generations a distinctive non-circular air intake. The improved, higher pressure ratio CFM56-7 turbofan engine on the 737 Next Generation is 7% more fuel-efficient than the previous CFM56-3 on the 737 Classic with the same bypass ratio. The newest 737 variants, the 737 MAX series, feature LEAP-1B engines from CFMI with a fan diameter. These engines were expected to be 10–12% more efficient than the CFM56-7B engines on the 737 Next Generation series.
Flight systems The 737 uses a hydro-mechanical flight control system, similar to the Boeing 707 and typical of the period in which the 737 was originally designed. Pilot commands are transmitted to hydraulic boosters attached to the control surfaces via steel cables that run through the fuselage and wings, rather than by the electrical fly-by-wire systems found in more recent designs like the Airbus A320 or Boeing 777. The primary flight controls have mechanical backups. In the event of total hydraulic system failure or double engine failure, they will automatically revert to control via servo tab. In this mode, termed manual reversion, the servo tabs aerodynamically control the elevators and ailerons; these servo tabs are in turn controlled by cables running to the control yoke. The pilot's muscle forces alone control the tabs. The 737 Next Generation series introduced a six-screen LCD glass cockpit with modern avionics but designed to retain crew commonality with previous 737 generations. The 737 MAX introduced a cockpit with four 15.1-inch landscape LCD screens manufactured by Rockwell Collins, derived from those of the Boeing 787 Dreamliner. Except for the spoilers, which are fly-by-wire controlled, and all the analog instruments, which became digital, everything else is similar to the cockpits of the previous 737 generations to maintain commonality. Aerodynamics The Original -100 and -200 series were built without wingtip devices, but these were later introduced to improve fuel efficiency. The 737 has evolved four winglet types: the 737-200 Mini-winglet, 737 Classic/NG Blended Winglet, 737 Split Scimitar Winglet, and 737 MAX Advanced Technology Winglet. The 737-200 Mini-winglets are part of the Quiet Wing Corp modification kit that received certification in 2005. Blended winglets have been standard on the 737 NG since 2000 and are available for retrofit on 737 Classic models. These winglets stand approximately tall and are installed at the wing tips. They improve fuel efficiency by up to 5% through lift-induced drag reduction achieved by moderating wingtip vortices. Split Scimitar winglets became available in 2014 for the 737-800, 737-900ER, BBJ2 and BBJ3, and in 2015 for the 737-700, 737-900 and BBJ1. Split Scimitar winglets were developed by Aviation Partners, the same Seattle-based corporation that developed the blended winglets; the Split Scimitar winglets produce up to a 5.5% fuel savings per aircraft compared to 3.3% savings for the blended winglets. Southwest Airlines flew its first flight of a 737-800 with Split Scimitar winglets on April 14, 2014. The next generation 737, the 737 MAX, features an Advanced Technology (AT) Winglet that is produced by Boeing. The Boeing AT Winglet resembles a cross between the Blended Winglet and the Split Scimitar Winglet. An optional Enhanced Short Runway Package was developed for use on short runways. Interior The first generation Original series 737 cabin was replaced for the second generation Classic series with a design based on the Boeing 757 cabin. The Classic cabin was then redesigned once more for the third, Next Generation, 737 with a design based on the Boeing 777 cabin. Boeing later offered the redesigned Sky Interior on the NG. The principal features of the Sky Interior include sculpted sidewalls, redesigned window housings, increased headroom and LED mood lighting, larger pivot-bins based on the 777 and 787 designs and generally more luggage space, and claims to have improved cabin noise levels by 2–4 dB.
The first 737 equipped with the Boeing Sky Interior was delivered to Flydubai in late 2010. Continental Airlines, Alaska Airlines, Malaysia Airlines, and TUIFly have also received Sky Interior-equipped 737s. Other variants 737 AEW&C The Boeing 737 AEW&C is a 737-700IGW roughly similar to the 737-700ER. This is an Airborne Early Warning and Control (AEW&C) version of the 737NG. Australia was the first customer (as Project Wedgetail), followed by Turkey and South Korea. T-43/CT-43A The T-43 was a 737-200 modified for use by the United States Air Force for training navigators, now known as USAF combat systems officers. Informally referred to as the Gator (an abbreviation of "navigator") and "Flying Classroom", nineteen of these aircraft were delivered to the Air Training Command at Mather AFB, California during 1973 and 1974. Two additional aircraft were delivered to the Colorado Air National Guard at Buckley ANGB (later Buckley AFB) and Peterson AFB, Colorado, in direct support of cadet air navigation training at the nearby U.S. Air Force Academy. Two T-43s were later converted to CT-43As, similar to the CT-40A Clipper below, in the early 1990s and transferred to Air Mobility Command and United States Air Forces in Europe, respectively, as executive transports. A third aircraft was also transferred to Air Force Materiel Command for use as a radar test bed aircraft and was redesignated as an NT-43A. The T-43 was retired by the Air Education and Training Command in 2010 after 37 years of service. C-40 Clipper The Boeing C-40 Clipper is a military version of the 737-700C NG. It is used by both the United States Navy and the United States Air Force, and has been ordered by the United States Marine Corps. Technically, only the Navy C-40A variant is named "Clipper", whereas the USAF C-40B/C variants are officially unnamed. P-8 Poseidon The P-8 Poseidon was developed for the United States Navy by Boeing Defense, Space & Security, based on the Next Generation 737-800ERX. The P-8 can be operated in the anti-submarine warfare (ASW), anti-surface warfare (ASUW), and shipping interdiction roles. It is armed with torpedoes, Harpoon anti-ship missiles and other weapons, and is able to drop and monitor sonobuoys, as well as operate in conjunction with other assets such as the Northrop Grumman MQ-4C Triton maritime surveillance unmanned aerial vehicle (UAV). Boeing Business Jet (BBJ) In the late 1980s, Boeing marketed the 77-33 jet, a business jet version of the 737-300. The name was short-lived. After the introduction of the Next Generation series, Boeing introduced the Boeing Business Jet (BBJ) series. The BBJ1 was similar in dimensions to the 737-700 but had additional features, including stronger wings and landing gear from the 737-800, and had increased range over the other 737 models through the use of extra fuel tanks. The first BBJ rolled out on August 11, 1998, and flew for the first time on September 4. On October 11, 1999, Boeing launched the BBJ2. Based on the 737-800, it is longer than the BBJ1, with 25% more cabin space and twice the baggage space, but has slightly reduced range. It is also fitted with auxiliary belly fuel tanks and winglets. The first BBJ2 was delivered on February 28, 2001. Boeing's BBJ3 is based on the 737-900ER. The BBJ3 has of floor space, 35% more interior space, and 89% more luggage space than the BBJ2. It has an auxiliary fuel system, giving it a range of up to , and a head-up display. Boeing completed the first example in August 2008.
This aircraft's cabin is pressurized to a simulated altitude. Boeing Converted Freighter program The Boeing Converted Freighter program (BCF), or the 737-800BCF program, was launched by Boeing in 2016. It converts old 737-800 passenger jets to dedicated freighters. The first 737-800BCF was delivered in 2018 to GECAS and leased to West Atlantic. Boeing signed an agreement with China's YTO Cargo Airlines to provide the airline with 737-800BCFs ahead of the planned program launch. Experimental Four 737 aircraft have been used in Boeing test programs. In 2012, a new 737-800 bound for American Airlines became the first ecoDemonstrator airframe in a program that continues annually into the 2020s. In conjunction with many industry partners, the program aims to reduce the environmental impact of aviation. In 2012 it tested the winglets which would eventually be used in the 737 MAX series. Testing also included a variable area exhaust nozzle, regenerative hydrogen fuel cells for electrical power, and sustainable aviation fuel (SAF). In 2018, one of the 737 MAX 7 prototypes participated in Boeing's Quiet Technology Demonstrator 3 (QTD3) program, in which a NASA engine inlet designed to reduce engine noise was tested over an acoustic array at Moses Lake, Washington. A 737 MAX 9 was used as the 2021 ecoDemonstrator. A new airframe in a special Alaska Airlines livery flew an extensive test program, a major part of which was the use of SAF in blends of up to 50%, including a flight from Seattle to Glasgow, Scotland, to attend the United Nations COP26 Climate Change Conference. Other test areas included a halon-free fire extinguisher (ground testing only), a low-profile anti-collision light, and text-based air traffic control communications. At the end of the testing the aircraft was returned to standard configuration, and was delivered to Alaska Airlines in 2022. During October 2023 a 737 MAX 10 destined for United Airlines flew a series of test flights to compare the emissions of SAF, including the contrails, with those of conventional fuel. The emissions were measured by NASA's Douglas DC-8 Airborne Science Lab, which flew close behind the 737; the 737 wore a special livery as part of a series of special tests named ecoDemonstrator Explorer. Competition The Boeing 737 Classic, Next Generation and MAX series have faced significant competition from the Airbus A320 family, first introduced in 1988. The relatively recent Airbus A220 family now also competes against the smaller capacity end of the 737 variants. The A320 was developed also to compete with the McDonnell Douglas MD-80/90 and 95 series, the 95 later becoming the Boeing 717. As of July 2017, Airbus had a 59.4% market share of the re-engined single aisle market, while Boeing had 40.6%; Boeing had doubts about A320neos over-ordered by new operators and expected to narrow the gap with replacements not already ordered. However, in July 2017, Airbus still had 1,350 more A320neo orders than Boeing had for the 737 MAX. Boeing delivered 8,918 of the 737 family between March 1988 and December 2018, while Airbus delivered 8,605 A320 family aircraft over a similar period since first delivery in early 1988. Operators The five largest operators of the Boeing 737 are Southwest Airlines (815), Ryanair (566), United Airlines (496), American Airlines (363), and Delta Air Lines (240) as of June 2024.
Usage Civilian In 2006, over 4,500 Boeing 737s were operated by more than 500 airlines, flying to 1,200 destinations in 190 countries, and on average 1,250 aircraft were airborne, with two either departing or landing every five seconds. The 737 was the most commonly flown aircraft in 2008, 2009, and 2010. In 2013, over 5,580 Boeing 737s were operated by more than 342 airlines in 111 countries, which represented more than 25% of the worldwide fleet of large jet airliners. The 737 had carried over 16.8 billion passengers (more than twice the world population of 7.1 billion at the time) over 119 billion miles (192 billion km) with more than 184 million flights or 264 million hours in the air. In 2016, there were 6,512 Boeing 737 airliners in service (5,567 737NGs plus 945 737-200s and 737 Classics), more than the 6,510 of the Airbus A320 family, while in 2017, there were 6,858 737s in service (5,968 737NGs plus 890 737-200s and classics), fewer than the 6,965 of the A320 family. By 2018, over 7,500 Boeing 737s were in service and on average 2,800 aircraft were airborne, with two either departing or landing every three seconds, carrying around three million passengers daily. At the time, the global 737 fleet had carried over 22 billion passengers since its introduction. More recently, there were 9,315 Boeing 737s in service, slightly fewer than the 9,353 of the A320 family, as more 737s were already out of service. Military Many countries operate the 737 passenger, BBJ, and cargo variants in government or military applications. Orders and deliveries Orders The 737 had the highest cumulative orders for any airliner until surpassed by the A320 family in October 2019. In that year, 737 orders dropped by 90%, as 737 MAX orders dried up after the March grounding. The 737 MAX backlog fell by 182, mainly due to the Jet Airways bankruptcy; a drop in Boeing's airliner backlog was a first in at least the previous 30 years. In total, 16,703 units of the Boeing 737 family had been ordered, with 4,778 orders pending, or 4,303 when including "additional criteria for recognizing contracted backlog with customers beyond the existence of a firm contract" (ASC 606 Adjustment). Deliveries Boeing delivered the 5,000th 737 to Southwest Airlines on February 13, 2006, the 6,000th 737 to Norwegian Air Shuttle in April 2009, the 7,000th 737 to Flydubai on December 16, 2011, the 8,000th 737 to United Airlines on April 16, 2014, and the 9,000th 737 to China United Airlines in April 2016. The 10,000th 737 was ordered in July 2012, rolled out on March 13, 2018, and was to be delivered to Southwest Airlines; the backlog at the time stood at over 4,600 aircraft. In total, 11,925 units of the Boeing 737 family had been delivered, compared with 11,865 of the competing A320 family, making the 737 the most-delivered jetliner. Model summary Accidents and incidents To date, the Boeing 737 family has been involved in 529 aviation accidents and incidents, including 215 hull loss accidents out of 234 hull losses, resulting in a total of 5,779 fatalities. A Boeing analysis of commercial jet airplane accidents between 1959 and 2013 found that the hull loss rate for the Original series was 1.75 per million departures, for the Classic series 0.54, and for the Next Generation series 0.27.
As of 2023, the analysis showed that the hull loss rate for the Original series was 1.78 (0.87 fatal hull loss rate), for the Classic series 0.81 (0.26 fatal hull loss rate), for the Next Generation series 0.18 (0.04 fatal hull loss rate), and for the MAX series 1.48 (1.48 fatal hull loss rate) per million departures. During the 1990s, a series of rudder issues on series -200 and -300 aircraft resulted in multiple incidents. In two total loss accidents, United Airlines Flight 585 (a -200 series) and USAir Flight 427, (a -300), the pilots lost control of the aircraft following a sudden and unexpected deflection of the rudder, killing everyone aboard, a total of 157 people. Similar rudder issues led to a temporary loss of control on at least five other 737 flights before the problem was ultimately identified. The National Transportation Safety Board determined that the accidents and incidents were the result of a design flaw that could result in an uncommanded movement of the aircraft's rudder. As a result of the NTSB's findings, the Federal Aviation Administration ordered that the rudder servo valves be replaced on all 737s and mandated new training protocols for pilots to handle an unexpected movement of control surfaces. Following the crashes of two 737 MAX 8 aircraft, Lion Air Flight 610 in October 2018 and Ethiopian Airlines Flight 302 in March 2019, which caused 346 deaths, civil aviation authorities around the world grounded the 737 MAX series. On December 16, 2019, Boeing announced that it would suspend production of the 737 MAX from January 2020. Production of the MAX series resumed on May 27, 2020. Aircraft on display Owing to the 737's long production history and popularity, many older 737s have found use in museums after reaching the end of useful service. 19437/1: 737-130 registered N515NA on static display at the Museum of Flight in Seattle, Washington. It was the first 737 built and is painted in NASA markings. 19047/14: 737-222 registered N9009U preserved by Southern Illinois University Carbondale at Southern Illinois Airport. 20213/160: 737-201 registered N213US forward fuselage on static display at the Museum of Flight in Seattle, Washington, in USAir livery. 20561/292: 737-281 registered LV-WTX on static display at the National Museum of Aeronautics in Morón, Buenos Aires. 20562/293: 737-281 registered CC-CSK fuselage preserved at Motel Bahía in Concón, Chile. 21262/470: 737-2H4 registered C-GWJT on static display at the British Columbia Institute of Technology Aerospace Technology Campus in Richmond, British Columbia. It is used for ground instructional training. The aircraft was donated by WestJet and bears its livery. 21340/499: 737-2H4 registered N29SW on static display at the Kansas Aviation Museum in Wichita, Kansas. It was formerly operated by Ryan International Airlines and prior to that Southwest Airlines. 21712/557: 737-275 registered C-GIPW preserved in operational condition at Alberta Flying Heritage Museum in Villeneuve, Alberta. Painted in Pacific Western Airlines livery. 22578/767: 737-290C registered N740AS on static display at the Alaska Aviation Heritage Museum in Anchorage, Alaska. It was formerly operated by Alaska Airlines. 22826/878: 737-2H4 registered YV1361 preserved at a hotel in Santiago, Chile. It was formerly operated by Avior Airlines. 23059/980: 737-2Z6 registered 22–222 on static display at the Royal Thai Air Force Museum in Bangkok. 22940/1037: 737-3H4 registered N300SW on static display at the Frontiers of Flight Museum in Dallas, Texas. 
It was the first such aircraft delivered to Southwest Airlines in November 1984. 23257/1124: 737-301 registered PK-AWU on static display at ITE College Central in Singapore. 23472/1194: 737-219 registered ZS-SMD on static display at the South African Airways Museum in Germiston, Gauteng. 23660/1294: 737-377 registered G-CELS (nickname Elsie) on static display at Norwich International Aviation Academy, as an aircraft maintenance trainer. It is painted in the silver & red Jet2.com color scheme, without the logo branding. 27286/2528: 737-3Q8 registered N759BA on static display at the Pima Air & Space Museum in Tucson, Arizona. It is painted in China Southern Airlines markings, and was previously operated by the airline as B-2921. Specifications
Technology
Specific aircraft_2
null
149708
https://en.wikipedia.org/wiki/IBM%207030%20Stretch
IBM 7030 Stretch
The IBM 7030, also known as Stretch, was IBM's first transistorized supercomputer. It was the fastest computer in the world from 1961 until the first CDC 6600 became operational in 1964. Originally designed to meet a requirement formulated by Edward Teller at Lawrence Livermore National Laboratory, the first example was delivered to Los Alamos National Laboratory in 1961, and a second customized version, the IBM 7950 Harvest, to the National Security Agency in 1962. The Stretch at the Atomic Weapons Research Establishment at Aldermaston, England was heavily used by researchers there and at AERE Harwell, but only after the development of the S2 Fortran compiler which was the first to add dynamic arrays, and which was later ported to the Ferranti Atlas of Atlas Computer Laboratory at Chilton. The 7030 was much slower than expected and failed to meet its aggressive performance goals. IBM was forced to drop its price from $13.5 million to only $7.78 million and withdrew the 7030 from sales to customers beyond those having already negotiated contracts. PC World magazine named Stretch one of the biggest project management failures in IT history. Within IBM, being eclipsed by the smaller Control Data Corporation seemed hard to accept. The project lead, Stephen W. Dunwell, was initially made a scapegoat for his role in the "failure", but as the success of the IBM System/360 became obvious, he was given an official apology and, in 1966 was made an IBM Fellow. In spite of Stretch's failure to meet its own performance goals, it served as the basis for many of the design features of the successful IBM System/360, which was announced in 1964 and first shipped in 1965. Development history In early 1955, Dr. Edward Teller of the University of California Radiation Laboratory wanted a new scientific computing system for three-dimensional hydrodynamic calculations. Proposals were requested from IBM and UNIVAC for this new system, to be called Livermore Automatic Reaction Calculator or LARC. According to IBM executive Cuthbert Hurd, such a system would cost roughly $2.5 million and would run at one to two MIPS. Delivery was to be two to three years after the contract was signed. At IBM, a small team at Poughkeepsie including John Griffith and Gene Amdahl worked on the design proposal. Just after they finished and were about to present the proposal, Ralph Palmer stopped them and said, "It's a mistake." The proposed design would have been built with either point-contact transistors or surface-barrier transistors, both likely to be soon outperformed by the then newly invented diffusion transistor. IBM returned to Livermore and stated that they were withdrawing from the contract, and instead proposed a dramatically better system, "We are not going to build that machine for you; we want to build something better! We do not know precisely what it will take but we think it will be another million dollars and another year, and we do not know how fast it will run but we would like to shoot for ten million instructions per second." Livermore was not impressed, and in May 1955 they announced that UNIVAC had won the LARC contract, now called the Livermore Automatic Research Computer. LARC would eventually be delivered in June 1960. In September 1955, fearing that Los Alamos National Laboratory might also order a LARC, IBM submitted a preliminary proposal for a high-performance binary computer based on the improved version of the design that Livermore had rejected, which they received with interest. 
In January 1956, Project Stretch was formally initiated. In November 1956, IBM won the contract with the aggressive performance goal of a "speed at least 100 times the IBM 704" (i.e. 4 MIPS). Delivery was slated for 1960. During design, it proved necessary to reduce the clock speeds, making it clear that Stretch could not meet its aggressive performance goals, but estimates of performance ranged from 60 to 100 times the IBM 704. In 1960, the price of $13.5 million was set for the IBM 7030. In 1961, actual benchmarks indicated that the performance of the IBM 7030 was only about 30 times the IBM 704 (i.e. 1.2 MIPS), causing considerable embarrassment for IBM. In May 1961, Thomas J. Watson Jr. announced a price cut of all 7030s under negotiation to $7.78 million and immediate withdrawal of the product from further sales. Its floating-point addition time is 1.38–1.50 microseconds, multiplication time is 2.48–2.70 microseconds, and division time is 9.00–9.90 microseconds. Technical impact While the IBM 7030 was not considered successful, it spawned many technologies incorporated in future machines that were highly successful. The Standard Modular System (SMS) transistor logic was the basis for the IBM 7090 line of scientific computers, the IBM 7070 and 7080 business computers, the IBM 7040 and IBM 1400 lines, and the IBM 1620 small scientific computer; the 7030 used about transistors. The IBM 7302 Model I Core Storage units were also used in the IBM 7090, IBM 7070 and IBM 7080. Multiprogramming, memory protection, generalized interrupts, the eight-bit byte for I/O were all concepts later incorporated in the IBM System/360 line of computers as well as most later central processing units (CPU). Stephen Dunwell, the project manager who became a scapegoat when Stretch failed commercially, pointed out soon after the phenomenally successful 1964 launch of System/360 that most of its core concepts were pioneered by Stretch. By 1966, he had received an apology and been made an IBM Fellow, a high honor that carried with it resources and authority to pursue one's desired research. Instruction pipelining, prefetch and decoding, and memory interleaving were used in later supercomputer designs such as the IBM System/360 Models 91, 95 and 195, and the IBM 3090 series as well as computers from other manufacturers. , these techniques are still used in most advanced microprocessors, starting with the 1990s generation that included the Intel Pentium and the Motorola/IBM PowerPC, as well as in many embedded microprocessors and microcontrollers from various manufacturers. Hardware implementation The 7030 CPU uses emitter-coupled logic (originally called current-steering logic) on 18 types of Standard Modular System cards. It uses 4,025 double cards (as shown) and 18,747 single cards, holding 169,100 transistors, requiring a total of 21 kW power. It uses high-speed NPN and PNP germanium drift transistors, with cut-off frequency over 100 MHz, and using ~50 mW each. Some third level circuits use a third voltage level. Each logic level has a delay of about 20 ns. To gain speed in critical areas emitter-follower logic is used to reduce the delay to about 10 ns. It uses the same core memory as the IBM 7090. Installations Los Alamos Scientific Laboratory (LASL) in April 1961, accepted in May 1961, and used until June 21, 1971. Lawrence Livermore National Laboratory, Livermore, California delivered November 1961. U.S. 
National Security Agency in February 1962 as the main CPU of the IBM 7950 Harvest system, used until 1976, when the IBM 7955 Tractor tape system developed problems due to worn cams that could not be replaced. Atomic Weapons Establishment, Aldermaston, England, delivered February 1962 U.S. Weather Bureau Washington D.C., delivered June/July 1962. MITRE Corporation, delivered December 1962. and used until August 1971. In the spring of 1972, it was sold to Brigham Young University, where it was used by the physics department until scrapped in 1982. U.S. Navy Dahlgren Naval Proving Ground, delivered Sep/Oct 1962. Commissariat à l'énergie atomique, France, delivered November 1963. IBM. The Lawrence Livermore Laboratory's IBM 7030 (except for its core memory) and portions of the MITRE Corporation/Brigham Young University IBM 7030 now reside in the Computer History Museum collection, in Mountain View, California. Architecture Data formats Fixed-point numbers are variable in length, stored in either binary (1 to 64 bits) or decimal (1 to 16 digits) and either unsigned format or sign/magnitude format. Fields may straddle word boundaries. In decimal format, digits are variable length bytes (four to eight bits). Floating-point numbers have a 1-bit exponent flag, a 10-bit exponent, a 1-bit exponent sign, a 48-bit magnitude, and a 4-bit sign byte in sign/magnitude format. Alphanumeric characters are variable length and can use any character code of 8 bits or less. Bytes are variable length (one to eight bits). Instruction format Instructions are either 32-bit or 64-bit. Registers The registers overlay the first 32 addresses of memory as shown. The accumulator and index registers operate in sign-and-magnitude format. Memory Main memory is 16K to 256K 64-bit binary words, in banks of 16K. The memory was immersion oil-heated/cooled to stabilize its operating characteristics. Software STRETCH Assembly Program (STRAP) MCP (not to be confused with the Burroughs MCP) COLASL and IVY programming languages FORTRAN programming language SOS (Stretch Operating System) Written at the BYU Scientific Computing Center as an upgrade to MCP, along with an updated variant of FORTRAN.
Technology
Early computers
null
149848
https://en.wikipedia.org/wiki/Combinatory%20logic
Combinatory logic
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions—and to remove any mention of variables—particularly in predicate logic. A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments. In mathematics Combinatory logic was originally intended as a 'pre-logic' that would clarify the role of quantified variables in logic, essentially by eliminating them. Another way of eliminating quantified variables is Quine's predicate functor logic. While the expressive power of combinatory logic typically exceeds that of first-order logic, the expressive power of predicate functor logic is identical to that of first order logic (Quine 1960, 1966, 1976). The original inventor of combinatory logic, Moses Schönfinkel, published nothing on combinatory logic after his original 1924 paper. Haskell Curry rediscovered the combinators while working as an instructor at Princeton University in late 1927. In the late 1930s, Alonzo Church and his students at Princeton invented a rival formalism for functional abstraction, the lambda calculus, which proved more popular than combinatory logic. The upshot of these historical contingencies was that until theoretical computer science began taking an interest in combinatory logic in the 1960s and 1970s, nearly all work on the subject was by Haskell Curry and his students, or by Robert Feys in Belgium. Curry and Feys (1958), and Curry et al. (1972) survey the early history of combinatory logic. For a more modern treatment of combinatory logic and the lambda calculus together, see the book by Barendregt, which reviews the models Dana Scott devised for combinatory logic in the 1960s and 1970s. In computing In computer science, combinatory logic is used as a simplified model of computation, used in computability theory and proof theory. Despite its simplicity, combinatory logic captures many essential features of computation. Combinatory logic can be viewed as a variant of the lambda calculus, in which lambda expressions (representing functional abstraction) are replaced by a limited set of combinators, primitive functions without free variables. It is easy to transform lambda expressions into combinator expressions, and combinator reduction is much simpler than lambda reduction. Hence combinatory logic has been used to model some non-strict functional programming languages and hardware. The purest form of this view is the programming language Unlambda, whose sole primitives are the S and K combinators augmented with character input/output. Although not a practical programming language, Unlambda is of some theoretical interest. Combinatory logic can be given a variety of interpretations. Many early papers by Curry showed how to translate axiom sets for conventional logic into combinatory logic equations. Dana Scott in the 1960s and 1970s showed how to marry model theory and combinatory logic. 
Summary of lambda calculus Lambda calculus is concerned with objects called lambda-terms, which can be represented by the following three forms of strings: where is a variable name drawn from a predefined infinite set of variable names, and and are lambda-terms. Terms of the form are called abstractions. The variable v is called the formal parameter of the abstraction, and is the body of the abstraction. The term represents the function which, applied to an argument, binds the formal parameter v to the argument and then computes the resulting value of — that is, it returns , with every occurrence of v replaced by the argument. Terms of the form are called applications. Applications model function invocation or execution: the function represented by is to be invoked, with as its argument, and the result is computed. If (sometimes called the applicand) is an abstraction, the term may be reduced: , the argument, may be substituted into the body of in place of the formal parameter of , and the result is a new lambda term which is equivalent to the old one. If a lambda term contains no subterms of the form then it cannot be reduced, and is said to be in normal form. The expression represents the result of taking the term and replacing all free occurrences of in it with . Thus we write By convention, we take as shorthand for (i.e., application is left associative). The motivation for this definition of reduction is that it captures the essential behavior of all mathematical functions. For example, consider the function that computes the square of a number. We might write The square of x is (Using "" to indicate multiplication.) x here is the formal parameter of the function. To evaluate the square for a particular argument, say 3, we insert it into the definition in place of the formal parameter: The square of 3 is To evaluate the resulting expression , we would have to resort to our knowledge of multiplication and the number 3. Since any computation is simply a composition of the evaluation of suitable functions on suitable primitive arguments, this simple substitution principle suffices to capture the essential mechanism of computation. Moreover, in lambda calculus, notions such as '3' and '' can be represented without any need for externally defined primitive operators or constants. It is possible to identify terms in lambda calculus, which, when suitably interpreted, behave like the number 3 and like the multiplication operator, q.v. Church encoding. Lambda calculus is known to be computationally equivalent in power to many other plausible models for computation (including Turing machines); that is, any calculation that can be accomplished in any of these other models can be expressed in lambda calculus, and vice versa. According to the Church–Turing thesis, both models can express any possible computation. It is perhaps surprising that lambda-calculus can represent any conceivable computation using only the simple notions of function abstraction and application based on simple textual substitution of terms for variables. But even more remarkable is that abstraction is not even required. Combinatory logic is a model of computation equivalent to lambda calculus, but without abstraction. The advantage of this is that evaluating expressions in lambda calculus is quite complicated because the semantics of substitution must be specified with great care to avoid variable capture problems. 
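The squaring example just described can be written out explicitly. The following is an illustrative sketch in conventional lambda-calculus notation (the original formulas were not preserved in this text), writing * for multiplication and E[v := a] for substitution:

```latex
% The squaring function, and its application to 3 reduced by substitution:
\mathrm{square} \;=\; \lambda x.\, x * x
(\lambda x.\, x * x)\; 3 \;\to_{\beta}\; (x * x)[x := 3] \;=\; 3 * 3 \;=\; 9
```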
In contrast, evaluating expressions in combinatory logic is much simpler, because there is no notion of substitution. Combinatory calculi Since abstraction is the only way to manufacture functions in the lambda calculus, something must replace it in the combinatory calculus. Instead of abstraction, combinatory calculus provides a limited set of primitive functions out of which other functions may be built. Combinatory terms A combinatory term has one of the following forms: The primitive functions are combinators, or functions that, when seen as lambda terms, contain no free variables. To shorten the notations, a general convention is that , or even , denotes the term . This is the same general convention (left-associativity) as for multiple application in lambda calculus. Reduction in combinatory logic In combinatory logic, each primitive combinator comes with a reduction rule of the form where E is a term mentioning only variables from the set . It is in this way that primitive combinators behave as functions. Examples of combinators The simplest example of a combinator is I, the identity combinator, defined by (I x) = x for all terms x. Another simple combinator is K, which manufactures constant functions: (K x) is the function which, for any argument, returns x, so we say ((K x) y) = x for all terms x and y. Or, following the convention for multiple application, (K x y) = x A third combinator is S, which is a generalized version of application: (S x y z) = (x z (y z)) S applies x to y after first substituting z into each of them. Or put another way, x is applied to y inside the environment z. Given S and K, I itself is unnecessary, since it can be built from the other two: ((S K K) x) = (S K K x) = (K x (K x)) = x for any term x. Note that although ((S K K) x) = (I x) for any x, (S K K) itself is not equal to I. We say the terms are extensionally equal. Extensional equality captures the mathematical notion of the equality of functions: that two functions are equal if they always produce the same results for the same arguments. In contrast, the terms themselves, together with the reduction of primitive combinators, capture the notion of intensional equality of functions: that two functions are equal only if they have identical implementations up to the expansion of primitive combinators. There are many ways to implement an identity function; (S K K) and I are among these ways. (S K S) is yet another. We will use the word equivalent to indicate extensional equality, reserving equal for identical combinatorial terms. A more interesting combinator is the fixed point combinator or Y combinator, which can be used to implement recursion. Completeness of the S-K basis S and K can be composed to produce combinators that are extensionally equal to any lambda term, and therefore, by Church's thesis, to any computable function whatsoever. The proof is to present a transformation, , which converts an arbitrary lambda term into an equivalent combinator. may be defined as follows: (if x does not occur free in E) (if x occurs free in E) (if x occurs free in E or E) Note that T[ ] as given is not a well-typed mathematical function, but rather a term rewriter: Although it eventually yields a combinator, the transformation may generate intermediary expressions that are neither lambda terms nor combinators, via rule (5). This process is also known as abstraction elimination. 
This definition is exhaustive: any lambda expression will be subject to exactly one of these rules (see Summary of lambda calculus above). It is related to the process of bracket abstraction, which takes an expression E built from variables and application and produces a combinator expression [x]E in which the variable x is not free, such that [x]E x = E holds. A very simple algorithm for bracket abstraction is defined by induction on the structure of expressions as follows: [x]y := K y [x]x := I [x](E E) := S([x]E)([x]E) Bracket abstraction induces a translation from lambda terms to combinator expressions, by interpreting lambda-abstractions using the bracket abstraction algorithm. Conversion of a lambda term to an equivalent combinatorial term For example, we will convert the lambda term λx.λy.(y x) to a combinatorial term: T[λx.λy.(y x)] = Tλx.Tλy.(y x) (by 5) = T[λx.(S T[λy.y] T[λy.x])] (by 6) = T[λx.(S I T[λy.x])] (by 4) = T[λx.(S I (K T[x]))] (by 3) = T[λx.(S I (K x))] (by 1) = (S T[λx.(S I)] T[λx.(K x)]) (by 6) = (S (K (S I)) T[λx.(K x)]) (by 3) = (S (K (S I)) (S T[λx.K] T[λx.x])) (by 6) = (S (K (S I)) (S (K K) T[λx.x])) (by 3) = (S (K (S I)) (S (K K) I)) (by 4) If we apply this combinatorial term to any two terms x and y (by feeding them in a queue-like fashion into the combinator 'from the right'), it reduces as follows: (S (K (S I)) (S (K K) I) x y) = (K (S I) x (S (K K) I x) y) = (S I (S (K K) I x) y) = (I y (S (K K) I x y)) = (y (S (K K) I x y)) = (y (K K x (I x) y)) = (y (K (I x) y)) = (y (I x)) = (y x) The combinatory representation, (S (K (S I)) (S (K K) I)) is much longer than the representation as a lambda term, λx.λy.(y x). This is typical. In general, the T[ ] construction may expand a lambda term of length n to a combinatorial term of length Θ(n3). Explanation of the T[ ] transformation The T[ ] transformation is motivated by a desire to eliminate abstraction. Two special cases, rules 3 and 4, are trivial: λx.x is clearly equivalent to I, and λx.E is clearly equivalent to (K T[E]) if x does not appear free in E. The first two rules are also simple: Variables convert to themselves, and applications, which are allowed in combinatory terms, are converted to combinators simply by converting the applicand and the argument to combinators. It is rules 5 and 6 that are of interest. Rule 5 simply says that to convert a complex abstraction to a combinator, we must first convert its body to a combinator, and then eliminate the abstraction. Rule 6 actually eliminates the abstraction. λx.(E E) is a function which takes an argument, say a, and substitutes it into the lambda term (E E) in place of x, yielding (E E)[x : = a]. But substituting a into (E E) in place of x is just the same as substituting it into both E and E, so (E E)[x := a] = (E[x := a] E[x := a]) (λx.(E E) a) = ((λx.E a) (λx.E a)) = (S λx.E λx.E a) = ((S λx.E λx.E) a) By extensional equality, λx.(E E) = (S λx.E λx.E) Therefore, to find a combinator equivalent to λx.(E E), it is sufficient to find a combinator equivalent to (S λx.E λx.E), and (S T[λx.E] T[λx.E]) evidently fits the bill. E and E each contain strictly fewer applications than (E E), so the recursion must terminate in a lambda term with no applications at all—either a variable, or a term of the form λx.E. 
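The reduction rules for S, K and I and the simple bracket-abstraction algorithm above are small enough to execute directly. The following Python sketch is illustrative only: the term representation, function names and test terms are choices made here, not part of the article. It checks that (S K K) behaves extensionally like I, and that the T[ ] transformation converts λx.λy.(y x) into (S (K (S I)) (S (K K) I)) as in the worked example.

```python
# Illustrative sketch (not from the article): S, K, I as curried Python functions,
# plus an abstraction-elimination routine following the T[ ] rules described above.

# --- The reduction rules of the primitive combinators, as functions ---
I = lambda x: x                                   # (I x) = x
K = lambda x: lambda y: x                         # (K x y) = x
S = lambda x: lambda y: lambda z: x(z)(y(z))      # (S x y z) = (x z (y z))

# Extensional check that (S K K) behaves like I on a sample argument:
assert S(K)(K)("a") == "a" == I("a")

# --- Lambda terms: variables are strings, ('app', e1, e2) and ('lam', v, e) ---
def free_in(v, t):
    if isinstance(t, str):
        return t == v
    if t[0] == 'app':
        return free_in(v, t[1]) or free_in(v, t[2])
    return t[1] != v and free_in(v, t[2])          # 'lam' case

def T(t):
    """Abstraction elimination: convert a lambda term to an S/K/I combinator term."""
    if isinstance(t, str):
        return t                                          # rule 1: variables
    if t[0] == 'app':
        return ('app', T(t[1]), T(t[2]))                  # rule 2: applications
    v, body = t[1], t[2]                                  # t is ('lam', v, body)
    if not free_in(v, body):
        return ('app', 'K', T(body))                      # rule 3: (K T[E])
    if body == v:
        return 'I'                                        # rule 4: identity
    if body[0] == 'lam':
        return T(('lam', v, T(body)))                     # rule 5: eliminate inner abstraction first
    return ('app', ('app', 'S', T(('lam', v, body[1]))),  # rule 6: (S T[λv.E1] T[λv.E2])
                   T(('lam', v, body[2])))

# λx.λy.(y x) converts to (S (K (S I)) (S (K K) I)), matching the worked example:
term = ('lam', 'x', ('lam', 'y', ('app', 'y', 'x')))
expected = ('app', ('app', 'S', ('app', 'K', ('app', 'S', 'I'))),
            ('app', ('app', 'S', ('app', 'K', 'K')), 'I'))
assert T(term) == expected
```

Rule 5 converts the innermost abstraction first, so the recursion only ever has to eliminate a single variable at a time.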
Simplifications of the transformation η-reduction The combinators generated by the T[ ] transformation can be made smaller if we take into account the η-reduction rule: T[λx.(E x)] = T[E] (if x is not free in E) λx.(E x) is the function which takes an argument, x, and applies the function E to it; this is extensionally equal to the function E itself. It is therefore sufficient to convert E to combinatorial form. Taking this simplification into account, the example above becomes: T[λx.λy.(y x)] = ... = (S (K (S I)) T[λx.(K x)]) = (S (K (S I)) K) (by η-reduction) This combinator is equivalent to the earlier, longer one: (S (K (S I)) K x y) = (K (S I) x (K x) y) = (S I (K x) y) = (I y (K x y)) = (y (K x y)) = (y x) Similarly, the original version of the T[ ] transformation transformed the identity function λf.λx.(f x) into (S (S (K S) (S (K K) I)) (K I)). With the η-reduction rule, λf.λx.(f x) is transformed into I. One-point basis There are one-point bases from which every combinator can be composed extensionally equal to any lambda term. The simplest example of such a basis is {X} where: X ≡ λx.((xS)K) It is not difficult to verify that: X (X (X X)) =β K and X (X (X (X X))) =β S. Since {K, S} is a basis, it follows that {X} is a basis too. The Iota programming language uses X as its sole combinator. Another simple example of a one-point basis is: X' ≡ λx.(x K S K) with (X' X') X' =β K and X' (X' X') =β S In fact, there exist infinitely many such bases. Combinators B, C In addition to S and K, included two combinators which are now called B and C, with the following reductions: (C f g x) = ((f x) g) (B f g x) = (f (g x)) He also explains how they in turn can be expressed using only S and K: B = (S (K S) K) C = (S (S (K (S (K S) K)) S) (K K)) These combinators are extremely useful when translating predicate logic or lambda calculus into combinator expressions. They were also used by Curry, and much later by David Turner, whose name has been associated with their computational use. Using them, we can extend the rules for the transformation as follows: (if x is not free in E) (if x is free in E) (if x is free in both E and E) (if x is free in E but not E) (if x is free in E but not E) Using B and C combinators, the transformation of λx.λy.(y x) looks like this: (by rule 7) (η-reduction) (traditional canonical notation: ) (traditional canonical notation: ) And indeed, (C I x y) does reduce to (y x): (C I x y) = (I y x) = (y x) The motivation here is that B and C are limited versions of S. Whereas S takes a value and substitutes it into both the applicand and its argument before performing the application, C performs the substitution only in the applicand, and B only in the argument. The modern names for the combinators come from Haskell Curry's doctoral thesis of 1930 (see B, C, K, W System). In Schönfinkel's original paper, what we now call S, K, I, B and C were called S, C, I, Z, and T respectively. The reduction in combinator size that results from the new transformation rules can also be achieved without introducing B and C, as demonstrated in Section 3.2 of . CLK versus CLI calculus A distinction must be made between the CLK as described in this article and the CLI calculus. The distinction corresponds to that between the λK and the λI calculus. Unlike the λK calculus, the λI calculus restricts abstractions to: λx.E where x has at least one free occurrence in E. As a consequence, combinator K is not present in the λI calculus nor in the CLI calculus. 
The constants of CLI are: I, B, C and S, which form a basis from which all CLI terms can be composed (modulo equality). Every λI term can be converted into an equal CLI combinator according to rules similar to those presented above for the conversion of λK terms into CLK combinators. See chapter 9 in Barendregt (1984). Reverse conversion The conversion L[ ] from combinatorial terms to lambda terms is trivial: L[I] = λx.x L[K] = λx.λy.x L[C] = λx.λy.λz.(x z y) L[B] = λx.λy.λz.(x (y z)) L[S] = λx.λy.λz.(x z (y z)) L[(E E)] = (L[E] L[E]) Note, however, that this transformation is not the inverse transformation of any of the versions of T[ ] that we have seen. Undecidability of combinatorial calculus A normal form is any combinatory term in which the primitive combinators that occur, if any, are not applied to enough arguments to be simplified. It is undecidable whether a general combinatory term has a normal form; whether two combinatory terms are equivalent, etc. This can be shown in a similar way as for the corresponding problems for lambda terms. Undefinability by predicates The undecidable problems above (equivalence, existence of normal form, etc.) take as input syntactic representations of terms under a suitable encoding (e.g., Church encoding). One may also consider a toy trivial computation model where we "compute" properties of terms by means of combinators applied directly to the terms themselves as arguments, rather than to their syntactic representations. More precisely, let a predicate be a combinator that, when applied, returns either T or F (where T and F represent the conventional Church encodings of true and false, λx.λy.x and λx.λy.y, transformed into combinatory logic; the combinatory versions have and ). A predicate N is nontrivial if there are two arguments A and B such that N A = T and N B = F. A combinator N is complete if NM has a normal form for every argument M. An analogue of Rice's theorem for this toy model then says that every complete predicate is trivial. The proof of this theorem is rather simple. From this undefinability theorem it immediately follows that there is no complete predicate that can discriminate between terms that have a normal form and terms that do not have a normal form. It also follows that there is no complete predicate, say EQUAL, such that: (EQUAL A B) = T if A = B and (EQUAL A B) = F if A ≠ B. If EQUAL would exist, then for all A, λx.(EQUAL x A) would have to be a complete non trivial predicate. However, note that it also immediately follows from this undefinability theorem that many properties of terms that are obviously decidable are not definable by complete predicates either: e.g., there is no predicate that could tell whether the first primitive function letter occurring in a term is a K. This shows that definability by predicates is a not a reasonable model of decidability. Applications Compilation of functional languages David Turner used his combinators to implement the SASL programming language. Kenneth E. Iverson used primitives based on Curry's combinators in his J programming language, a successor to APL. This enabled what Iverson called tacit programming, that is, programming in functional expressions containing no variables, along with powerful tools for working with such programs. It turns out that tacit programming is possible in any APL-like language with user-defined operators. 
Logic The Curry–Howard isomorphism implies a connection between logic and programming: every proof of a theorem of intuitionistic logic corresponds to a reduction of a typed lambda term, and conversely. Moreover, theorems can be identified with function type signatures. Specifically, a typed combinatory logic corresponds to a Hilbert system in proof theory. The K and S combinators correspond to the axioms AK: A → (B → A), AS: (A → (B → C)) → ((A → B) → (A → C)), and function application corresponds to the detachment (modus ponens) rule MP: from A and A → B infer B. The calculus consisting of AK, AS, and MP is complete for the implicational fragment of the intuitionistic logic, which can be seen as follows. Consider the set W of all deductively closed sets of formulas, ordered by inclusion. Then is an intuitionistic Kripke frame, and we define a model in this frame by This definition obeys the conditions on satisfaction of →: on one hand, if , and is such that and , then by modus ponens. On the other hand, if , then by the deduction theorem, thus the deductive closure of is an element such that , , and . Let A be any formula which is not provable in the calculus. Then A does not belong to the deductive closure X of the empty set, thus , and A is not intuitionistically valid.
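As a small worked example of this correspondence (a sketch in standard Hilbert-system notation, not quoted from the article), the combinator identity I = ((S K) K) translates into the following derivation of A → A from AK, AS and modus ponens:

```latex
\begin{aligned}
1.\;& (A \to ((A \to A) \to A)) \to ((A \to (A \to A)) \to (A \to A)) && \text{AS, with } B := A \to A,\; C := A\\
2.\;& A \to ((A \to A) \to A) && \text{AK, with } B := A \to A\\
3.\;& (A \to (A \to A)) \to (A \to A) && \text{MP on 1, 2}\\
4.\;& A \to (A \to A) && \text{AK, with } B := A\\
5.\;& A \to A && \text{MP on 3, 4}
\end{aligned}
```

Each axiom instance mirrors an occurrence of S or K in the combinator term, and each application of MP mirrors one application node.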
Mathematics
Computability theory
null
149861
https://en.wikipedia.org/wiki/Work%20%28physics%29
Work (physics)
In science, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors, where the result is a scalar. When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by: If the force is variable, then work is given by the line integral: where is the tiny change in displacement vector. Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. History The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Mechanics), in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it. Early concepts of work Although work was not formally used until 1826, similar concepts existed before then. Early names for the same concept included moment of activity, quantity of action, latent live force, dynamic effect, efficiency, and even force. In 1637, the French philosopher René Descartes wrote: In 1686, the German philosopher Gottfried Leibniz wrote: In 1759, John Smeaton described a quantity that he called "power" "to signify the exertion of strength, gravitation, impulse, or pressure, as to produce motion." Smeaton continues that this quantity can be calculated if "the weight raised is multiplied by the height to which it can be raised in a given time," making this definition remarkably similar to Coriolis's. Etymology and modern usage The term work (or mechanical work), and the use of the work-energy principle in mechanics, was introduced in the late 1820s independently by French mathematician Gaspard-Gustave Coriolis and French Professor of Applied Mechanics Jean-Victor Poncelet. Both scientists were pursuing a view of mechanics suitable for studying the dynamics and power of machines, for example steam engines lifting buckets of water out of flooded ore mines. 
According to Rene Dugas, French engineer and historian, it is to Solomon of Caux "that we owe the term work in the sense that it is used in mechanics now". The concept of virtual work, and the use of variational methods in mechanics, preceded the introduction of "mechanical work" but was originally called "virtual moment". It was re-named once the terminology of Poncelet and Coriolis was adopted. Units The SI unit of work is the joule (J), named after English physicist James Prescott Joule (1818-1889), which is defined as the work required to exert a force of one newton through a displacement of one metre. The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement, or a measurement of work. Another unit for work is the foot-pound, which comes from the English system of measurement. As the unit name suggests, it is the product of pounds for the unit of force and feet for the unit of displacement. One joule is approximately equal to 0.7376 ft-lbs. Non-SI units of work include the newton-metre, erg, the foot-pound, the foot-poundal, the kilowatt hour, the litre-atmosphere, and the horsepower-hour. Due to work having the same physical dimension as heat, occasionally measurement units typically reserved for heat or energy content, such as therm, BTU and calorie, are used as a measuring unit. Work and energy The work done by a constant force of magnitude on a point that moves a displacement in a straight line in the direction of the force is the product For example, if a force of 10 newtons () acts along a point that travels 2 metres (), then . This is approximately the work done lifting a 1 kg object from ground level to over a person's head against the force of gravity. The work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. Energy shares the same unit of measurement with work (Joules) because the energy from the object doing work is transferred to the other objects it interacts with when work is being done. The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. Thus, if the net work is positive, then the particle's kinetic energy increases by the amount of the work. If the net work done is negative, then the particle's kinetic energy decreases by the amount of work. From Newton's second law, it can be shown that work on a free (no fields), rigid (no internal degrees of freedom) body, is equal to the change in kinetic energy corresponding to the linear velocity and angular velocity of that body, The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative. Therefore, work on an object that is merely displaced in a conservative force field, without change in velocity or rotation, is equal to minus the change of potential energy of the object, These formulas show that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. 
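The relations described in this section can be summarized compactly; the following is an illustrative sketch using conventional symbols (W for work, F for force magnitude, d for displacement, m for mass, v for speed), not a reproduction of the article's own formulas:

```latex
% Constant force along the direction of motion (e.g. F = 10 N over d = 2 m gives W = 20 J):
W = F d
% Work-energy principle for the resultant force acting on a body of mass m whose
% speed changes from v_1 to v_2:
W_{\mathrm{net}} = \Delta E_k = \tfrac{1}{2} m v_2^{2} - \tfrac{1}{2} m v_1^{2}
% Work of a conservative force equals minus the change in potential energy:
W_{\mathrm{cons}} = -\,\Delta U
```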
The work/energy principles discussed here are identical to electric work/energy principles. Constraint forces Constraint forces determine the object's displacement in the system, limiting it within a range. For example, in the case of a slope plus gravity, the object is stuck to the slope and, when attached to a taut string, it cannot move in an outwards direction to make the string any 'tauter'. It eliminates all displacements in that direction, that is, the velocity in the direction of the constraint is limited to 0, so that the constraint forces do not perform work on the system. For a mechanical system, constraint forces eliminate movement in directions that characterize the constraint. Thus the virtual work done by the forces of constraint is zero, a result which is only true if friction forces are excluded. Fixed, frictionless constraint forces do not perform work on the system, as the angle between the motion and the constraint forces is always 90°. Examples of workless constraints are: rigid interconnections between particles, sliding motion on a frictionless surface, and rolling contact without slipping. For example, in a pulley system like the Atwood machine, the internal forces on the rope and at the supporting pulley do no work on the system. Therefore, work need only be computed for the gravitational forces acting on the bodies. Another example is the centripetal force exerted inwards by a string on a ball in uniform circular motion sideways constrains the ball to circular motion restricting its movement away from the centre of the circle. This force does zero work because it is perpendicular to the velocity of the ball. The magnetic force on a charged particle is , where is the charge, is the velocity of the particle, and is the magnetic field. The result of a cross product is always perpendicular to both of the original vectors, so . The dot product of two perpendicular vectors is always zero, so the work , and the magnetic force does not do work. It can change the direction of motion but never change the speed. Mathematical calculation For moving objects, the quantity of work/time (power) is integrated along the trajectory of the point of application of the force. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts) is the scalar product of the force (a vector), and the velocity vector of the point of application. This scalar product of force and velocity is known as instantaneous power. Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus, the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application. Work is the result of a force on a point that follows a curve , with a velocity , at each instant. The small amount of work that occurs over an instant of time is calculated as where the is the power over the instant . The sum of these small amounts of work over the trajectory of the point yields the work, where C is the trajectory from x(t1) to x(t2). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent. If the force is always directed along this line, and the magnitude of the force is , then this integral simplifies to where is displacement along the line. If is constant, in addition to being directed along the line, then the integral simplifies further to where s is the displacement of the point along the line. 
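In conventional notation, the quantities just described take the following form (an illustrative sketch under the same assumptions as the text: a point of application moving along a trajectory x(t) between times t1 and t2):

```latex
% Instantaneous power of a force F acting at a point moving with velocity v:
P(t) = \mathbf{F} \cdot \mathbf{v}
% Work as the time integral of power, equivalently the line integral over the path C:
W = \int_{t_1}^{t_2} \mathbf{F} \cdot \mathbf{v}\, dt = \int_{C} \mathbf{F} \cdot d\mathbf{x}
% Force always directed along a straight line, with magnitude F and arc length s:
W = \int_{C} F \, ds
% Force additionally constant, with s the displacement of the point along the line:
W = F s
```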
This calculation can be generalized for a constant force that is not directed along the line, followed by the particle. In this case the dot product , where is the angle between the force vector and the direction of movement, that is When a force component is perpendicular to the displacement of the object (such as when a body moves in a circular path under a central force), no work is done, since the cosine of 90° is zero. Thus, no work can be performed by gravity on a planet with a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a frictionless ideal centrifuge. Work done by a variable force Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity). This component of force can be described by the scalar quantity called scalar tangential component (, where is the angle between the force and the velocity). And then the most general definition of work can be formulated as follows: Thus, the work done for a variable force can be expressed as a definite integral of force over displacement. If the displacement as a variable of time is given by , then work done by the variable force from to is: Thus, the work done for a variable force can be expressed as a definite integral of power over time. Torque and rotation A force couple results from equal and opposite forces, acting on two different points of a rigid body. The sum (resultant) of these forces may cancel, but their effect on the body is the couple or torque T. The work of the torque is calculated as where the is the power over the instant . The sum of these small amounts of work over the trajectory of the rigid body yields the work, This integral is computed along the trajectory of the rigid body with an angular velocity that varies with time, and is therefore said to be path dependent. If the angular velocity vector maintains a constant direction, then it takes the form, where is the angle of rotation about the constant unit vector . In this case, the work of the torque becomes, where is the trajectory from to . This integral depends on the rotational trajectory , and is therefore path-dependent. If the torque is aligned with the angular velocity vector so that, and both the torque and angular velocity are constant, then the work takes the form, This result can be understood more simply by considering the torque as arising from a force of constant magnitude , being applied perpendicularly to a lever arm at a distance , as shown in the figure. This force will act through the distance along the circular arc , so the work done is Introduce the torque , to obtain as presented above. Notice that only the component of torque in the direction of the angular velocity vector contributes to the work. Work and potential energy The scalar product of a force and the velocity of its point of application defines the power input to a system at an instant of time. 
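Summarizing the torque relations described above in conventional notation (an illustrative sketch; the symbols τ, ω and φ are the usual ones, not taken from the article's own formulas):

```latex
% Work of a torque T on a rigid body rotating with angular velocity omega(t):
W = \int_{t_1}^{t_2} \mathbf{T} \cdot \boldsymbol{\omega}\, dt
% Fixed rotation axis, torque constant and aligned with the angular velocity:
W = T\, \Delta\phi
% Equivalently, a force of magnitude F on a lever arm r acting through an arc s = r\phi:
W = F s = F r \phi = \tau \phi, \qquad \tau = r F
```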
Integration of this power over the trajectory of the point of application, , defines the work input to the system by the force. Path dependence Therefore, the work done by a force on an object that travels along a curve is given by the line integral: where defines the trajectory and is the velocity along this trajectory. In general this integral requires that the path along which the velocity is defined, so the evaluation of work is said to be path dependent. The time derivative of the integral for work yields the instantaneous power, Path independence If the work for an applied force is independent of the path, then the work done by the force, by the gradient theorem, defines a potential function which is evaluated at the start and end of the trajectory of the point of application. This means that there is a potential function , that can be evaluated at the two points and to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is The function is called the potential energy associated with the applied force. The force derived from such a potential function is said to be conservative. Examples of forces that have potential energies are gravity and spring forces. In this case, the gradient of work yields and the force F is said to be "derivable from a potential." Because the potential defines a force at every point in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity of the body, that is Work by gravity In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is and the gravitational force on an object of mass m is . It is convenient to imagine this gravitational force concentrated at the center of mass of the object. If an object with weight is displaced upwards or downwards a vertical distance , the work done on the object is: where Fg is weight (pounds in imperial units, and newtons in SI units), and Δy is the change in height y. Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight. In space The force of gravity exerted by a mass on another mass is given by where is the position vector from to and is the unit vector in the direction of . Let the mass move at the velocity ; then the work of gravity on this mass as it moves from position to is given by Notice that the position and velocity of the mass are given by where and are the radial and tangential unit vectors directed relative to the vector from to , and we use the fact that Use this to simplify the formula for work of gravity to, This calculation uses the fact that The function is the gravitational potential function, also known as gravitational potential energy. The negative sign follows the convention that work is gained from a loss of potential energy. Work by a spring Consider a spring that exerts a horizontal force that is proportional to its deflection in the x direction independent of how a body moves. 
The work of this spring on a body moving along the space with the curve , is calculated using its velocity, , to obtain For convenience, consider contact with the spring occurs at , then the integral of the product of the distance and the x-velocity, , over time is . The work is the product of the distance times the spring force, which is also dependent on distance; hence the result. Work by a gas The work done by a body of gas on its surroundings is: where is pressure, is volume, and and are initial and final volumes. Work–energy principle The principle of work and kinetic energy (also known as the work–energy principle) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle. That is, the work W done by the resultant force on a particle equals the change in the particle's kinetic energy , where and are the speeds of the particle before and after the work is done, and is its mass. The derivation of the work–energy principle begins with Newton's second law of motion and the resultant force on a particle. Computation of the scalar product of the force with the velocity of the particle evaluates the instantaneous power added to the system. (Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force. This also means the constraint forces do not add to the instantaneous power.) The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of acceleration with velocity. The fact that the work–energy principle eliminates the constraint forces underlies Lagrangian mechanics. This section focuses on the work–energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the thermal energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another. Derivation for a particle moving along a straight line In the case the resultant force is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line. The relation between the net force and the acceleration is given by the equation (Newton's second law), and the particle displacement can be expressed by the equation which follows from (see Equations of motion). The work of the net force is calculated as the product of its magnitude and the particle displacement. Substituting the above equations, one obtains: Other derivation: In the general case of rectilinear motion, when the net force is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle: General derivation of the work–energy principle for a particle For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. It is known as the work–energy principle: The identity requires some algebra. From the identity and definition it follows The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case. 
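The spring and gas results stated above can likewise be written out; the following is an illustrative sketch using the usual symbols (k for the spring constant, x for deflection, P and V for pressure and volume), not the article's own formulas:

```latex
% Hooke's-law spring, F_x = -k x: work done by the spring as the body moves from
% the unstretched position to deflection x:
W = \int_{0}^{x} (-k\,x')\, dx' = -\tfrac{1}{2} k x^{2}
% Quasi-static work done by a gas at pressure P expanding from volume V_a to V_b:
W = \int_{V_a}^{V_b} P \, dV
```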
Derivation for a particle in constrained movement In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion. It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work–energy principle. To see this, consider a particle P that follows the trajectory with a force acting on it. Isolate the particle from its environment to expose constraint forces , then Newton's Law takes the form where is the mass of the particle. Vector formulation Note that n dots above a vector indicates its nth time derivative. The scalar product of each side of Newton's law with the velocity vector yields because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point to the point to obtain The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time to time . This can also be written as This integral is computed along the trajectory of the particle and is therefore path dependent. The right side of the first integral of Newton's equations can be simplified using the following identity (see product rule for derivation). Now it is integrated explicitly to obtain the change in kinetic energy, where the kinetic energy of the particle is defined by the scalar quantity, Tangential and normal components It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory , such that where Then, the scalar product of velocity with acceleration in Newton's second law takes the form where the kinetic energy of the particle is defined by the scalar quantity, The result is the work–energy principle for particle dynamics, This derivation can be generalized to arbitrary rigid body systems. Moving in a straight line (skid to a stop) Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to . The constraint forces between the vehicle and the road define , and we have For convenience let the trajectory be along the X-axis, so and the velocity is , then , and , where Fx is the component of F along the X-axis, so Integration of both sides yields If is constant along the trajectory, then the integral of velocity is distance, so As an example consider a car skidding to a stop, where k is the coefficient of friction and W is the weight of the car. Then the force along the trajectory is . The velocity v of the car can be determined from the length of the skid using the work–energy principle, This formula uses the fact that the mass of the vehicle is . Coasting down an inclined surface (gravity racing) Consider the case of a vehicle that starts at rest and coasts down an inclined surface (such as mountain road), the work–energy principle helps compute the minimum distance that the vehicle travels to reach a velocity , of say 60 mph (88 fps). Rolling resistance and air drag will slow the vehicle down so the actual distance will be greater than if these forces are neglected. Let the trajectory of the vehicle following the road be which is a curve in three-dimensional space. 
The force acting on the vehicle that pushes it down the road is the constant force of gravity , while the force of the road on the vehicle is the constraint force . Newton's second law yields, The scalar product of this equation with the velocity, , yields where is the magnitude of . The constraint forces between the vehicle and the road cancel from this equation because , which means they do no work. Integrate both sides to obtain The weight force W is constant along the trajectory and the integral of the vertical velocity is the vertical distance, therefore, Recall that V(t1)=0. Notice that this result does not depend on the shape of the road followed by the vehicle. In order to determine the distance along the road assume the downgrade is 6%, which is a steep road. This means the altitude decreases 6 feet for every 100 feet traveled—for angles this small the sin and tan functions are approximately equal. Therefore, the distance in feet down a 6% grade to reach the velocity is at least This formula uses the fact that the weight of the vehicle is . Work of forces acting on a rigid body The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque. To see this, let the forces F1, F2, ..., Fn act on the points X1, X2, ..., Xn in a rigid body. The trajectories of Xi, i = 1, ..., n are defined by the movement of the rigid body. This movement is given by the set of rotations [A(t)] and the trajectory d(t) of a reference point in the body. Let the coordinates xi i = 1, ..., n define these points in the moving rigid body's reference frame M, so that the trajectories traced in the fixed frame F are given by The velocity of the points along their trajectories are where is the angular velocity vector obtained from the skew symmetric matrix known as the angular velocity matrix. The small amount of work by the forces over the small displacements can be determined by approximating the displacement by so or This formula can be rewritten to obtain where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body.
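The two vehicle examples above reduce to one-line formulas once the constraint forces are dropped. The sketch below evaluates them with illustrative numbers: the friction coefficient and skid length are assumptions, while the 60 mph (88 ft/s) target speed and the 6% grade are taken from the text.

```python
import math

g = 32.2   # gravitational acceleration in ft/s^2

# Skid to a stop: the friction force k*W acting over the skid length s removes the
# kinetic energy (1/2)(W/g) v^2, so v = sqrt(2*k*g*s).  k and s are assumed values.
k = 0.7                      # coefficient of friction (assumed)
s_skid = 100.0               # measured skid length in feet (assumed)
v = math.sqrt(2 * k * g * s_skid)
print(f"speed before the skid: {v:.1f} ft/s = {v * 3600 / 5280:.1f} mph")

# Coasting down a grade from rest: the altitude drop h satisfies (1/2)(W/g) V^2 = W*h,
# so h = V^2/(2*g); on a 6% downgrade the road distance is at least h/0.06.
V = 88.0                     # target speed in ft/s (60 mph, from the text)
grade = 0.06                 # 6% downgrade (from the text)
h = V**2 / (2 * g)
print(f"altitude drop: {h:.0f} ft, minimum road distance: {h / grade:.0f} ft")
```

Rolling resistance and air drag would lengthen the coasting distance, as noted above, so the printed figure is a lower bound.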
Physical sciences
Classical mechanics
null
149984
https://en.wikipedia.org/wiki/Wiener%20process
Wiener process
In mathematics, the Wiener process (or Brownian motion, due to its historical connection with the physical process of the same name) is a real-valued continuous-time stochastic process discovered by Norbert Wiener. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments). It occurs frequently in pure and applied mathematics, economics, quantitative finance, evolutionary biology, and physics. The Wiener process plays an important role in both pure and applied mathematics. In pure mathematics, the Wiener process gave rise to the study of continuous time martingales. It is a key process in terms of which more complicated stochastic processes can be described. As such, it plays a vital role in stochastic calculus, diffusion processes and even potential theory. It is the driving process of Schramm–Loewner evolution. In applied mathematics, the Wiener process is used to represent the integral of a white noise Gaussian process, and so is useful as a model of noise in electronics engineering (see Brownian noise), instrument errors in filtering theory and disturbances in control theory. The Wiener process has applications throughout the mathematical sciences. In physics it is used to study Brownian motion and other types of diffusion via the Fokker–Planck and Langevin equations. It also forms the basis for the rigorous path integral formulation of quantum mechanics (by the Feynman–Kac formula, a solution to the Schrödinger equation can be represented in terms of the Wiener process) and the study of eternal inflation in physical cosmology. It is also prominent in the mathematical theory of finance, in particular the Black–Scholes option pricing model. Characterisations of the Wiener process The Wiener process is characterised by the following properties: almost surely has independent increments: for every the future increments are independent of the past values , has Gaussian increments: is normally distributed with mean and variance , has almost surely continuous paths: is almost surely continuous in . That the process has independent increments means that if then and are independent random variables, and the similar condition holds for n increments. An alternative characterisation of the Wiener process is the so-called Lévy characterisation that says that the Wiener process is an almost surely continuous martingale with and quadratic variation (which means that is also a martingale). A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent N(0, 1) random variables. This representation can be obtained using the Karhunen–Loève theorem. Another characterisation of a Wiener process is the definite integral (from time zero to time t) of a zero mean, unit variance, delta correlated ("white") Gaussian process. The Wiener process can be constructed as the scaling limit of a random walk, or other discrete-time stochastic processes with stationary independent increments. This is known as Donsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixed neighborhood of the origin infinitely often) whereas it is not recurrent in dimensions three and higher (where a multidimensional Wiener process is a process such that its coordinates are independent Wiener processes). Unlike the random walk, it is scale invariant, meaning that is a Wiener process for any nonzero constant . 
The Wiener measure is the probability law on the space of continuous functions , with , induced by the Wiener process. An integral based on Wiener measure may be called a Wiener integral. Wiener process as a limit of random walk Let be i.i.d. random variables with mean 0 and variance 1. For each n, define a continuous time stochastic process This is a random step function. Increments of are independent because the are independent. For large n, is close to by the central limit theorem. Donsker's theorem asserts that as , approaches a Wiener process, which explains the ubiquity of Brownian motion. Properties of a one-dimensional Wiener process Basic properties The unconditional probability density function follows a normal distribution with mean = 0 and variance = t, at a fixed time : The expectation is zero: The variance, using the computational formula, is : These results follow immediately from the definition that increments have a normal distribution, centered at zero. Thus Covariance and correlation The covariance and correlation (where ): These results follow from the definition that non-overlapping increments are independent, of which only the property that they are uncorrelated is used. Suppose that . Substituting we arrive at: Since and are independent, Thus A corollary useful for simulation is that we can write, for : where is an independent standard normal variable. Wiener representation Wiener (1923) also gave a representation of a Brownian path in terms of a random Fourier series. If are independent Gaussian variables with mean zero and variance one, then and represent a Brownian motion on . The scaled process is a Brownian motion on (cf. Karhunen–Loève theorem). Running maximum The joint distribution of the running maximum and is To get the unconditional distribution of , integrate over : the probability density function of a Half-normal distribution. The expectation is If at time the Wiener process has a known value , it is possible to calculate the conditional probability distribution of the maximum in interval (cf. Probability distribution of extreme points of a Wiener stochastic process). The cumulative probability distribution function of the maximum value, conditioned by the known value , is: Self-similarity Brownian scaling For every the process is another Wiener process. Time reversal The process for is distributed like for . Time inversion The process is another Wiener process. Projective invariance Consider a Wiener process , , conditioned so that (which holds almost surely) and as usual . Then the following are all Wiener processes : Thus the Wiener process is invariant under the projective group PSL(2,R), being invariant under the generators of the group. The action of an element is which defines a group action, in the sense that Conformal invariance in two dimensions Let be a two-dimensional Wiener process, regarded as a complex-valued process with . Let be an open set containing 0, and be associated Markov time: If is a holomorphic function which is not constant, such that , then is a time-changed Wiener process in . More precisely, the process is Wiener in with the Markov time where A class of Brownian martingales If a polynomial satisfies the partial differential equation then the stochastic process is a martingale. Example: is a martingale, which shows that the quadratic variation of W on is equal to . It follows that the expected time of first exit of W from (−c, c) is equal to . 
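Before the polynomial-martingale example above is generalized immediately below, the following sketch numerically checks three facts from this section: the covariance E[W_s W_t] = min(s, t), the running-maximum law (equivalently, the reflection identity P(max ≤ a relation) behind the half-normal distribution), and the quadratic variation of a path over [0, t] being t. Seeds and resolutions are arbitrary, and the discrete-time maximum slightly underestimates the continuous one.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
t, n_steps, n_paths = 1.0, 1_000, 20_000
dt = t / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# Covariance: E[W_s W_t] = min(s, t); here s = 0.25 and t = 1.0.
s_idx = n_steps // 4
print(np.mean(W[:, s_idx] * W[:, -1]))             # close to 0.25

# Running maximum: P(max over [0,t] of W >= a) = 2 P(W_t >= a) = erfc(a / sqrt(2 t)).
a = 1.0
print((W.max(axis=1) >= a).mean(), math.erfc(a / math.sqrt(2 * t)))

# Quadratic variation: the sum of squared increments over [0, t] concentrates near t.
dW = np.diff(W[0], prepend=0.0)
print(np.sum(dW**2))                               # close to t = 1
```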
More generally, for every polynomial the following stochastic process is a martingale: where a is the polynomial Example: the process is a martingale, which shows that the quadratic variation of the martingale on [0, t] is equal to About functions more general than polynomials, see local martingales. Some properties of sample paths The set of all functions w with these properties is of full Wiener measure. That is, a path (sample function) of the Wiener process has all these properties almost surely. Qualitative properties For every ε > 0, the function w takes both (strictly) positive and (strictly) negative values on (0, ε). The function w is continuous everywhere but differentiable nowhere (like the Weierstrass function). For any , is almost surely not -Hölder continuous, and almost surely -Hölder continuous. Points of local maximum of the function w are a dense countable set; the maximum values are pairwise different; each local maximum is sharp in the following sense: if w has a local maximum at then The same holds for local minima. The function w has no points of local increase, that is, no t > 0 satisfies the following for some ε in (0, t): first, w(s) ≤ w(t) for all s in (t − ε, t), and second, w(s) ≥ w(t) for all s in (t, t + ε). (Local increase is a weaker condition than that w is increasing on (t − ε, t + ε).) The same holds for local decrease. The function w is of unbounded variation on every interval. The quadratic variation of w over [0,t] is t. Zeros of the function w are a nowhere dense perfect set of Lebesgue measure 0 and Hausdorff dimension 1/2 (therefore, uncountable). Quantitative properties Law of the iterated logarithm Modulus of continuity Local modulus of continuity: Global modulus of continuity (Lévy): Dimension doubling theorem The dimension doubling theorems say that the Hausdorff dimension of a set under a Brownian motion doubles almost surely. Local time The image of the Lebesgue measure on [0, t] under the map w (the pushforward measure) has a density . Thus, for a wide class of functions f (namely: all continuous functions; all locally integrable functions; all non-negative measurable functions). The density Lt is (more exactly, can and will be chosen to be) continuous. The number Lt(x) is called the local time at x of w on [0, t]. It is strictly positive for all x of the interval (a, b) where a and b are the least and the greatest value of w on [0, t], respectively. (For x outside this interval the local time evidently vanishes.) Treated as a function of two variables x and t, the local time is still continuous. Treated as a function of t (while x is fixed), the local time is a singular function corresponding to a nonatomic measure on the set of zeros of w. These continuity properties are fairly non-trivial. Consider that the local time can also be defined (as the density of the pushforward measure) for a smooth function. Then, however, the density is discontinuous, unless the given function is monotone. In other words, there is a conflict between good behavior of a function and good behavior of its local time. In this sense, the continuity of the local time of the Wiener process is another manifestation of non-smoothness of the trajectory. Information rate The information rate of the Wiener process with respect to the squared error distance, i.e. its quadratic rate-distortion function, is given by Therefore, it is impossible to encode using a binary code of less than bits and recover it with expected mean squared error less than . 
On the other hand, for any , there exists large enough and a binary code of no more than distinct elements such that the expected mean squared error in recovering from this code is at most . In many cases, it is impossible to encode the Wiener process without sampling it first. When the Wiener process is sampled at intervals before applying a binary code to represent these samples, the optimal trade-off between code rate and expected mean square error (in estimating the continuous-time Wiener process) follows the parametric representation where and . In particular, is the mean squared error associated only with the sampling operation (without encoding). Related processes The stochastic process defined by is called a Wiener process with drift μ and infinitesimal variance σ2. These processes exhaust continuous Lévy processes, which means that they are the only continuous Lévy processes, as a consequence of the Lévy–Khintchine representation. Two random processes on the time interval [0, 1] appear, roughly speaking, when conditioning the Wiener process to vanish on both ends of [0,1]. With no further conditioning, the process takes both positive and negative values on [0, 1] and is called Brownian bridge. Conditioned also to stay positive on (0, 1), the process is called Brownian excursion. In both cases a rigorous treatment involves a limiting procedure, since the formula P(A|B) = P(A ∩ B)/P(B) does not apply when P(B) = 0. A geometric Brownian motion can be written It is a stochastic process which is used to model processes that can never take on negative values, such as the value of stocks. The stochastic process is distributed like the Ornstein–Uhlenbeck process with parameters , , and . The time of hitting a single point x > 0 by the Wiener process is a random variable with the Lévy distribution. The family of these random variables (indexed by all positive numbers x) is a left-continuous modification of a Lévy process. The right-continuous modification of this process is given by times of first exit from closed intervals [0, x]. The local time of a Brownian motion describes the time that the process spends at the point x. Formally where δ is the Dirac delta function. The behaviour of the local time is characterised by Ray–Knight theorems. Brownian martingales Let A be an event related to the Wiener process (more formally: a set, measurable with respect to the Wiener measure, in the space of functions), and Xt the conditional probability of A given the Wiener process on the time interval [0, t] (more formally: the Wiener measure of the set of trajectories whose concatenation with the given partial trajectory on [0, t] belongs to A). Then the process Xt is a continuous martingale. Its martingale property follows immediately from the definitions, but its continuity is a very special fact – a special case of a general theorem stating that all Brownian martingales are continuous. A Brownian martingale is, by definition, a martingale adapted to the Brownian filtration; and the Brownian filtration is, by definition, the filtration generated by the Wiener process. Integrated Brownian motion The time-integral of the Wiener process is called integrated Brownian motion or integrated Wiener process. It arises in many applications and can be shown to have the distribution N(0, t3/3), calculated using the fact that the covariance of the Wiener process is . For the general case of the process defined by Then, for , In fact, is always a zero mean normal random variable. 
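Two of the related processes above can be built from a single set of Wiener increments. The sketch below produces a Wiener process with drift, X_t = mu t + sigma W_t, and a geometric Brownian motion in one common exponential parameterization, which stays strictly positive; mu, sigma and S0 are assumed example values. Simulation of the integrated process is taken up again just after this sketch.

```python
import numpy as np

# One set of Wiener increments drives both processes.
rng = np.random.default_rng(2)
T, n_steps = 1.0, 1_000
dt = T / n_steps
t = np.linspace(dt, T, n_steps)
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n_steps))

mu, sigma, S0 = 0.1, 0.3, 100.0
X = mu * t + sigma * W                                     # Wiener process with drift
S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)     # geometric Brownian motion, S > 0
print(X[-1], S[-1])
```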
Because each such integral is a zero-mean normal random variable, its value at a later time can be simulated from its value at an earlier time by adding an independent zero-mean normal increment of the appropriate variance, scaled by a standard normal variable Z (compare the simulation corollary for the Wiener process itself given earlier). All these results can be seen as direct consequences of Itô isometry. The n-times-integrated Wiener process is a zero-mean normal variable with variance t^(2n+1) / ((n!)^2 (2n+1)); this follows from the Cauchy formula for repeated integration. Time change Every continuous martingale (starting at the origin) is a time-changed Wiener process. Example: 2Wt = V(4t), where V is another Wiener process (different from W but distributed like W). In general, if M is a continuous martingale, then M(t) − M(0) = V(A(t)), where A(t) is the quadratic variation of M on [0, t] and V is a Wiener process (the Dambis–Dubins–Schwarz theorem).
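Finally, the N(0, t³/3) law of the time-integral of the Wiener process quoted above can be checked by Monte Carlo; the Riemann-sum approximation below uses an arbitrary resolution and path count.

```python
import numpy as np

# The time integral of W over [0, t] should be normal with mean 0 and variance t^3/3,
# about 0.333 for t = 1.
rng = np.random.default_rng(3)
t, n_steps, n_paths = 1.0, 1_000, 20_000
dt = t / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
integral = W.sum(axis=1) * dt              # Riemann-sum approximation of the time integral
print(integral.mean(), integral.var(), t**3 / 3)
```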
Mathematics
Probability
null
149993
https://en.wikipedia.org/wiki/Lathe
Lathe
A lathe () is a machine tool that rotates a workpiece about an axis of rotation to perform various operations such as cutting, sanding, knurling, drilling, deformation, facing, threading and turning, with tools that are applied to the workpiece to create an object with symmetry about that axis. Lathes are used in woodturning, metalworking, metal spinning, thermal spraying, reclamation, and glass-working. Lathes can be used to shape pottery, the best-known design being the Potter's wheel. Most suitably equipped metalworking lathes can also be used to produce most solids of revolution, plane surfaces and screw threads or helices. Ornamental lathes can produce three-dimensional solids of incredible complexity. The workpiece is usually held in place by either one or two centers, at least one of which can typically be moved horizontally to accommodate varying workpiece lengths. Other work-holding methods include clamping the work about the axis of rotation using a chuck or collet, or to a faceplate, using clamps or dog clutch. Of course, lathes can also complete milling operations by installing special lathe milling fixtures. Examples of objects that can be produced on a lathe include screws, candlesticks, gun barrels, cue sticks, table legs, bowls, baseball bats, pens, musical instruments (especially woodwind instruments), and crankshafts. History The lathe is an ancient tool. The earliest evidence of a lathe dates back to Ancient Egypt around 1300 BC. There is also tenuous evidence for its existence at a Mycenaean Greek site, dating back as far as the 13th or 14th century BC. Clear evidence of turned artifacts have been found from the 6th century BC: fragments of a wooden bowl in an Etruscan tomb in Northern Italy as well as two flat wooden dishes with decorative turned rims from modern Turkey. During the Warring States period in China, , the ancient Chinese used rotary lathes to sharpen tools and weapons on an industrial scale. The first known painting showing a lathe dates to the 3rd century BC in ancient Egypt. Pliny later describes the use of a lathe for turning soft stone in his Natural History (Book XXX, Chapter 44). Precision metal-cutting lathes were developed during the lead up to the Industrial Revolution and were critical to the manufacture of mechanical inventions of that period. Some of the earliest examples include a version with a mechanical cutting tool-supporting carriage and a set of gears by Russian engineer Andrey Nartov in 1718 and another with a slide-rest shown in a 1717 edition of the French Encyclopédie. The slide-rest was a particularly important development because it constrains the motion of the cutting tool to generate accurate cylindrical or conical surfaces, unlike earlier lathes that involved freehand manipulation of the tool. By the 1770s, precision lathes became practical and well-known. A slide-rest is clearly shown in a 1772 edition of the Encyclopédie and during that same year a horse-powered cannon boring lathe was installed in the Royal Arsenal in Woolwich, England by Jan Verbruggen. Cannon bored by Verbruggen's lathe were stronger and more accurate than their predecessors and saw service in American Revolutionary War. Henry Maudslay, the inventor of many subsequent improvements to the lathe worked as an apprentice in Verbruggen's workshop in Woolwich. During the Industrial Revolution, mechanized power generated by water wheels or steam engines was transmitted to the lathe via line shafting, allowing faster and easier work. 
Metalworking lathes evolved into heavier machines with thicker, more rigid parts. Between the late 19th and mid-20th centuries, individual electric motors at each lathe replaced line shafting as the power source. Beginning in the 1950s, servomechanisms were applied to the control of lathes and other machine tools via numerical control, which often was coupled with computers to yield computerized numerical control (CNC). Today manually controlled and CNC lathes coexist in the manufacturing industries. Design The most common design is known as the universal lathe or parallel lathe. Other general designs include the frontal and vertical lathe, and others. Components A lathe may or may not have legs, which sit on the floor and elevate the lathe bed to a working height. A lathe may be small and sit on a workbench or table, not requiring a stand. Almost all lathes have a bed, which is almost always a horizontal beam, although CNC lathes commonly have an inclined or vertical beam for a bed to ensure that swarf, or chips, falls free of the bed. Woodturning lathes specialized for turning large bowls often have no bed or tail stock, merely a free-standing headstock and a cantilevered tool-rest. At one end of the bed (almost always the left, as the operator faces the lathe) is a headstock. The headstock contains high-precision spinning bearings. Rotating within the bearings is a horizontal axle, with an axis parallel to the bed, called the spindle. Spindles are often hollow and have an interior Morse taper on the spindle nose (i.e., facing to the right / towards the bed) by which work-holding accessories may be mounted to the spindle. Spindles may also have arrangements for work-holding on the left-hand end of the spindle with other tooling arrangements for particular tasks. (i.e., facing away from the main bed) end, or may have a hand-wheel or other accessory mechanism on their outboard end. Spindles are powered and impart motion to the workpiece. The spindle is driven either by foot power from a treadle and flywheel or by a belt or gear drive from a power source such as electric motor or overhead line shafts. In most modern lathes this power source is an integral electric motor, often either in the headstock, to the left of the headstock, or beneath the headstock, concealed in the stand. In addition to the spindle and its bearings, the headstock often contains parts to convert the motor speed into various spindle speeds. Various types of speed-changing mechanism achieve this, from a cone pulley or step pulley, to a cone pulley with back gear (which is essentially a low range, similar in net effect to the two-speed rear of a truck), to an entire gear train similar to that of a manual-shift automotive transmission. Some motors have electronic rheostat-type speed controls, which obviates cone pulleys or gears. The counterpoint to the headstock is the tailstock, sometimes referred to as the loose head, as it can be positioned at any convenient point on the bed by sliding it to the required area. The tail-stock contains a barrel, which does not rotate, but can slide in and out parallel to the axis of the bed and directly in line with the headstock spindle. The barrel is hollow and usually contains a taper to facilitate the gripping of various types of tooling. Its most common uses are to hold a hardened steel center, which is used to support long thin shafts while turning, or to hold drill bits for drilling axial holes in the work piece. Many other uses are possible. 
Metalworking lathes have a carriage (comprising a saddle and apron) topped with a cross-slide, which is a flat piece that sits crosswise on the bed and can be cranked at right angles to the bed. Sitting atop the cross slide is usually another slide called a compound rest, which provides two additional axes of motion, rotary and linear. Atop that sits a toolpost, which holds a cutting tool, which removes material from the workpiece. There may or may not be a leadscrew, which moves the cross-slide along the bed. Woodturning and metal spinning lathes do not have cross-slides, but rather have banjos, which are flat pieces that sit crosswise on the bed. The position of a banjo can be adjusted by hand; no gearing is involved. Ascending vertically from the banjo is a tool-post, at the top of which is a horizontal tool-rest. In woodturning, hand tools are braced against the tool-rest and levered into the workpiece. In metal spinning, the further pin ascends vertically from the tool-rest and serves as a fulcrum against which tools may be levered into the workpiece. Accessories Unless a workpiece has a taper machined onto it which perfectly matches the internal taper in the spindle, or has threads which perfectly match the external threads on the spindle (two conditions which rarely exist), an accessory must be used to mount a workpiece to the spindle. A workpiece may be bolted or screwed to a faceplate, a large, flat disk that mounts to the spindle. In the alternative, faceplate dogs may be used to secure the work to the faceplate. A workpiece may be mounted on a mandrel, or circular work clamped in a three- or four-jaw chuck. For irregular shaped workpieces it is usual to use a four jaw (independent moving jaws) chuck. These holding devices mount directly to the lathe headstock spindle. In precision work, and in some classes of repetition work, cylindrical workpieces are usually held in a collet inserted into the spindle and secured either by a draw-bar, or by a collet closing cap on the spindle. Suitable collets may also be used to mount square or hexagonal workpieces. In precision toolmaking work such collets are usually of the draw-in variety, where, as the collet is tightened, the workpiece moves slightly back into the headstock, whereas for most repetition work the dead length variety is preferred, as this ensures that the position of the workpiece does not move as the collet is tightened. A soft workpiece (e.g., wood) may be pinched between centers by using a spur drive at the headstock, which bites into the wood and imparts torque to it. A soft dead center is used in the headstock spindle as the work rotates with the centre. Because the centre is soft it can be trued in place before use. The included angle is 60°. Traditionally, a hard dead center is used together with suitable lubricant in the tailstock to support the workpiece. In modern practice the dead center is frequently replaced by a running or live center, as it turns freely with the workpiece—usually on ball bearings—reducing the frictional heat, especially important at high speeds. When clear facing a long length of material it must be supported at both ends. This can be achieved by the use of a traveling or fixed steady. If a steady is not available, the end face being worked on may be supported by a dead (stationary) half center. A half center has a flat surface machined across a broad section of half of its diameter at the pointed end. A small section of the tip of the dead center is retained to ensure concentricity. 
Lubrication must be applied at this point of contact and tail stock pressure reduced. A lathe carrier or lathe dog may also be employed when turning between two centers. In woodturning, one variation of a running center is a cup center, which is a cone of metal surrounded by an annular ring of metal that decreases the chances of the workpiece splitting. A circular metal plate with even spaced holes around the periphery, mounted to the spindle, is called an "index plate". It can be used to rotate the spindle to a precise angle, then lock it in place, facilitating repeated auxiliary operations done to the workpiece. Other accessories, including items such as taper turning attachments, knurling tools, vertical slides, fixed and traveling steadies, etc., increase the versatility of a lathe and the range of work it may perform. Modes of use When a workpiece is fixed between the headstock and the tail-stock, it is said to be "between centers". When a workpiece is supported at both ends, it is more stable, and more force may be applied to the workpiece, via tools, at a right angle to the axis of rotation, without fear that the workpiece may break loose. When a workpiece is fixed only to the spindle at the headstock end, the work is said to be "face work". When a workpiece is supported in this manner, less force may be applied to the workpiece, via tools, at a right angle to the axis of rotation, lest the workpiece rip free. Thus, most work must be done axially, towards the headstock, or at right angles, but gently. When a workpiece is mounted with a certain axis of rotation, worked, then remounted with a new axis of rotation, this is referred to as "eccentric turning" or "multi-axis turning". The result is that various cross sections of the workpiece are rotationally symmetric, but the workpiece as a whole is not rotationally symmetric. This technique is used for camshafts, various types of chair legs. Sizes Lathes are usually 'sized' by the capacity of the work that they may hold. Usually large work is held at both ends either using a chuck or other drive in the headstock and a centre in the tailstock. To maximise size, turning between centres allows the work to be as close to the headstock as possible and is used to determine the longest piece the lathe will turn: when the base of the tailstock is aligned with the end of the bed. The distance between centres gives the maximum length of work the lathe will officially hold. It is possible to get slightly longer items in if the tailstock overhangs the end of the bed but this is an ill-advised practice. Purchasing an extension or larger bed would be a wise alternative. The other dimension of the workpiece is how far off-centre it can be. This is known as the 'swing' ("The distance from the head center of a lathe to the bed or ways, or to the rest. The swing determines the diametric size of the object which is capable of being turned in the lathe; anything larger would interfere with the bed. This limit is called the swing of the bed. The swing of the rest is the size which will rotate above the rest, which lies upon the bed.") from the notion that the work 'swings' from the centre upon which it is mounted. This makes more sense with odd-shaped work but as the lathe is most often used with cylindrical work, it is useful to know the maximum diameter of work the lathe will hold. This is simply the value of the swing (or centre height above the bed) multiplied by two. For some reason, in the U.S. swing is assumed to be diameter but this is incorrect. 
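Because the same word "swing" names a radius-like figure in some markets and a diameter in others, a capacity listing is ambiguous unless the convention is stated. The short sketch below simply records both readings alongside the between-centres length; the figures are illustrative assumptions, not taken from the text.

```python
# Illustrative lathe-capacity bookkeeping; the figures are assumptions, not from the text.
centre_height_mm = 125.0                 # centre height above the bed
between_centres_mm = 600.0               # longest workpiece that fits between centres
max_diameter_mm = 2 * centre_height_mm   # largest diameter that clears the bed
print(f"centre height: {centre_height_mm:.0f} mm "
      f"(US-style 'swing' quoted as the diameter: {max_diameter_mm:.0f} mm)")
print(f"maximum workpiece length between centres: {between_centres_mm:.0f} mm")
```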
To be clear on size, it is better, therefore, to describe the dimension as 'centre height above the bed'. As parts of the lathe reduce capacity, measurements such as 'swing over cross slide' or other named parts can be found. Varieties The smallest lathes are "jewelers lathes" or "watchmaker lathes", which, though often small enough to be held in one hand are normally fastened to a bench. There are rare and even smaller mini lathes made for precision cutting. The workpieces machined on a jeweler's lathe are often metal, but other softer materials can also be machined. Jeweler's lathes can be used with hand-held "graver" tools or with a "compound rest" that attach to the lathe bed and allows the tool to be clamped in place and moved by a screw or lever feed. Graver tools are generally supported by a T-rest, not fixed to a cross slide or compound rest. The work is usually held in a collet, but high-precision 3 and 6-jaw chucks are also commonly employed. Common spindle bore sizes are 6 mm, 8 mm and 10 mm. The term WW refers to the Webster/Whitcomb collet and lathe, invented by the American Watch Tool Company of Waltham, Massachusetts. Most lathes commonly referred to as watchmakers lathes are of this design. In 1909, the American Watch Tool company introduced the Magnus type collet (a 10-mm body size collet) using a lathe of the same basic design, the Webster/Whitcomb Magnus. (F.W.Derbyshire, Inc. retains the trade names Webster/Whitcomb and Magnus and still produces these collets.) Two bed patterns are common: the WW (Webster Whitcomb) bed, a truncated triangular prism (found only on 8 and 10 mm watchmakers' lathes); and the continental D-style bar bed (used on both 6 mm and 8 mm lathes by firms such as Lorch and Star). Other bed designs have been used, such as a triangular prism on some Boley 6.5 mm lathes, and a V-edged bed on IME's 8 mm lathes. Smaller metalworking lathes that are larger than jewelers' lathes and can sit on a bench or table, but offer such features as tool holders and a screw-cutting gear train are called hobby lathes, and larger versions, "bench lathes" - this term also commonly applied to a special type of high-precision lathe used by toolmakers for one-off jobs. Even larger lathes offering similar features for producing or modifying individual parts are called "engine lathes". Lathes of these types do not have additional integral features for repetitive production, but rather are used for individual part production or modification as the primary role. Lathes of this size that are designed for mass manufacture, but not offering the versatile screw-cutting capabilities of the engine or bench lathe, are referred to as "second operation" lathes. Lathes with a very large spindle bore and a chuck on both ends of the spindle are called "oil field lathes". Fully automatic mechanical lathes, employing cams and gear trains for controlled movement, are called screw machines. Lathes that are controlled by a computer are CNC lathes. Lathes with the spindle mounted in a vertical configuration, instead of horizontal configuration, are called vertical lathes or vertical boring machines. They are used where very large diameters must be turned, and the workpiece (comparatively) is not very long. A lathe with a tool post that can rotate around a vertical axis, so as to present different tools towards the headstock (and the workpiece) are turret lathes. 
A lathe equipped with indexing plates, profile cutters, spiral or helical guides, etc., so as to enable ornamental turning is an ornamental lathe. Various combinations are possible: for example, a vertical lathe can have CNC capabilities as well (such as a CNC VTL). Lathes can be combined with other machine tools, such as a drill press or vertical milling machine. These are usually referred to as combination lathes. Uses Woodworking Woodworking lathes are the oldest variety, apart from pottery wheels. All other varieties are descended from these simple lathes. An adjustable horizontal metal rail, the tool-rest, between the material and the operator accommodates the positioning of shaping tools, which are usually hand-held. After shaping, it is common practice to press and slide sandpaper against the still-spinning object to smooth the surface made with the metal shaping tools. The tool-rest is usually removed during sanding, as it may be unsafe to have the operators hands between it and the spinning wood. Many woodworking lathes can also be used for making bowls and plates. The bowl or plate needs only to be held at the bottom by one side of the lathe. It is usually attached to a metal face plate attached to the spindle. With many lathes, this operation happens on the left side of the headstock, where are no rails and therefore more clearance. In this configuration, the piece can be shaped inside and out. A specific curved tool-rest may be used to support tools while shaping the inside. Further detail can be found on the woodturning page. Most woodworking lathes are designed to be operated at a speed of between 200 and 1,400 revolutions per minute, with slightly over 1,000 rpm considered optimal for most such work, and with larger workpieces requiring lower speeds. Duplicating One type of specialized lathe is duplicating or copying lathe. Some types of them are known as Blanchard lathe, after Thomas Blanchard. This type of lathe was able to create shapes identical to a standard pattern and it revolutionized the process of gun stock making in the 1820s when it was invented. The Hermitage Museum, Russia displays the copying lathe for ornamental turning: making medals and guilloche patterns, designed by Andrey Nartov, 1721. Patternmaking Used to make a pattern for foundries, often from wood, but also plastics. A patternmaker's lathe looks like a heavy wood lathe, often with a turret and either a leadscrew or a rack and pinion to manually position the turret. The turret is used to accurately cut straight lines. They often have a provision to turn very large parts on the other end of the headstock, using a free-standing toolrest. Another way of turning large parts is a sliding bed, which can slide away from the headstock and thus open up a gap in front of the headstock for large parts. Metalworking In a metalworking lathe, metal is removed from the workpiece using a hardened cutting tool, which is usually fixed to a solid moveable mounting, either a tool-post or a turret, which is then moved against the workpiece using handwheels or computer-controlled motors. These cutting tools come in a wide range of sizes and shapes, depending upon their application. Some common styles are diamond, round, square and triangular. The tool-post is operated by lead-screws that can accurately position the tool in a variety of planes. 
The tool-post may be driven manually or automatically to produce the roughing and finishing cuts required to turn the workpiece to the desired shape and dimensions, or for cutting threads, worm gears, etc. Cutting fluid may also be pumped to the cutting site to provide cooling, lubrication and clearing of swarf from the workpiece. Some lathes may be operated under control of a computer for mass production of parts (see "Computer numerical control"). Manually controlled metalworking lathes are commonly provided with a variable-ratio gear-train to drive the main lead-screw. This enables different thread pitches to be cut. On some older lathes or more affordable new lathes, the gear trains are changed by swapping gears with various numbers of teeth onto or off of the shafts, while more modern or expensive manually controlled lathes have a quick-change box to provide commonly used ratios by the operation of a lever. CNC lathes use computers and servomechanisms to regulate the rates of movement. On manually controlled lathes, the thread pitches that can be cut are, in some ways, determined by the pitch of the lead-screw: A lathe with a metric lead-screw will readily cut metric threads (including BA), while one with an imperial lead-screw will readily cut imperial-unit-based threads such as BSW or UTS (UNF, UNC). This limitation is not insurmountable, because a 127-tooth gear, called a transposing gear, is used to translate between metric and inch thread pitches. However, this is optional equipment that many lathe owners do not own. It is also a larger change-wheel than the others, and on some lathes may be larger than the change-wheel mounting banjo is capable of mounting. The workpiece may be supported between a pair of points called centres, or it may be bolted to a faceplate or held in a chuck. A chuck has movable jaws that can grip the workpiece securely. There are some effects on material properties when using a metalworking lathe. There are few chemical or physical effects, but there are many mechanical effects, which include residual stress, micro-cracks, work-hardening, and tempering in hardened materials. Cue lathes Cue lathes function similarly to turning and spinning lathes, allowing a perfectly radially-symmetrical cut for billiard cues. They can also be used to refinish cues that have been worn over the years. Glass-working Glass-working lathes are similar in design to other lathes, but differ markedly in how the workpiece is modified. Glass-working lathes slowly rotate a hollow glass vessel over a fixed- or variable-temperature flame. The source of the flame may be either hand-held or mounted to a banjo/cross-slide that can be moved along the lathe bed. The flame serves to soften the glass being worked, so that the glass in a specific area of the workpiece becomes ductile and subject to forming either by inflation ("glassblowing") or by deformation with a heat-resistant tool. Such lathes usually have two head-stocks with chucks holding the work, arranged so that they both rotate together in unison. Air can be introduced through the headstock chuck spindle for glassblowing. The tools to deform the glass and tubes to blow (inflate) the glass are usually handheld. In diamond turning, a computer-controlled lathe with a diamond-tipped tool is used to make precision optical surfaces in glass or other optical materials. Unlike conventional optical grinding, complex aspheric surfaces can be machined easily. 
Instead of the dovetailed ways used on the tool slide of a metal-turning lathe, the ways typically float on air bearings, and the position of the tool is measured by optical interferometry to achieve the necessary standard of precision for optical work. The finished work piece usually requires a small amount of subsequent polishing by conventional techniques to achieve a finished surface suitably smooth for use in a lens, but the rough grinding time is significantly reduced for complex lenses. Metal-spinning In metal spinning, a disk of sheet metal is held perpendicularly to the main axis of the lathe, and tools with polished tips (spoons) or roller tips are hand-held, but levered by hand against fixed posts, to develop pressure that deforms the spinning sheet of metal. Metal-spinning lathes are almost as simple as wood-turning lathes. Typically, metal spinning requires a mandrel, usually made from wood, which serves as the template onto which the workpiece is formed (asymmetric shapes can be made, but it is a very advanced technique). For example, to make a sheet metal bowl, a solid block of wood in the shape of the bowl is required; similarly, to make a vase, a solid template of the vase is required. Given the advent of high-speed, high-pressure, industrial die forming, metal spinning is less common now than it once was, but still a valuable technique for producing one-off prototypes or small batches, where die forming would be uneconomical. Ornamental turning The ornamental turning lathe was developed around the same time as the industrial screw-cutting lathe in the nineteenth century. It was used not for making practical objects, but for decorative work: ornamental turning. By using accessories such as the horizontal and vertical cutting frames, eccentric chuck and elliptical chuck, solids of extraordinary complexity may be produced by various generative procedures. A special-purpose lathe, the Rose engine lathe, is also used for ornamental turning, in particular for engine turning, typically in precious metals, for example to decorate pocket-watch cases. As well as a wide range of accessories, these lathes usually have complex dividing arrangements to allow the exact rotation of the mandrel. Cutting is usually carried out by rotating cutters, rather than directly by the rotation of the work itself. Because of the difficulty of polishing such work, the materials turned, such as wood or ivory, are usually quite soft, and the cutter has to be exceptionally sharp. The finest ornamental lathes are generally considered to be those made by Holtzapffel around the turn of the 19th century. Reducing Many types of lathes can be equipped with accessory components to allow them to reproduce an item: the original item is mounted on one spindle, the blank is mounted on another, and as both turn in synchronized manner, one end of an arm "reads" the original and the other end of the arm "carves" the duplicate. A reduction lathe is a specialized lathe that is designed with this feature and incorporates a mechanism similar to a pantograph, so that when the "reading" end of the arm reads a detail that measures one inch (for example), the cutting end of the arm creates an analogous detail that is (for example) one quarter of an inch (a 4:1 reduction, although given appropriate machinery and appropriate settings, any reduction ratio is possible). 
Reducing lathes are used in coin-making, where a plaster original (or an epoxy master made from the plaster original, or a copper-shelled master made from the plaster original, etc.) is duplicated and reduced on the reducing lathe, generating a master die. Rotary lathes A lathe in which wood logs are turned against a very sharp blade and peeled off in one continuous or semi-continuous roll. Invented by Immanuel Nobel (father of the more famous Alfred Nobel). The first such lathes in the United States were set up in the mid-19th century. The product is called wood veneer and it is used for making plywood and as a cosmetic surface veneer on some grades of chipboard. Watchmaking Watchmakers lathes are delicate but precise metalworking lathes, usually without provision for screwcutting, and are still used by horologists for work such as the turning of balance staffs. A handheld tool called a graver, supported by a tool-rest, is often used in preference to a slide-mounted tool. The original watchmaker's turns was a simple dead-center lathe with a moveable rest and two loose head-stocks. The workpiece would be rotated by a bow, typically of horsehair, wrapped around it. Transcription or recording Transcription or recording lathes are used to make grooves on a surface for recording sounds. These were used in creating sound grooves on wax cylinders and then on flat recording discs originally also made of wax, but later as lacquers on a substratum. Originally the cutting lathes were driven by sound vibrations through a horn in a process known as acoustic recording and later driven by an electric current when microphones were first used in sound recording. Many such lathes were professional models, but others were developed for home recording and were common before the advent of home tape recording. Performance National and international standards are used to standardize the definitions, environmental requirements, and test methods used for the performance evaluation of lathes. Election of the standard to be used is an agreement between the supplier and the user and has some significance in the design of the lathe. In the United States, ASME has developed the B5.57 Standard entitled "Methods for Performance Evaluation of Computer Numerically Controlled Lathes and Turning Centers", which establishes requirements and methods for specifying and testing the performance of CNC lathes and turning centers.
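As a footnote to the thread-cutting discussion under Metalworking above: the reason a 127-tooth transposing gear converts exactly between metric and inch pitches is that one inch is exactly 25.4 mm = 127/5 mm, so the prime factor 127 appears in the exact ratio. The sketch below shows the arithmetic with exact fractions; the 100-tooth mate and the 8 TPI leadscrew are illustrative choices, not a statement about any particular lathe.

```python
from fractions import Fraction

inch_in_mm = Fraction(254, 10)            # one inch is exactly 25.4 mm
print(inch_in_mm == Fraction(127, 5))     # True: the factor 127 is unavoidable

# Example: an 8 TPI leadscrew advances 1/8 inch per turn; driving the work through a
# 100:127 change-gear pair gives an exact metric pitch of (1/8) * 25.4 * 100/127 mm.
pitch_mm = Fraction(1, 8) * inch_in_mm * Fraction(100, 127)
print(pitch_mm)                           # 5/2, i.e. exactly 2.5 mm
```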
Technology
Industrial machinery
null
150009
https://en.wikipedia.org/wiki/Twin-lens%20reflex%20camera
Twin-lens reflex camera
A twin-lens reflex camera (TLR) is a type of camera with two objective lenses of the same focal length. One of the lenses is the photographic objective or "taking lens" (the lens that takes the picture), while the other is used for the viewfinder system, which is usually viewed from above at waist level. In addition to the objective, the viewfinder consists of a 45-degree mirror (the reason for the word reflex in the name), a matte focusing screen at the top of the camera, and a pop-up hood surrounding it. The two objectives are connected, so that the focus shown on the focusing screen will be exactly the same as on the film. However, many inexpensive "pseudo" TLRs are fixed-focus models to save on the mechanical complexity. Most TLRs use leaf shutters with shutter speeds up to 1/500 of a second with a bulb setting. For practical purposes, all TLRs are film cameras, most often using 120 film, although there are many examples which used 620 film, 127 film, and 35 mm film. Few general-purpose digital TLR cameras exist, since the heyday of TLR cameras ended long before the era of digital cameras, though they can be adapted with digital backs. In 2015, MiNT Camera released Instantflex TL70, a twin-lens reflex camera that uses Fuji instax mini film. History In traditional cameras, the photographer first viewed the image on a screen of ground glass in the same place that a photographic plate would be placed. After adjusting the camera and closing the objective aperture the ground glass screen was swapped for the photographic plate, and finally the picture could be taken. (Some cameras used this layout as late as the 1960s, for example the Koni-Omegaflex.) With the addition of a second lens and a permanent piece of ground glass, this made it possible for a photographer to snap a picture immediately after focusing the image instead of having to remove and replace the ground glass screen every shot. This advantage of course applies to SLR cameras as well, but early SLR cameras caused delays and inconvenience due to moving the mirror needed for viewfinding out of the optical path to the photographic plate. When this process was automated, the movement of the mirror could cause shake in the camera and blur the image. Using a mirror to allow viewing from above also enabled the camera to be held much more steadily against the body than a camera held with the hands only. The London Stereoscopic Co's "Carlton" model, dating from 1885, is claimed to be the first off-the-shelf TLR camera. A major step forward to mass marketing of the TLR came with the Rolleiflex in 1929, developed by Franke & Heidecke in Germany. The Rolleiflex was widely imitated and most mass-market TLR cameras owe much to its design. It is said that Reinhold Heidecke had the inspiration for the Rollei TLRs while undertaking photography of enemy lines from the German trenches in 1916, when a periscopic approach to focusing and taking photos radically reduced the risk to the photographer from sniper fire. TLRs are still manufactured in Germany by DHW Fototechnik, the successor of Franke & Heidecke, in three versions. Features Higher-end TLRs may have a pop-up magnifying glass to assist the user in focusing the camera. In addition, many have a "sports finder" consisting of a square hole punched in the back of the pop-up hood, and a knock-out in the front. Photographers can sight through these instead of using the matte screen. 
This is especially useful in tracking moving subjects such as animals or race cars, since the image on the matte screen is reversed left-to-right. It is nearly impossible to accurately judge composition with such an arrangement, however. Mamiya's C-Series, introduced in the 1960s, the C-3, C-2, C-33, C-22 and the Mamiya C330 and Mamiya C220 along with their predecessor the Mamiyaflex, are the main conventional TLR cameras to feature truly interchangeable lenses. "Bayonet-mount" TLRs, notably Rolleis & Yashicas, had both wide-angle and tele supplementary front add-ons, with Rollei's Zeiss Mutars being expensive but fairly sharp. Rollei also made separate TLRs having fixed wide-angle or tele lenses: the Tele Rollei and the Rollei Wide, in relatively limited quantities; higher sharpness, more convenient (faster than changing lenses) if one could carry multiple cameras around one's neck, but much more costly than using 1 camera with supplements. The Mamiya TLRs also employ bellows focusing, making extreme closeups possible. Many TLRs used front and back cut-outs in the hinged top hood to provide a quick-action finder for sports and action photography. Late model Rollei Rolleiflex TLRs introduced the widely copied additional feature of a second-mirror "sports finder". When the hinged front hood knock-out is moved to the sports finder position a secondary mirror swings down over the view screen to reflect the image to a secondary magnifier on the back of the hood, just below the direct view cutout. This permits precise focusing while using the sports finder feature. The magnified central image is reversed both top-to-bottom and left-to-right. This feature made Rolleis the leading choice for press photographers during the 1940s to 1960s. Advantages A primary advantage of the TLR is in its mechanical simplicity as compared to the more common single-lens reflex cameras (SLR) cameras. The SLR must employ some method of blocking light from reaching the film during focusing, either with a focal plane shutter (most common) or with the reflex mirror itself. Both methods are mechanically complicated and add significant bulk and weight, especially in medium-format cameras. Because of their mechanical simplicity, TLR cameras are considerably cheaper than SLR cameras of similar optical quality, as well as inherently less prone to mechanical failure. TLRs are practically different from SLR in several respects. First, unlike virtually all film SLRs, TLRs provide a continuous image on the finder screen. The view does not black out during exposure. Since a mirror does not need to be moved out of the way, the picture can be taken much closer to the time the shutter is actuated by the photographer, reducing so-called shutter lag. This trait, and the continuous viewing, made TLRs the preferred camera style for dance photography. The separate viewing lens is also very advantageous for long-exposure photographs. During exposure, an SLR's mirror must be retracted, blacking out the image in the viewfinder. A TLR's mirror is fixed and the taking lens remains open throughout the exposure, letting the photographer examine the image while the exposure is in progress. This can ease the creation of special lighting or transparency effects. Models with leaf shutters within the lens, rather than focal-plane shutters installed inside the camera body, can synchronize with flash at higher speeds than can SLRs. 
Flashes on SLRs usually cannot synchronize accurately when the shutter speed is faster than 1/60th of a second and occasionally 1/125th. Some higher quality DSLRs can synchronize at up to 1/500th of a second. Leaf shutters allow for flash synchronization at all shutter speeds. SLR shutter mechanisms are comparatively noisy. Most TLRs use a leaf shutter in the lens. The only mechanical noise during exposure is from the shutter leaves opening and closing. TLRs are also ideal for candid camera shots where an eye-level camera would be conspicuous. A TLR can be hung on a neck strap and the shutter fired by cable release. Owing to the availability of medium-format cameras and the ease of image composition, the TLR was for many years also preferred by many portrait studios for static poses. Extreme dark photographic filters like the opaque Wratten 87 can be used without problems, as they cover and thus darken only the taking lens. The image in the viewfinder stays bright. Disadvantages Few TLR cameras offered interchangeable lenses and none were made with a zoom lens. In systems with interchangeable lenses, such as the Mamiya, the fixed distance between the lenses sets a hard limit on their size, which precludes the possibility of large aperture long-focus lenses. The lenses are also more expensive because the shutter mechanism is integrated with the lens, not the camera body, so each lens pair must include a shutter. Because the photographer views through one lens but takes the photograph through another, parallax error makes the photograph different from the view on the screen. This difference is negligible when the subject is far away, but is critical for nearby subjects. Parallax compensation may be performed by the photographer in adjustment of the sight line while compensating for the framing change, or for highly repeatable accuracy in tabletop photography (in which the subject might be within a foot (30 cm) of the camera), devices are available that move the camera upwards so that the taking lens goes to the exact position that the viewing lens occupied. (Mamiya's very accurate version was called the Para-mender and mounted on a tripod.) Some TLRs like the Rolleiflex (a notable early example is the Voigtländer Superb of 1933) also came with more or less complex devices to adjust parallax with focusing. It is generally not possible to preview depth of field, as one can with most SLRs, since the TLR's viewing lens usually has no diaphragm. Exceptions to this are the Rolleiflex, the Mamiya 105 D and 105 DS lenses, which have a depth-of-field preview. As the viewfinder of a TLR camera requires the photographer to look down toward the camera, it is inconvenient to frame a photo with a subject that requires the camera to be positioned above the photographer's chest unless a tripod is used. In these cases, the camera may be positioned with the lenses oriented horizontally. Due to the TLR's square format, the composition need not be altered. The image in the waist-level finder is reversed "left to right", which can make framing a photograph difficult, especially for an inexperienced user or with a moving subject. With high-quality TLRs like the Rolleiflex and the Mamiya C220/C330, the waist-level finder can be replaced by an eye-level finder, using a roof pentaprism or pentamirror to correct the image while making it viewable through an eyepiece at the rear of the camera. The design of the leaf shutter limits almost all TLRs to a maximum shutter speed between 1/100 and 1/500 of a second. 
Certain photographic filters are inconvenient without line of sight through the taking lens; notably, graduated neutral density filters are hard to use with a TLR, as there is no easy way to position the filter accurately. Film formats 6×6 format The typical TLR is medium format, using 120 roll film with square images. Presently, the Chinese Seagull Camera is still in production along with Lomography's Lubitel, but in the past, many manufacturers made them. DHW-Fototechnik GmbH continues to make the Rolleiflex TLR as well. The Ciro-flex, produced by Ciro Cameras Inc., rose dramatically in popularity due in large part to the inability to obtain the German Rollei TLRs during World War II. The Ciro-flex was widely accessible, inexpensive, and produced high-quality images. Models with the Mamiya, Minolta and Yashica brands are common on the used-camera market, and many other companies made TLRs that are now classics. The Mamiya C series TLRs had interchangeable lenses, allowing focal lengths from 55 mm (wide angle) to 250 mm (telephoto) to be used. The bellows focusing of these models also allowed extreme closeups to be taken, something difficult or impossible with most TLRs. The simple, sturdy construction of many TLRs means they have tended to endure the years well. Many low-end cameras used cheap shutters, however, and the slow speeds on these often stick or are inaccurate. 127 format There were smaller TLR models, using 127 roll film with square images, most famously the "Baby" Rolleiflex and the Yashica 44. The TLR design was also popular in the 1950s for inexpensive fixed-focus cameras such as the Kodak Duaflex and Argus 75. 35 mm format Though most used medium format film, a few 35 mm TLRs were made, the very expensive Contaflex TLR being the most elaborate, with interchangeable lenses and removable backs. The LOMO Lubitel 166+, a natively medium format camera, comes with an adapter for 35 mm film, as do most Rolleiflex models with their respective Rolleikin 35 mm adapter. Furthermore, the Yashica 635 was made specifically for use with 120 and 135 film and was shipped with the appropriate adapters. Instant film format The only twin-lens reflex camera that uses instant film is the Instantflex TL70, manufactured by MiNT Camera, which is compatible with Fuji Instax Mini film (film size , picture size ). It is the world's first instant twin-lens reflex camera. Subminiature format The Gemflex is a subminiature twin-lens reflex camera made by Showa Optica Works (昭和光学精機) in occupied Japan in the 1950s, using 17.5 mm paper-backed roll film. The Gemflex resembles the well-known Rolleiflex 6×6 twin-lens reflex, but is much smaller in size; its body is die-cast from shatterproof metal. The smallest TLR camera using 35 mm film is the Swiss-made Tessina, which uses perforated 35 mm film reloaded into a special Tessina cassette, forming images of . The Goerz Minicord twin-lens reflex made its picture format on double-perforated 16 mm film in a metal cassette; it had a six-element Goerz Helgor f/2 lens and a metal focal-plane shutter with settings of B, 10, 25, 50, 100, and 400, and its viewing lens uses pentaprism reflex optics. Minox rebadged the Sharan Rolleiflex 2.8F classic retro TLR film camera, a 1/3-scale 6×6 Rolleiflex TLR using Minox cassettes (image size ), with a 15 mm f/5.6 glass triplet lens and a mechanical shutter of 1/250 sec. It has been argued that the medical gastroscopy camera, the Olympus Gastro Camera, is technically the smallest TLR device.
Technology
Photography
null
150042
https://en.wikipedia.org/wiki/Nene%20%28bird%29
Nene (bird)
The nene (Branta sandvicensis), also known as the nēnē or the Hawaiian goose, is a species of bird endemic to the Hawaiian Islands. The nene is exclusively found in the wild on the islands of Oahu, Maui, Kauai, Molokai, and Hawaii. In 1957, it was designated as the official state bird of the state of Hawaii. The Hawaiian name nēnē comes from its soft call. The specific name sandvicensis refers to the Sandwich Islands, a former name for the Hawaiian Islands. Taxonomy The holotype specimen of Anser sandvicensis Vigors (List Anim. Garden Zool. Soc., ed.3, June 1833, p.4.) is held in the vertebrate zoology collection at World Museum, National Museums Liverpool, with accession number NML-VZ T12706. The specimen was collected from the Sandwich Islands (Hawaiian Islands) and came to the Liverpool national collection via the Museum of the Zoological Society of London collection, Thomas Campbell Eyton’s collection, and Henry Baker Tristram’s collection. It is thought that the nene evolved from the Canada goose (Branta canadensis), which most likely arrived on the Hawaiian islands about 500,000 years ago, shortly after the island of Hawaii was formed. The Canada goose is also the ancestor of the prehistoric giant Hawaii goose (Branta rhuax) and the nēnē-nui (Branta hylobadistes). The nēnē-nui was larger than the nene, varied from flightless to flighted depending on the individual, and inhabited the island of Maui. Similar fossil geese found on Oahu and Kauai may be of the same species. The giant Hawaii goose was restricted to the island of Hawaii and measured in length with a mass of , making it more than four times larger than the nene. It is believed that the herbivorous giant Hawaii goose occupied the same ecological niche as the goose-like ducks known as moa-nalo, which were not present on the Big Island. Based on mitochondrial DNA found in fossils, all Hawaiian geese, living and extinct, are closely related to the giant Canada goose (B. c. maxima) and dusky Canada goose (B. c. occidentalis). Description The nene is a large-sized goose at tall. Although they spend most of their time on the ground, they are capable of flight, with some individuals flying daily between nesting and feeding areas. Females have a mass of , while males average , 11% larger than females. Adult males have a black head and hindneck, buff cheeks and heavily furrowed neck. The neck has black and white diagonal stripes. Aside from being smaller, the female Nene is similar to the male in colouration. The adult's bill, legs and feet are black. It has soft feathers under its chin. Goslings resemble adults, but are a duller brown and with less demarcation between the colors of the head and neck, and striping and barring effects are much reduced. Habitat and range The nene is an inhabitant of shrubland, grassland, coastal dunes, and lava plains, and related anthropogenic habitats such as pasture and golf courses from sea level to as much as . Some populations migrated between lowland breeding grounds and montane foraging areas. The nene could at one time be found on the islands of Hawaii, Maui, Kahoolawe, Lānai, Molokai, Oʻahu and Kauai. Today, its range is restricted to Hawaii, Maui, Molokai, and Kauai. A pair arrived at the James Campbell National Wildlife Refuge on Oʻahu in January 2014; two of their offspring survived and are seen regularly on the nearby golf courses at Turtle Bay Resort. 
Ecology and behavior Breeding The breeding season of the nene, from August to April, is longer than that of any other goose; most eggs are laid between November and January. Unlike most other waterfowl, the nene mates on land. Nests are built by females on a site of her choosing, in which one to five eggs are laid (average is three on Maui and Hawaii, four on Kauai). Females incubate the eggs for 29 to 32 days, while the male acts as a sentry. Goslings are precocial, able to feed on their own; they remain with their parents until the following breeding season. Diet The nene is a herbivore that will either graze or browse, depending on the availability of vegetation. Food items include the leaves, seeds, fruit, and flowers of grasses and shrubs. Conservation The nene population stands at 3,862 birds, making it the world's rarest goose. It is believed that it was once common, with approximately 25,000 Hawaiian geese living in Hawaii when Captain James Cook arrived in 1778. Hunting and introduced predators, such as small Indian mongooses, pigs, and feral cats, reduced the population to 30 birds by 1952. The species breeds well in captivity, and has been successfully re-introduced. In 2004, it was estimated that there were 800 birds in the wild, as well as 1,000 in wildfowl collections and zoos. There is concern about inbreeding due to the small initial population of birds. The nature reserve WWT Slimbridge, in England, was instrumental in the successful breeding of Hawaiian geese in captivity. Under the direction of conservationist Peter Scott, it was bred back from the brink of extinction during the 1950s for later re-introduction into the wild in Hawaii. There are still Hawaiian geese at Slimbridge today. They can now be found in captivity in multiple WWT centres. Successful introductions include Haleakalā and Piiholo ranches on Maui. NatureServe considers the species Imperiled.
Biology and health sciences
Anseriformes
Animals
150116
https://en.wikipedia.org/wiki/Black%20pepper
Black pepper
Black pepper (Piper nigrum) is a flowering vine in the family Piperaceae, cultivated for its fruit (the peppercorn), which is usually dried and used as a spice and seasoning. The fruit is a drupe (stonefruit) which is about in diameter (fresh and fully mature), dark red, and contains a stone which encloses a single pepper seed. Peppercorns and the ground pepper derived from them may be described simply as pepper, or more precisely as black pepper (cooked and dried unripe fruit), green pepper (dried unripe fruit), or white pepper (ripe fruit seeds). Black pepper is native to the Malabar Coast of India, and the Malabar pepper is extensively cultivated there and in other tropical regions. Ground, dried, and cooked peppercorns have been used since antiquity, both for flavour and as a traditional medicine. Black pepper is the world's most traded spice, and is one of the most common spices added to cuisines around the world. Its spiciness is due to the chemical compound piperine, which is a different kind of spiciness from that of capsaicin characteristic of chili peppers. It is ubiquitous in the Western world as a seasoning, and is often paired with salt and available on dining tables in shakers or mills. Etymology The word pepper derives from Old English pipor, Latin piper, and . The Greek likely derives from Dravidian pippali, meaning "long pepper". Sanskrit pippali shares the same meaning. In the 16th century, people began using pepper to also mean the New World chili pepper (genus Capsicum), which is not closely related. Varieties Processed peppercorns come in a variety of colours, any one of which may be used in food preparation, especially common peppercorn sauce. Black pepper Black pepper is produced from the still-green, unripe drupe of the pepper plant. The drupes are cooked briefly in hot water, both to clean them and to prepare them for drying. The heat ruptures cell walls in the pepper, accelerating enzymes that cause browning during drying. The pepper drupes can also be dried in the sun or by machine for several days, during which the pepper skin around the seed shrinks and darkens into a thin, wrinkled black layer containing melanoidin. Once dry, the spice is called black peppercorn. After the peppercorns are dried, pepper powder for culinary uses is obtained by crushing the berries, which may also yield an essential oil by extraction. White pepper White pepper consists solely of the seed of the ripe fruit of the pepper plant, with the thin darker-coloured skin (flesh) of the fruit removed. This is usually accomplished by a process known as retting, where fully ripe red pepper berries are soaked in water for about a week so the flesh of the peppercorn softens and decomposes; rubbing then removes what remains of the fruit, and the naked seed is dried. Sometimes the outer layer is removed from the seed through other mechanical, chemical, or biological methods. Ground white pepper is commonly used in Chinese, Thai, and Portuguese cuisines. It finds occasional use in other cuisines in salads, light-coloured sauces, and mashed potatoes as a substitute for black pepper, because black pepper would visibly stand out. However, white pepper lacks certain compounds present in the outer layer of the drupe, resulting in a different overall flavour. Green pepper Green pepper, like black pepper, is made from unripe drupes. Dried green peppercorns are treated in a way that retains the green colour, such as with sulfur dioxide, canning, or freeze-drying. 
Pickled peppercorns, also green, are unripe drupes preserved in brine or vinegar. Fresh, unpreserved green pepper drupes are used in some cuisines like Thai cuisine and Tamil cuisine. Their flavour has been described as "spicy and fresh", with a "bright aroma." They decay quickly if not dried or preserved, making them unsuitable for international shipping. Red peppercorns Red peppercorns usually consist of ripe peppercorn drupes preserved in brine and vinegar. Ripe red peppercorns can also be dried using the same colour-preserving techniques used to produce green pepper. Pink pepper and other plants Pink peppercorns are the fruits of the Peruvian pepper tree, Schinus molle, or its relative, the Brazilian pepper tree, Schinus terebinthifolius, plants from a different family (Anacardiaceae). As they are members of the cashew family, they may cause allergic reactions, including anaphylaxis, for persons with a tree nut allergy. The bark of Drimys winteri ("canelo" or "winter's bark") is used as a substitute for pepper in cold and temperate regions of Chile and Argentina, where it is easily found and readily available. In New Zealand, the seeds of kawakawa (Piper excelsum), a relative of black pepper, are sometimes used as pepper; the leaves of Pseudowintera colorata ("mountain horopito") are another replacement for pepper. Several plants in the United States are also used as pepper substitutes, such as field pepperwort, least pepperwort, shepherd's purse, horseradish, and field pennycress. Plants The pepper plant is a perennial woody vine growing up to in height on supporting trees, poles, or trellises. It is a spreading vine, rooting readily where trailing stems touch the ground. The leaves are alternate, entire, long and across. The flowers are small, produced on pendulous spikes long at the leaf nodes, the spikes lengthening up to as the fruit matures. Pepper can be grown in soil that is neither too dry nor susceptible to flooding, moist, well-drained, and rich in organic matter. The vines do not do well over an altitude of above sea level. The plants are propagated by cuttings about long, tied up to neighbouring trees or climbing frames at distances of about apart; trees with rough bark are favoured over those with smooth bark, as the pepper plants climb rough bark more readily. Competing plants are cleared away, leaving only sufficient trees to provide shade and permit free ventilation. The roots are covered in leaf mulch and manure, and the shoots are trimmed twice a year. On dry soils, the young plants require watering every other day during the dry season for the first three years. The plants bear fruit from the fourth or fifth year, and then typically for seven years. The cuttings are usually cultivars, selected both for yield and quality of fruit. A single stem bears 20 to 30 fruiting spikes. The harvest begins as soon as one or two fruits at the base of the spikes begin to turn red, and before the fruit is fully mature, and still hard; if allowed to ripen completely, the fruits lose pungency, and ultimately fall off and are lost. The spikes are collected and spread out to dry in the sun, then the peppercorns are stripped off the spikes. Black pepper is native either to Southeast Asia or South Asia. Within the genus Piper, it is most closely related to other Asian species such as P. caninum. Wild pepper grows in the Western Ghats region of India. 
Into the 19th century, the forests contained expansive wild pepper vines, as recorded by the Scottish physician Francis Buchanan (also a botanist and geographer) in his book A journey from Madras through the countries of Mysore, Canara and Malabar (Volume III). However, deforestation resulted in wild pepper growing in more limited forest patches from Goa to Kerala, with the wild source gradually decreasing as the quality and yield of the cultivated variety improved. No successful grafting of commercial pepper on wild pepper has been achieved to date. Production and trade In 2020, Vietnam was the world's largest producer and exporter of black peppercorns, producing 270,192 tonnes or 36% of the world total (table). Other major producers were Brazil, Indonesia, India, Sri Lanka, China, and Malaysia. Global pepper production varies annually according to crop management, disease, and weather. Peppercorns are among the most widely traded spice in the world, accounting for 20% of all spice imports. History Black pepper is native to South Asia and Southeast Asia, and has been known to Indian cooking since at least 2000 BCE. J. Innes Miller notes that while pepper was grown in southern Thailand and in Malaysia, its most important source was India, particularly the Malabar Coast, in what is now the state of Kerala. The lost ancient port city of Muziris of the Chera Dynasty, famous for exporting black pepper and various other spices, is mentioned in a number of classical historical sources for its trade with Roman Empire, Egypt, Mesopotamia, Levant, and Yemen. Peppercorns were a much-prized trade good, often referred to as "black gold" and used as a form of commodity money. The legacy of this trade remains in some Western legal systems that recognize the term "peppercorn rent" as a token payment for something that is, essentially, a gift. The ancient history of black pepper is often interlinked with (and confused with) that of long pepper, the dried fruit of closely related Piper longum. The Romans knew of both and often referred to either as just piper. In fact, the popularity of long pepper did not entirely decline until the discovery of the New World and of chili peppers. Chili peppers—some of which, when dried, are similar in shape and taste to long pepper—were easier to grow in a variety of locations more convenient to Europe. Before the 16th century, pepper was being grown in Java, Sunda, Sumatra, Madagascar, Malaysia, and everywhere in Southeast Asia. These areas traded mainly with China, or used the pepper locally. Ports in the Malabar area also served as a stop-off point for much of the trade in other spices from farther east in the Indian Ocean. The Maluku Islands, historically known as the "Spice Islands," are a region in Indonesia known for producing nutmeg, mace, cloves, and pepper, and were a major source of these spices in the world. The presence of these spices in the Maluku Islands sparked European interest to buy them directly in the 16th century. Ancient times Black peppercorns were found stuffed in the nostrils of Ramesses II, placed there as part of the mummification rituals shortly after his death in 1213 BCE. Little else is known about the use of pepper in ancient Egypt and how it reached the Nile from the Malabar Coast of India. Pepper (both long and black) was known in Greece at least as early as the fourth century BCE, though it was probably an uncommon and expensive item that only the very rich could afford. 
By the time of the early Roman Empire, especially after Rome's conquest of Egypt in 30 BCE, open-ocean crossing of the Arabian Sea direct to Chera dynasty southern India's Malabar Coast was near routine. Details of this trading across the Indian Ocean have been passed down in the Periplus of the Erythraean Sea. According to the Greek geographer Strabo, the early empire sent a fleet of around 120 ships on an annual trip to India and back. The fleet timed its travel across the Arabian Sea to take advantage of the predictable monsoon winds. Returning from India, the ships travelled up the Red Sea, from where the cargo was carried overland or via the Nile-Red Sea canal to the Nile River, barged to Alexandria, and shipped from there to Italy and Rome. The rough geographical outlines of this same trade route would dominate the pepper trade into Europe for a millennium and a half to come. With ships sailing directly to the Malabar coast, Malabar black pepper was now travelling a shorter trade route than long pepper, and the prices reflected it. Pliny the Elder's Natural History tells us the prices in Rome around 77 CE: "Long pepper ... is 15 denarii per pound, while that of white pepper is seven, and of black, four." Pliny also complains, "There is no year in which India does not drain the Roman Empire of 50 million sesterces", and further moralizes on pepper: He does not state whether the 50 million was the actual amount of money which found its way to India or the total retail cost of the items in Rome, and, elsewhere, he cites a figure of 100 million sesterces. Black pepper was a well-known and widespread, if expensive, seasoning in the Roman Empire. Apicius' De re coquinaria, a third-century cookbook probably based at least partly on one from the first century CE, includes pepper in a majority of its recipes. Edward Gibbon wrote, in The History of the Decline and Fall of the Roman Empire, that pepper was "a favorite ingredient of the most expensive Roman cookery". Postclassical Europe Pepper was so valuable that it was often used as collateral or even currency. The taste for pepper (or the appreciation of its monetary value) was passed on to those who would see Rome fall. Alaric, king of the Visigoths, included 3,000 pounds of pepper as part of the ransom he demanded from Rome when he besieged the city in the fifth century. After the fall of Rome, others took over the middle legs of the spice trade, first the Persians and then the Arabs; Innes Miller cites the account of Cosmas Indicopleustes, who travelled east to India, as proof that "pepper was still being exported from India in the sixth century". By the end of the Early Middle Ages, the central portions of the spice trade were firmly under Islamic control. Once into the Mediterranean, the trade was largely monopolized by Italian powers, especially Venice and Genoa. The rise of these city-states was funded in large part by the spice trade. A riddle authored by Saint Aldhelm, a seventh-century Bishop of Sherborne, sheds some light on black pepper's role in England at that time: It is commonly believed that during the Middle Ages, pepper was often used to conceal the taste of partially rotten meat. No evidence supports this claim, and historians view it as highly unlikely; in the Middle Ages, pepper was a luxury item, affordable only to the wealthy, who certainly had unspoiled meat available, as well. In addition, people of the time certainly knew that eating spoiled food would make them sick. 
Similarly, the belief that pepper was widely used as a preservative is questionable; it is true that piperine, the compound that gives pepper its spiciness, has some antimicrobial properties, but at the concentrations present when pepper is used as a spice, the effect is small. Salt is a much more effective preservative, and salt-cured meats were common fare, especially in winter. However, pepper and other spices certainly played a role in improving the taste of long-preserved meats. Archaeological evidence of pepper consumption in late medieval Northern Europe comes from excavations on the Danish-Norwegian flagship, Gribshunden, which sank in the summer of 1495. In 2021, archaeologists recovered more than 2000 peppercorns from the wreck, along with a variety of other spices and exotic foodstuffs including clove, ginger, saffron, and almond. The ship was carrying King Hans to a political summit at the time of its loss. The spices were likely intended for feasts at the summit, which would have included the Danish, Norwegian, and Swedish Councils of State. Pepper's exorbitant price during the Middle Ages – and the monopoly on the trade held by Venice – was one of the inducements that led the Portuguese to seek a sea route to India. In 1498, Vasco da Gama became the first person to reach India by sailing around Africa (see Age of Discovery); asked by Arabs in Calicut (who spoke Spanish and Italian) why they had come, his representative replied, "we seek Christians and spices". Though this first trip to India by way of the southern tip of Africa was only a modest success, the Portuguese quickly returned in greater numbers and eventually gained much greater control of trade on the Arabian Sea. The 1494 Treaty of Tordesillas had granted Portugal exclusive rights to the half of the world where black pepper originated. In the event, the Portuguese managed to monopolize the spice trade for only about 150 years. Portuguese even became the lingua franca of the then-known world, and the spice trade made Portugal rich. In the 17th century, however, the Portuguese lost most of their valuable Indian Ocean trade to the Dutch and the English, who, taking advantage of the Spanish rule over Portugal during the Iberian Union (1580–1640), occupied by force almost all Portuguese interests in the area. The pepper ports of Malabar began to trade increasingly with the Dutch in the period 1661–1663. As pepper supplies into Europe increased, the price of pepper declined (though the total value of the import trade generally did not). Pepper, which in the early Middle Ages had been an item exclusively for the rich, started to become more of an everyday seasoning among those of more average means. Today, pepper accounts for one-fifth of the world's spice trade. China It is possible that black pepper was known in China in the second century BCE, if poetic reports regarding an explorer named Tang Meng (唐蒙) are correct. Sent by Emperor Wu to what is now south-west China, Tang Meng is said to have come across something called jujiang or "sauce-betel". He was told it came from the markets of Shu, an area in what is now the Sichuan province. The traditional view among historians is that "sauce-betel" is a sauce made from betel leaves, but arguments have been made that it actually refers to pepper, either long or black. In the third century CE, black pepper made its first definite appearance in Chinese texts, as hujiao or "foreign pepper". 
It does not appear to have been widely known at the time, failing to appear in a fourth-century work describing a wide variety of spices from beyond China's southern border, including long pepper. By the 12th century, however, black pepper had become a popular ingredient in the cuisine of the wealthy and powerful, sometimes taking the place of China's native Sichuan pepper (the tongue-numbing dried fruit of an unrelated plant). Marco Polo testifies to pepper's popularity in 13th-century China, when he relates what he is told of its consumption in the city of Kinsay (Hangzhou): "... Messer Marco heard it stated by one of the Great Kaan's officers of customs that the quantity of pepper introduced daily for consumption into the city of Kinsay amounted to 43 loads, each load being equal to 223 lbs." During the course of the Ming treasure voyages in the early 15th century, Admiral Zheng He and his expeditionary fleets returned with such a large amount of black pepper that the once-costly luxury became a common commodity. Traditional medicine, phytochemicals, and research Like many eastern spices, pepper was historically both a seasoning and a traditional medicine. Pepper appears in the Buddhist Samaññaphala Sutta, chapter five, as one of the few medicines a monk is allowed to carry. Long pepper, being stronger, was often the preferred medication, but both were used. Black pepper (or perhaps long pepper) was believed to cure several illnesses, such as constipation, insomnia, oral abscesses, sunburn, and toothaches, among others. Pepper contains phytochemicals, including amides, piperidines, and pyrrolidines. Pepper is known to cause sneezing. Some sources say that piperine, a substance present in black pepper, irritates the nostrils, causing the sneezing. Few, if any, controlled studies have been carried out to answer the question. Nutrition One tablespoon (6 grams) of ground black pepper contains moderate amounts of vitamin K (13% of the daily value or DV), iron (10% DV), and manganese (18% DV), with trace amounts of other essential nutrients, protein, and dietary fibre. Flavour Pepper gets its spicy heat mostly from piperine derived from both the outer fruit and the seed. Black pepper contains between 4.6 and 9.7% piperine by mass, and white pepper slightly more than that. Refined piperine, by weight, is about one percent as hot as the capsaicin found in chili peppers. The outer fruit layer, left on black pepper, also contains aroma-contributing terpenes, including germacrene (11%), limonene (10%), pinene (10%), alpha-phellandrene (9%), and beta-caryophyllene (7%), which give citrusy, woody, and floral notes. These scents are mostly missing in white pepper, as the fermentation and other processing removes the fruit layer (which also contains some of the spicy piperine). Other flavours also commonly develop in this process, some of which are described as off-flavours when in excess: Primarily 3-methylindole (pig manure-like), 4-methylphenol (horse manure), 3-methylphenol (phenolic), and butyric acid (cheese). The aroma of pepper is attributed to rotundone (3,4,5,6,7,8-Hexahydro-3α,8α-dimethyl-5α-(1-methylethenyl)azulene-1(2H)-one), a sesquiterpene originally discovered in the tubers of Cyperus rotundus, which can be detected in concentrations of 0.4 nanograms/l in water and in wine: rotundone is also present in marjoram, oregano, rosemary, basil, thyme, and geranium, as well as in some Shiraz wines. 
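As a rough arithmetic illustration of the figures above, the sketch below estimates the piperine content of a serving of ground black pepper and expresses it as a capsaicin-equivalent mass using the approximate one-percent relative pungency; the six-gram tablespoon is simply the serving size from the nutrition paragraph, and the result is an order-of-magnitude estimate, not a measured value.

# Rough estimate of piperine content in a serving of ground black pepper.
# The 4.6-9.7% range and the ~1% relative pungency are taken from the text;
# the 6 g serving is the tablespoon mentioned in the nutrition paragraph.
serving_g = 6.0
piperine_fraction_low, piperine_fraction_high = 0.046, 0.097
relative_pungency = 0.01  # piperine vs. capsaicin, by weight

piperine_low = serving_g * piperine_fraction_low
piperine_high = serving_g * piperine_fraction_high
print(f"Piperine per tablespoon: {piperine_low:.2f} to {piperine_high:.2f} g")

# Expressed as a capsaicin-equivalent mass for a similar perceived heat:
print(f"Capsaicin equivalent: {piperine_low*relative_pungency*1000:.1f} to "
      f"{piperine_high*relative_pungency*1000:.1f} mg")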
Pepper loses flavour and aroma through evaporation, so airtight storage helps preserve its spiciness longer. Pepper can also lose flavour when exposed to light, which can transform piperine into nearly tasteless isochavicine. Once ground, pepper's aromatics can evaporate quickly; most culinary sources recommend grinding whole peppercorns immediately before use for this reason. Handheld pepper mills or grinders, which mechanically grind or crush whole peppercorns, are used for this as an alternative to pepper shakers that dispense ground pepper. Spice mills such as pepper mills were found in European kitchens as early as the 14th century, but the mortar and pestle used earlier for crushing pepper have remained a popular method for centuries, as well. Enhancing the flavour profile of peppercorns (including piperine and essential oils), prior to processing, has been attempted through the postharvest application of ultraviolet-C light (UV-C).
Biology and health sciences
Magnoliids
null
150159
https://en.wikipedia.org/wiki/Noether%27s%20theorem
Noether's theorem
Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law. This is the first of two theorems (see Noether's second theorem) published by mathematician Emmy Noether in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries of physical space. Noether's theorem is used in theoretical physics and the calculus of variations. It reveals the fundamental relation between the symmetries of a physical system and the conservation laws. It also made modern theoretical physicists much more focused on symmetries of physical systems. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g., systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law. Basic illustrations and background As an illustration, if a physical system behaves the same regardless of how it is oriented in space (that is, it's invariant), its Lagrangian is symmetric under continuous rotation: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that are symmetric. As another example, if a physical process exhibits the same outcomes regardless of place or time, then its Lagrangian is symmetric under continuous translations in space and time respectively: by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively. Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities (invariants) from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants, to describe a physical system. As an illustration, suppose that a physical theory is proposed which conserves a quantity X. A researcher can calculate the types of Lagrangians that conserve X through a continuous symmetry. Due to Noether's theorem, the properties of these Lagrangians provide further criteria to understand the implications and judge the fitness of the new theory. There are numerous versions of Noether's theorem, with varying degrees of generality. There are natural quantum counterparts of this theorem, expressed in the Ward–Takahashi identities. Generalizations of Noether's theorem to superspaces also exist. Informal statement of the theorem All fine technical points aside, Noether's theorem can be stated informally as: A more sophisticated version of the theorem involving fields states that: The word "symmetry" in the above statement refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria. 
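The rotational-symmetry illustration above can also be checked numerically: for a particle moving in any central potential the Lagrangian is invariant under rotations, so its angular momentum should remain constant along any solution of the equations of motion. The following sketch assumes an inverse-square central force and a simple velocity Verlet integrator; it is only a numerical illustration of the conservation law, not part of the theorem's proof.

import numpy as np

# Particle in a central potential V(r) = -k/r (rotationally symmetric Lagrangian).
# Noether's theorem predicts L_z = m (x v_y - y v_x) is conserved.
m, k, dt, steps = 1.0, 1.0, 1e-4, 200_000
pos = np.array([1.0, 0.0])
vel = np.array([0.3, 0.9])          # arbitrary initial conditions

def accel(p):
    r = np.linalg.norm(p)
    return -k * p / (m * r**3)      # central force, directed toward the origin

Lz = []
a = accel(pos)
for _ in range(steps):              # velocity Verlet integration
    vel_half = vel + 0.5 * dt * a
    pos = pos + dt * vel_half
    a = accel(pos)
    vel = vel_half + 0.5 * dt * a
    Lz.append(m * (pos[0] * vel[1] - pos[1] * vel[0]))

# The drift is essentially zero: L_z is conserved up to floating-point round-off.
print(f"L_z drift over the run: {max(Lz) - min(Lz):.2e}")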
The conservation law of a physical quantity is usually expressed as a continuity equation. The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern terminology, the conserved quantity is called the Noether charge, while the flow carrying that charge is called the Noether current. The Noether current is defined up to a solenoidal (divergenceless) vector field. In the context of gravitation, Felix Klein's statement of Noether's theorem for action I stipulates for the invariants: Brief illustration and overview of the concept The main idea behind Noether's theorem is most easily illustrated by a system with one coordinate and a continuous symmetry (gray arrows on the diagram). Consider any trajectory (bold on the diagram) that satisfies the system's laws of motion. That is, the action governing this system is stationary on this trajectory, i.e. does not change under any local variation of the trajectory. In particular it would not change under a variation that applies the symmetry flow on a time segment and is motionless outside that segment. To keep the trajectory continuous, we use "buffering" periods of small time to transition between the segments gradually. The total change in the action now comprises changes brought by every interval in play. Parts, where variation itself vanishes, i.e outside bring no . The middle part does not change the action either, because its transformation is a symmetry and thus preserves the Lagrangian and the action . The only remaining parts are the "buffering" pieces. In these regions both the coordinate and velocity change, but changes by , and the change in the coordinate is negligible by comparison since the time span of the buffering is small (taken to the limit of 0), so . So the regions contribute mostly through their "slanting" . That changes the Lagrangian by , which integrates to These last terms, evaluated around the endpoints and , should cancel each other in order to make the total change in the action be zero, as would be expected if the trajectory is a solution. That is meaning the quantity is conserved, which is the conclusion of Noether's theorem. For instance if pure translations of by a constant are the symmetry, then the conserved quantity becomes just , the canonical momentum. More general cases follow the same idea: Historical context A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion – it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) is zero, Such quantities are said to be conserved; they are often called constants of motion (although motion per se need not be involved, just evolution in time). For example, if the energy of a system is conserved, its energy is invariant at all times, which imposes a constraint on the system's motion and may help in solving for it. Aside from insights that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the suitable conservation laws. The earliest constants of motion discovered were momentum and kinetic energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. 
Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's laws of motion. According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress–energy tensor (non-gravitational stress–energy) and the Landau–Lifshitz stress–energy–momentum pseudotensor (gravitational stress–energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress–energy tensor. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, is the Laplace–Runge–Lenz vector. In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering invariants. A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L where the dot over q signifies the rate of change of the coordinates q, Hamilton's principle states that the physical path q(t)—the one actually taken by the system—is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations, Thus, if one of the coordinates, say qk, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side requires that where the momentum is conserved throughout the motion (on the physical path). Thus, the absence of the ignorable coordinate qk from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of qk; the Lagrangian is invariant, and is said to exhibit a symmetry under such transformations. This is the seed idea generalized in Noether's theorem. Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory of canonical transformations which allowed changing coordinates so that some coordinates disappeared from the Lagrangian, as above, resulting in conserved canonical momenta. Another approach, and perhaps the most efficient for finding conserved quantities, is the Hamilton–Jacobi equation. Emmy Noether's work on the invariance theorem began in 1915 when she was helping Felix Klein and David Hilbert with their work related to Albert Einstein's theory of general relativity By March 1918 she had most of the key ideas for the paper which would be published later in the year. Mathematical expression Simple form using perturbations The essence of Noether's theorem is generalizing the notion of ignorable coordinates. One can assume that the Lagrangian L defined above is invariant under small perturbations (warpings) of the time variable t and the generalized coordinates q. One may write where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged; labelled by an index r = 1, 2, 3, ..., N. 
Then the resultant perturbation can be written as a linear sum of the individual types of perturbations, where εr are infinitesimal parameter coefficients corresponding to each: generator Tr of time evolution, and generator Qr of the generalized coordinates. For translations, Qr is a constant with units of length; for rotations, it is an expression linear in the components of q, and the parameters make up an angle. Using these definitions, Noether showed that the N quantities are conserved (constants of motion). Examples I. Time invariance For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t → t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H II. Translational invariance Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate qk; so it is invariant (symmetric) under changes qk → qk + δqk. In that case, N = 1, T = 0, and Qk = 1; the conserved quantity is the corresponding linear momentum pk In special and general relativity, these two conservation laws can be expressed either globally (as it is done above), or locally as a continuity equation. The global versions can be united into a single global conservation law: the conservation of the energy-momentum 4-vector. The local versions of energy and momentum conservation (at any point in space-time) can also be united, into the conservation of a quantity defined locally at the space-time point: the stress–energy tensor(this will be derived in the next section). III. Rotational invariance The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart. It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation Since time is not being transformed, T = 0, and N = 1. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by Then Noether's theorem states that the following quantity is conserved, In other words, the component of the angular momentum L along the n axis is conserved. And if n is arbitrary, i.e., if the system is insensitive to any rotation, then every component of L is conserved; in short, angular momentum is conserved. Field theory version Although useful in its own right, the version of Noether's theorem just given is a special case of the general version derived in 1915. To give the flavor of the general theorem, a version of Noether's theorem for continuous fields in four-dimensional space–time is now given. Since field theory problems are more common in modern physics than mechanics problems, this field theory version is the most commonly used (or most often implemented) version of Noether's theorem. Let there be a set of differentiable fields defined over all space and time; for example, the temperature would be representative of such a field, being a number defined at every place and time. 
The principle of least action can be applied to such fields, but the action is now an integral over space and time (the theorem can be further generalized to the case where the Lagrangian depends on up to the nth derivative, and can also be formulated using jet bundles). A continuous transformation of the fields can be written infinitesimally as where is in general a function that may depend on both and . The condition for to generate a physical symmetry is that the action is left invariant. This will certainly be true if the Lagrangian density is left invariant, but it will also be true if the Lagrangian changes by a divergence, since the integral of a divergence becomes a boundary term according to the divergence theorem. A system described by a given action might have multiple independent symmetries of this type, indexed by so the most general symmetry transformation would be written as with the consequence For such systems, Noether's theorem states that there are conserved current densities (where the dot product is understood to contract the field indices, not the index or index). In such cases, the conservation law is expressed in a four-dimensional way which expresses the idea that the amount of a conserved quantity within a sphere cannot change unless some of it flows out of the sphere. For example, electric charge is conserved; the amount of charge within a sphere cannot change unless some of the charge leaves the sphere. For illustration, consider a physical system of fields that behaves the same under translations in time and space, as considered above; in other words, is constant in its third argument. In that case, N = 4, one for each dimension of space and time. An infinitesimal translation in space, (with denoting the Kronecker delta), affects the fields as : that is, relabelling the coordinates is equivalent to leaving the coordinates in place while translating the field itself, which in turn is equivalent to transforming the field by replacing its value at each point with the value at the point "behind" it which would be mapped onto by the infinitesimal displacement under consideration. Since this is infinitesimal, we may write this transformation as The Lagrangian density transforms in the same way, , so and thus Noether's theorem corresponds to the conservation law for the stress–energy tensor Tμν, where we have used in place of . To wit, by using the expression given earlier, and collecting the four conserved currents (one for each ) into a tensor , Noether's theorem gives with (we relabelled as at an intermediate step to avoid conflict). (However, the obtained in this way may differ from the symmetric tensor used as the source term in general relativity; see Canonical stress–energy tensor.) The conservation of electric charge, by contrast, can be derived by considering Ψ linear in the fields φ rather than in the derivatives. In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field φ, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|2 can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|2 unchanged, such as a complex rotation. In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the Ψ are equal to iψ and −iψ*, respectively. 
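A small numerical illustration of this phase symmetry, using the ordinary one-dimensional Schrödinger equation rather than a relativistic field, is sketched below. Because the dynamics are invariant under a global phase rotation of ψ, the associated Noether charge, here the total probability, should remain constant under time evolution; the harmonic potential, grid size and split-step integrator are arbitrary choices made for the demonstration.

import numpy as np

# 1-D Schrödinger equation with a harmonic potential, evolved by the split-step
# Fourier method. Global phase rotations psi -> exp(i*theta)*psi leave the
# dynamics invariant, and the corresponding Noether charge is the total
# probability sum(|psi|^2) dx, which should stay constant during evolution.
hbar = m = 1.0
N, L_box = 512, 20.0
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2                                  # harmonic potential
psi = np.exp(-(x - 1.0) ** 2).astype(complex)   # displaced Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize the initial state

dt, steps = 1e-3, 5000
half_V = np.exp(-0.5j * V * dt / hbar)
kinetic = np.exp(-0.5j * hbar * k**2 * dt / m)
for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi

charge = np.sum(np.abs(psi) ** 2) * dx
print(f"Noether charge (total probability) after evolution: {charge:.12f}")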
A specific example is the Klein–Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles, which has the Lagrangian density In this case, Noether's theorem states that the conserved (∂ ⋅ j = 0) current equals which, when multiplied by the charge on that species of particle, equals the electric current density due to that type of particle. This "gauge invariance" was first noted by Hermann Weyl, and is one of the prototype gauge symmetries of physics. Derivations One independent variable Consider the simplest case, a system with one independent variable, time. Suppose the dependent variables q are such that the action integral is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler–Lagrange equations And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables as follows where ε is a real variable indicating the amount of flow, and T is a real constant (which could be zero) indicating how much the flow shifts time. The action integral flows to which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using Leibniz's rule, we get Notice that the Euler–Lagrange equations imply Substituting this into the previous equation, one gets Again using the Euler–Lagrange equations we get Substituting this into the previous equation, one gets From which one can see that is a constant of the motion, i.e., it is a conserved quantity. Since φ[q, 0] = q, we get and so the conserved quantity simplifies to To avoid excessive complication of the formulas, this derivation assumed that the flow does not change as time passes. The same result can be obtained in the more general case. Field-theoretic derivation Noether's theorem may also be derived for tensor fields where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates xμ where the index μ ranges over time (μ = 0) and three spatial dimensions (μ = 1, 2, 3). These four coordinates are the independent variables; and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written whereas the transformation of the field variables is expressed as By this definition, the field variations result from two factors: intrinsic changes in the field themselves and changes in coordinates, since the transformed field αA depends on the transformed coordinates ξμ. To isolate the intrinsic changes, the field variation at a single point xμ may be defined If the coordinates are changed, the boundary of the region of space–time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω’, respectively. Noether's theorem begins with the assumption that a specific transformation of the coordinates and field variables does not change the action, which is defined as the integral of the Lagrangian density over the given region of spacetime. Expressed mathematically, this assumption may be written as where the comma subscript indicates a partial derivative with respect to the coordinate(s) that follows the comma, e.g. 
Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem into the following form The difference in Lagrangians can be written to first-order in the infinitesimal variations as However, because the variations are defined at the same point as described above, the variation and the derivative can be done in reverse order; they commute Using the Euler–Lagrange field equations the difference in Lagrangians can be written neatly as Thus, the change in the action can be written as Since this holds for any region Ω, the integrand must be zero For any combination of the various symmetry transformations, the perturbation can be written where is the Lie derivative of in the Xμ direction. When is a scalar or , These equations imply that the field variation taken at one point equals Differentiating the above divergence with respect to ε at ε = 0 and changing the sign yields the conservation law where the conserved current equals Manifold/fiber bundle derivation Suppose we have an n-dimensional oriented Riemannian manifold, M and a target manifold T. Let be the configuration space of smooth functions from M to T. (More generally, we can have smooth sections of a fiber bundle T over M.) Examples of this M in physics include: In classical mechanics, in the Hamiltonian formulation, M is the one-dimensional manifold , representing time and the target space is the cotangent bundle of space of generalized positions. In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any given point. For example, if there are m real-valued scalar fields, , then the target manifold is . If the field is a real vector field, then the target manifold is isomorphic to . Now suppose there is a functional called the action. (It takes values into , rather than ; this is for physical reasons, and is unimportant for this proof.) To get to the usual version of Noether's theorem, we need additional restrictions on the action. We assume is the integral over M of a function called the Lagrangian density, depending on , its derivative and the position. In other words, for in Suppose we are given boundary conditions, i.e., a specification of the value of at the boundary if M is compact, or some limit on as x approaches ∞. Then the subspace of consisting of functions such that all functional derivatives of at are zero, that is: and that satisfies the given boundary conditions, is the subspace of on shell solutions. (See principle of stationary action) Now, suppose we have an infinitesimal transformation on , generated by a functional derivation, Q such that for all compact submanifolds N or in other words, for all x, where we set If this holds on shell and off shell, we say Q generates an off-shell symmetry. If this only holds on shell, we say Q generates an on-shell symmetry. Then, we say Q is a generator of a one parameter symmetry Lie group. Now, for any N, because of the Euler–Lagrange theorem, on shell (and only on-shell), we have Since this is true for any N, we have But this is the continuity equation for the current defined by: which is called the Noether current associated with the symmetry. 
The continuity equation tells us that if we integrate this current over a space-like slice, we get a conserved quantity called the Noether charge (provided, of course, if M is noncompact, the currents fall off sufficiently fast at infinity). Comments Noether's theorem is an on shell theorem: it relies on use of the equations of motion—the classical path. It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies that The quantum analogs of Noether's theorem involving expectation values (e.g., ) probing off shell quantities as well are the Ward–Takahashi identities. Generalization to Lie algebras Suppose we have two symmetry derivations Q1 and Q2. Then, [Q1, Q2] is also a symmetry derivation. Let us see this explicitly. Let us say and Then, where f12 = Q1[f2μ] − Q2[f1μ]. So, This shows we can extend Noether's theorem to larger Lie algebras in a natural way. Generalization of the proof This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary. ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][Φ(x)] = ε(x)Q[Φ(x)] satisfies q[ε][S] ≈ 0 for every ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem. To see how the generalization is related to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on and its first derivatives. Also, assume Then, for all . More generally, if the Lagrangian depends on higher derivatives, then Examples Example 1: Conservation of energy Looking at the specific case of a Newtonian particle of mass m, coordinate x, moving under the influence of a potential V, coordinatized by time t. The action, S, is: The first term in the brackets is the kinetic energy of the particle, while the second is its potential energy. Consider the generator of time translations Q = d/dt. In other words, . The coordinate x has an explicit dependence on time, whilst V does not; consequently: so we can set Then, The right hand side is the energy, and Noether's theorem states that (i.e. the principle of conservation of energy is a consequence of invariance under time translations). More generally, if the Lagrangian does not depend explicitly on time, the quantity (called the Hamiltonian) is conserved. Example 2: Conservation of center of momentum Still considering 1-dimensional time, let for Newtonian particles where the potential only depends pairwise upon the relative displacement. For , consider the generator of Galilean transformations (i.e. a change in the frame of reference). In other words, And This has the form of so we can set Then, where is the total momentum, M is the total mass and is the center of mass. Noether's theorem states: Example 3: Conformal transformation Both examples 1 and 2 are over a 1-dimensional manifold (time). 
An example involving spacetime is a conformal transformation of a massless real scalar field with a quartic potential in (3 + 1)-Minkowski spacetime. For Q, consider the generator of a spacetime rescaling. In other words, The second term on the right hand side is due to the "conformal weight" of . And This has the form of (where we have performed a change of dummy indices) so set Then Noether's theorem states that (as one may explicitly check by substituting the Euler–Lagrange equations into the left hand side). If one tries to find the Ward–Takahashi analog of this equation, one runs into a problem because of anomalies. Applications Application of Noether's theorem allows physicists to gain powerful insights into any general theory in physics, by just analyzing the various transformations that would make the form of the laws involved invariant. For example: Invariance of an isolated system with respect to spatial translation (in other words, that the laws of physics are the same at all locations in space) gives the law of conservation of linear momentum (which states that the total linear momentum of an isolated system is constant) Invariance of an isolated system with respect to time translation (i.e. that the laws of physics are the same at all points in time) gives the law of conservation of energy (which states that the total energy of an isolated system is constant) Invariance of an isolated system with respect to rotation (i.e., that the laws of physics are the same with respect to all angular orientations in space) gives the law of conservation of angular momentum (which states that the total angular momentum of an isolated system is constant) Invariance of an isolated system with respect to Lorentz boosts (i.e., that the laws of physics are the same with respect to all inertial reference frames) gives the center-of-mass theorem (which states that the center-of-mass of an isolated system moves at a constant velocity). In quantum field theory, the analog to Noether's theorem, the Ward–Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential. The Noether charge is also used in calculating the entropy of stationary black holes.
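Returning to Example 3 near the start of this passage, whose displayed equations were also lost: a hedged sketch of the scale-invariance check for a massless real scalar field with a quartic potential in (3 + 1) dimensions (standard material; the coupling is written here as \(\lambda\) purely for illustration):
\[
  S[\varphi] = \int \mathrm{d}^{4}x \left( \tfrac{1}{2}\, \partial_\mu \varphi\, \partial^\mu \varphi - \lambda\, \varphi^{4} \right),
  \qquad
  x \to x' = e^{\varepsilon} x, \quad
  \varphi(x) \to \varphi'(x') = e^{-\varepsilon}\, \varphi(x),
\]
\[
  \mathrm{d}^{4}x' = e^{4\varepsilon}\, \mathrm{d}^{4}x, \qquad
  \partial'_\mu \varphi'\, \partial'^{\mu} \varphi' = e^{-4\varepsilon}\, \partial_\mu \varphi\, \partial^\mu \varphi, \qquad
  \varphi'^{4} = e^{-4\varepsilon}\, \varphi^{4}
  \;\;\Longrightarrow\;\; S[\varphi'] = S[\varphi],
\]
with the factor \(e^{-\varepsilon}\) in the field transformation reflecting the conformal weight mentioned above.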
Physical sciences
Particle physics: General
Physics
3716784
https://en.wikipedia.org/wiki/Gravitational%20metric%20system
Gravitational metric system
The gravitational metric system (original French term ) is a non-standard system of units, which does not comply with the International System of Units (SI). It is built on the three base quantities length, time and force with base units metre, second and kilopond respectively. Internationally used abbreviations of the system are MKpS, MKfS or MKS (from French or ). However, the abbreviation MKS is also used for the MKS system of units, which, like the SI, uses mass in kilogram as a base unit. Disadvantages Nowadays, mass as a property of an object and its weight, which depends on the gravity of the Earth at its position, are strictly distinguished. Historically, however, the kilopond was also called kilogram, and only later was the kilogram-mass (today's kilogram) separated from the kilogram-force (today's kilopond). A kilopond originally referred to the weight of a mass of one kilogram. Since the gravitational acceleration on the surface of the Earth can differ, one gets different values for the unit kilopond and its derived units at different locations. To avoid this, the kilopond was first defined at sea level and a latitude of 45 degrees, and since 1902 via the standard gravity of 9.80665 m/s2. Further disadvantages are inconsistencies in the definition of derived units such as horsepower (1 PS = 75 kp⋅m/s) and the missing link to electric, magnetic or thermodynamic units. In Germany, the kilopond lost its legal status as a unit of force on 1 January 1978, when the SI unit system was adopted for legal purposes. A kilopond can be converted to the SI unit newton by multiplication with the standard acceleration gn: 1 kp = gn ⋅ 1 kg = 9.80665 kg⋅m/s2 = 9.80665 N Units Force In English contexts the unit of force is usually formed by simply appending the suffix "force" to the name of the unit of mass, thus gram-force (gf) or kilogram-force (kgf), which follows the tradition of pound-force (lbf). In other, international, contexts the special name pond (p) or kilopond (kp) respectively is more frequent. 1 p = 1 gf = 1 g ⋅ gn = 9.80665 g⋅m/s2 = 980.665 g⋅cm/s2 = 980.665 dyn 1 kp = 1 kgf = 1 kg ⋅ gn = 9.80665 kg⋅m/s2 = 980665 g⋅cm/s2 Mass The hyl, metric slug (mug), or TME (), is the mass that accelerates at 1 m/s2 under a force of 1 kgf. The unit, long obsolete, has also been used as the unit of mass in a metre–gram-force–second (mgfs) system. 1 TME = 1 hyl = 1 kp / (1 m/s2) = 1 kp⋅s2/m = 9.80665 kg, or 1 hyl (alternate definition – mgfs) = 1 p⋅s2/m = 9.80665 g Pressure The gravitational unit of pressure is the technical atmosphere (at). It is the gravitational force of one kilogram, i.e. 1 kgf, exerted on an area of one square centimetre. 1 at = 1 kp/cm2 = 10 000 × gn kg/m2 = 98 066.5 kg/(m⋅s2) = 98.066 5 kPa Energy There is no dedicated name for the unit of energy; "metre" is simply appended to "kilopond", but usually the symbol of the kilopond-metre is written without the middle dot. 1 kpm = 1 kp⋅m = gn kg⋅m = 9.806 65 kg⋅m2/s2 = 9.806 65 J Power In 19th-century France there was a unit of power, the poncelet, which was defined as the power required to raise a mass of 1 quintal (1 q = 100 kg) at a velocity of 1 m/s. The German or metric horsepower (PS, Pferdestärke) is arbitrarily selected to be three quarters thereof. 1 pq = 1 qf⋅m/s = 100 kp⋅m/s = 100 × gn kg⋅m/s = 980.665 kg⋅m2/s3 = 0.980 665 kW 1 PS = ¾ pq = 75 kp⋅m/s = 75 × gn kg⋅m/s = 735.498 75 kg⋅m2/s3 = 0.735 498 75 kW
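The conversions above can be checked mechanically. The following is a minimal sketch (not part of the article; the function names are illustrative only) that uses nothing beyond the standard gravity quoted above:

# Minimal sketch: converting gravitational metric units to SI.
G_N = 9.80665  # standard gravity in m/s^2, as quoted above

def kilopond_to_newton(kp):
    # 1 kp is the force exerted by standard gravity on a mass of 1 kg.
    return kp * G_N

def technical_atmosphere_to_kilopascal(at):
    # 1 at = 1 kp/cm^2 = 10,000 * g_n N/m^2 = 98,066.5 Pa.
    return at * G_N * 10_000 / 1_000

def metric_horsepower_to_kilowatt(ps):
    # 1 PS = 75 kp*m/s.
    return ps * 75 * G_N / 1_000

print(kilopond_to_newton(1.0))                  # 9.80665 (N)
print(technical_atmosphere_to_kilopascal(1.0))  # 98.0665 (kPa)
print(metric_horsepower_to_kilowatt(1.0))       # 0.73549875 (kW)

Running it reproduces the figures given above for 1 kp, 1 at and 1 PS.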
Physical sciences
Measurement systems
Basics and measurement
3720055
https://en.wikipedia.org/wiki/Road%20junction
Road junction
A junction is where two or more roads meet. History Roads began as a means of linking locations of interest: towns, forts and geographic features such as river fords. Where roads met outside of an existing settlement, these junctions often led to a new settlement. Scotch Corner is an example of such a location. In the United Kingdom and other countries, the practice of giving names to junctions emerged, to help travellers find their way. Junctions took the name of a prominent nearby business or a point of interest. As road networks increased in density and traffic flows followed suit, managing the flow of traffic across the junction became increasingly important, to minimize delays and improve safety. The first innovation was to add traffic control devices, such as stop signs and traffic lights, that regulated traffic flow. Next came lane controls that limited what each lane of traffic was allowed to do while crossing. Turns across oncoming traffic might be prohibited, or allowed only when oncoming and crossing traffic was stopped. This was followed by specialized junction designs that incorporated information about traffic volumes, speeds, driver intent and many other factors. Types The most basic distinction among junction types is whether the roads cross at the same or at different elevations. Grade-separated interchanges generally offer higher throughput at higher cost; at-grade intersections are lower cost and lower throughput. Each main type comes in many variants. Interchange At interchanges, roads pass above or below each other, using grade separation and slip roads. The terms motorway junction and highway interchange typically refer to this layout. They can be further subdivided into those with and without signal controls. Signalized (traffic-light controlled) interchanges include such "diamond" designs as the diverging diamond, Michigan urban diamond, three-level diamond, and tight diamond. Others include center-turn overpass, contraflow left, single loop, and single-point urban overpass. Non-signalized designs include the cloverleaf, contraflow left, dogbone (restricted dumbbell), double crossover merging, dumbbell (grade-separated bowtie), echelon, free-flow interchange, partial cloverleaf, raindrop, single and double roundabouts (grade-separated roundabout), single-point urban, stack, and windmill. An Autobahnkreuz (literally "autobahn cross"), short form Kreuz, abbreviated as AK, is a four-way interchange on the German autobahn network. An Autobahndreieck (literally "autobahn triangle"), short form Dreieck, abbreviated as AD, is a three-way interchange on the German autobahn network. Intersection At intersections, roads cross at-grade. They also can be further subdivided into those with and without signal controls. Signalized designs include advanced stop line, bowtie, box junction, continuous-flow intersection, continuous Green-T, double-wide, hook turn, jughandle, median u-turn, Michigan left, paired, quadrant, seagulls, slip lane, split, staggered, superstreet, Texas T, Texas U-turn and turnarounds. Non-signalized designs include unsignalized variations on continuous-flow 3 and 4-leg, median u-turn and superstreet, along with Maryland T/J, roundabout and traffic circle. Safety In the EU, it is estimated that around 5,000 of the 26,100 people killed in car crashes in 2015 died in junction collisions, down from around 8,000 in 2006. Over the 2006–2015 decade, this means that around 20% of road fatalities occurred at junctions.
By type of road user, junction fatalities break down as car occupants, 34%; pedestrians, 23%; motorcyclists, 21%; pedal cyclists, 12%; and other road users, the remainder. Causes of fatalities It has been considered that several causes might lead to fatalities; for instance: Observation missed – the largest category, encompassing all factors that cause a driver or rider to not notice something: Physical factors: Temporary obstruction to view Permanent obstruction to view Permanent sight obstruction Human factors: Faulty diagnosis – a misunderstanding of another road user's actions or the road conditions Distraction Inadequate plan – the details of the situation, as interpreted by the road user, are lacking in quantity and/or quality (including their correspondence to reality) Inattention Faulty diagnosis (not leading to observation missed) Information failure – the road user judged the situation incorrectly and made a decision based upon the incorrect judgement (e.g. thinking that another vehicle is moving when it is not, and thus colliding with it) Communication failure – a miscommunication between road users Inadequate plan (not leading to observation missed) Insufficient knowledge Protected intersections Bicycles A number of features make a protected intersection much safer: a corner refuge island; a setback crossing for pedestrians and cyclists, generally with between 1.5 and 7 metres of setback; and a forward stop bar, which allows cyclists to stop for a traffic light well ahead of motor traffic, which must stop behind the crosswalk. Separate signal staging, or at least an advance green for cyclists and pedestrians, is used to give cyclists and pedestrians either a conflict-free phase or a head start over traffic. The design makes a right turn on red (and sometimes a left turn on red, depending on the geometry of the intersection in question) possible in many cases, often without stopping. Cyclists ideally have a protected bike lane on the approach to the intersection, separated by a concrete median with splay kerbs if possible, and have a protected bike lane width of at least 2 metres if possible (one way). In the Netherlands, most one-way cycle paths are at least 2.5 metres wide. Bicycle traffic can be accommodated with lower-grade bike lanes in the roadway or with higher-grade, much safer protected bicycle paths that are physically separated from the roadway. In Manchester, UK, traffic engineers have designed a protected junction known as the Cycle-Optimised Signal (CYCLOPS) Junction. This design places a circulatory cycle track around the edge of the junction, with pedestrian crossings on the inside. This design allows for an all-red pedestrian / cyclist phase with reduced conflicts. Traffic signals are timed to allow cyclists to make a right turn (across oncoming traffic) in one movement. It also allows for diagonal crossings (pedestrian scramble) and reduces crossing distances for pedestrians. Pedestrians Intersections generally must manage pedestrian as well as vehicle traffic. Pedestrian aids include crosswalks, pedestrian-directed traffic signals ("walk lights") and over/underpasses. Walk lights may be accompanied by audio signals to aid the visually impaired. Medians can offer pedestrian islands, allowing pedestrians to divide their crossings into a separate segment for each traffic direction, possibly with a separate signal for each.
Technology
Road infrastructure
null
3720257
https://en.wikipedia.org/wiki/Quaternary%20glaciation
Quaternary glaciation
The Quaternary glaciation, also known as the Pleistocene glaciation, is an alternating series of glacial and interglacial periods during the Quaternary period that began 2.58 Ma (million years ago) and is ongoing. Although geologists describe this entire period up to the present as an "ice age", in popular culture this term usually refers to the most recent glacial period, or to the Pleistocene epoch in general. Since Earth still has polar ice sheets, geologists consider the Quaternary glaciation to be ongoing, though currently in an interglacial period. During the Quaternary glaciation, ice sheets appeared, expanding during glacial periods and contracting during interglacial periods. Since the end of the last glacial period, only the Antarctic and Greenland ice sheets have survived, while other sheets formed during glacial periods, such as the Laurentide Ice Sheet, have completely melted. The major effects of the Quaternary glaciation have been the continental erosion of land and the deposition of material; the modification of river systems; the formation of millions of lakes, including the development of pluvial lakes far from the ice margins; changes in sea level; the isostatic adjustment of the Earth's crust; flooding; and abnormal winds. The ice sheets, by raising the albedo (the ratio of solar radiant energy reflected from Earth back into space), generated significant feedback to further cool the climate. These effects have shaped land and ocean environments and biological communities. Long before the Quaternary glaciation, land-based ice appeared and then disappeared during at least four other ice ages. The Quaternary glaciation can be considered a part of a Late Cenozoic Ice Age that began 33.9 Ma and is ongoing. Discovery Evidence for the Quaternary glaciation was first understood in the 18th and 19th centuries as part of the scientific revolution. Over the last century, extensive field observations have provided evidence that continental glaciers covered large parts of Europe, North America, and Siberia. Maps of glacial features were compiled after many years of fieldwork by hundreds of geologists who mapped the location and orientation of drumlins, eskers, moraines, striations, and glacial stream channels to reveal the extent of the ice sheets, the direction of their flow, and the systems of meltwater channels. They also allowed scientists to decipher a history of multiple advances and retreats of the ice. Even before the theory of worldwide glaciation was generally accepted, many observers recognized that more than a single advance and retreat of the ice had occurred. Description To geologists, an ice age is defined by the presence of large amounts of land-based ice. Prior to the Quaternary glaciation, land-based ice formed during at least four earlier geologic periods: the late Paleozoic (360–260 Ma), Andean-Saharan (450–420 Ma), Cryogenian (720–635 Ma) and Huronian (2,400–2,100 Ma). Within the Quaternary ice age, there were also periodic fluctuations of the total volume of land ice, the sea level, and global temperatures. During the colder episodes (referred to as glacial periods or glacials) large ice sheets at least thick at their maximum covered parts of Europe, North America, and Siberia. The shorter warm intervals between glacials, when continental glaciers retreated, are referred to as interglacials. These are evidenced by buried soil profiles, peat beds, and lake and stream deposits separating the unsorted, unstratified deposits of glacial debris. 
Initially the glacial/interglacial cycle length was about 41,000 years, but following the Mid-Pleistocene Transition about 1 Ma, it slowed to about 100,000 years, as evidenced most clearly by ice cores for the past 800,000 years and marine sediment cores for the earlier period. Over the past 740,000 years there have been eight glacial cycles. The entire Quaternary period, starting 2.58 Ma, is referred to as an ice age because at least one permanent large ice sheet—the Antarctic ice sheet—has existed continuously. There is uncertainty over how much of Greenland was covered by ice during each interglacial. Currently, Earth is in an interglacial period, the Holocene epoch beginning 11,700 years ago; this has caused the ice sheets from the Last Glacial Period to slowly melt. The remaining glaciers, now occupying about 10% of the world's land surface, cover Greenland, Antarctica and some mountainous regions. During the glacial periods, the present (i.e., interglacial) hydrologic system was completely interrupted throughout large areas of the world and was considerably modified in others. The volume of ice on land resulted in a sea level about lower than present. Causes Earth's history of glaciation is a product of the internal variability of Earth's climate system (e.g., ocean currents, carbon cycle), interacting with external forcing by phenomena outside the climate system (e.g., changes in Earth's orbit, volcanism, and changes in solar output). Astronomical cycles The role of Earth's orbital changes in controlling climate was first advanced by James Croll in the late 19th century. Later, the Serbian geophysicist Milutin Milanković elaborated on the theory and calculated that these irregularities in Earth's orbit could cause the climatic cycles now known as Milankovitch cycles. They are the result of the additive behavior of several types of cyclical changes in Earth's orbital properties. Firstly, changes in the orbital eccentricity of Earth occur on a cycle of about 100,000 years. Secondly, the inclination or tilt of Earth's axis varies between 22° and 24.5° in a cycle 41,000 years long. The tilt of Earth's axis is responsible for the seasons; the greater the tilt, the greater the contrast between summer and winter temperatures. Thirdly, precession of the equinoxes, or wobbles in the Earth's rotation axis, has a periodicity of 26,000 years. According to the Milankovitch theory, these factors cause a periodic cooling of Earth, with the coldest part in the cycle occurring about every 40,000 years. The main effect of the Milankovitch cycles is to change the contrast between the seasons, not the annual amount of solar heat Earth receives. When summers at high northern latitudes are cool enough that less of the previous winter's snow and ice melts than accumulates, glaciers build up. Milankovitch worked out the ideas of climatic cycles in the 1920s and 1930s, but it was not until the 1970s that a sufficiently long and detailed chronology of the Quaternary temperature changes was worked out to test the theory adequately. Studies of deep-sea cores and their fossils indicate that the fluctuation of climate during the last few hundred thousand years is remarkably close to that predicted by Milankovitch. Atmospheric composition One theory holds that decreases in atmospheric CO2, an important greenhouse gas, started the long-term cooling trend that eventually led to the formation of continental ice sheets in the Arctic. Geological evidence indicates a decrease of more than 90% in atmospheric CO2 since the middle of the Mesozoic Era.
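The astronomical-cycles passage above describes how several periodic components add together. The toy sketch below (not an orbital solution; the unit amplitudes and zero phases are arbitrary assumptions, and only the quoted periods come from the text) illustrates how three sinusoids with those periods combine into an irregular signal:

import math

# Periods quoted in the text, in thousands of years (kyr).
PERIODS_KYR = {"eccentricity": 100.0, "obliquity": 41.0, "precession": 26.0}

def toy_combined_cycle(t_kyr):
    # Sum of unit-amplitude sinusoids; real insolation forcing is far more complex.
    return sum(math.sin(2.0 * math.pi * t_kyr / p) for p in PERIODS_KYR.values())

# Sample the combined signal every 10 kyr over 400 kyr.
samples = [(t, round(toy_combined_cycle(t), 2)) for t in range(0, 401, 10)]
print(samples[:5])

The point of the exercise is only that the sum of a few simple cycles already produces an irregular, slowly repeating pattern over hundreds of thousands of years.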
An analysis of reconstructions from alkenone records shows that CO2 in the atmosphere declined before and during Antarctic glaciation, and supports a substantial CO2 decrease as the primary cause of Antarctic glaciation. Decreasing carbon dioxide levels during the late Pliocene may have contributed substantially to global cooling and the onset of Northern Hemisphere glaciation. This decrease in atmospheric carbon dioxide concentrations may have come about by way of the decreasing ventilation of deep water in the Southern Ocean. CO2 levels also play an important role in the transitions between interglacials and glacials. High CO2 contents correspond to warm interglacial periods, and low CO2 contents to glacial periods. However, studies indicate that CO2 may not be the primary cause of the interglacial-glacial transitions, but instead acts as a feedback. The explanation for this observed variation "remains a difficult attribution problem". Plate tectonics and ocean currents An important component in the development of long-term ice ages is the positions of the continents. These can control the circulation of the oceans and the atmosphere, affecting how ocean currents carry heat to high latitudes. Throughout most of geologic time, the North Pole appears to have been in a broad, open ocean that allowed major ocean currents to move unabated. Equatorial waters flowed into the polar regions, warming them. This produced mild, uniform climates that persisted throughout most of geologic time. But during the Cenozoic Era, the large North American and South American continental plates drifted westward from the Eurasian Plate. This interlocked with the development of the Atlantic Ocean, running north–south, with the North Pole in the small, nearly landlocked basin of the Arctic Ocean. The Drake Passage opened 33.9 million years ago (the Eocene-Oligocene transition), severing Antarctica from South America. The Antarctic Circumpolar Current could then flow through it, isolating Antarctica from warm waters and triggering the formation of its huge ice sheets. The weakening of the North Atlantic Current (NAC) around 3.65 to 3.5 million years ago resulted in cooling and freshening of the Arctic Ocean, nurturing the development of Arctic sea ice and preconditioning the formation of continental glaciers later in the Pliocene. A dinoflagellate cyst turnover in the eastern North Atlantic approximately 2.60 Ma, during MIS 104, has been cited as evidence that the NAC shifted significantly to the south at this time, causing an abrupt cooling of the North Sea and northwestern Europe by reducing heat transport to high-latitude waters of the North Atlantic. The Isthmus of Panama developed at a convergent plate margin about 2.6 million years ago and further separated oceanic circulation, closing the last strait, outside the polar regions, that had connected the Pacific and Atlantic Oceans. This increased poleward salt and heat transport, strengthening the North Atlantic thermohaline circulation, which supplied enough moisture to Arctic latitudes to initiate the Northern Hemisphere glaciation. The change in the biogeography of the nannofossil Coccolithus pelagicus around 2.74 Ma is believed to reflect this onset of glaciation. However, model simulations suggest reduced ice volume due to increased ablation at the edge of the ice sheet under warmer conditions. Collapse of permanent El Niño A permanent El Niño state existed in the early-mid-Pliocene.
Warmer temperature in the eastern equatorial Pacific caused an increased water vapor greenhouse effect and reduced the area covered by highly reflective stratus clouds, thus decreasing the albedo of the planet. Propagation of the El Niño effect through planetary waves may have warmed the polar region and delayed the onset of glaciation in the Northern Hemisphere. Therefore, the appearance of cold surface water in the east equatorial Pacific around 3 million years ago may have contributed to global cooling and modified the global climate’s response to Milankovitch cycles. Rise of mountains The elevation of continental surface, often as mountain formation, is thought to have contributed to cause the Quaternary glaciation. The gradual movement of the bulk of Earth's landmasses away from the tropics in addition to increased mountain formation in the Late Cenozoic meant more land at high altitude and high latitude, favouring the formation of glaciers. For example, the Greenland ice sheet formed in connection to the uplift of the west Greenland and east Greenland uplands in two phases, 10 and 5 Ma, respectively. These mountains constitute passive continental margins. Uplift of the Rocky Mountains and Greenland’s west coast has been speculated to have cooled the climate due to jet stream deflection and increased snowfall due to higher surface elevation. Computer models show that such uplift would have enabled glaciation through increased orographic precipitation and cooling of surface temperatures. For the Andes it is known that the Principal Cordillera had risen to heights that allowed for the development of valley glaciers about 1 Ma. Effects The presence of so much ice upon the continents had a profound effect upon almost every aspect of Earth's hydrologic system. Most obvious are the spectacular mountain scenery and other continental landscapes fashioned both by glacial erosion and deposition instead of running water. Entirely new landscapes covering millions of square kilometers were formed in a relatively short period of geologic time. In addition, the vast bodies of glacial ice affected Earth well beyond the glacier margins. Directly or indirectly, the effects of glaciation were felt in every part of the world. Lakes The Quaternary glaciation produced more lakes than all other geologic processes combined. The reason is that a continental glacier completely disrupts the preglacial drainage system. The surface over which the glacier moved was scoured and eroded by the ice, leaving many closed, undrained depressions in the bedrock. These depressions filled with water and became lakes. Very large lakes were formed along the glacial margins. The ice on both North America and Europe was about thick near the centers of maximum accumulation, but it tapered toward the glacier margins. Ice weight caused crustal subsidence, which was greatest beneath the thickest accumulation of ice. As the ice melted, rebound of the crust lagged behind, producing a regional slope toward the ice. This slope formed basins that have lasted for thousands of years. These basins became lakes or were invaded by the ocean. The Baltic Sea and the Great Lakes of North America were formed primarily in this way. The numerous lakes of the Canadian Shield, Sweden, and Finland are thought to have originated at least partly from glaciers' selective erosion of weathered bedrock. Pluvial lakes The climatic conditions that cause glaciation had an indirect effect on arid and semiarid regions far removed from the large ice sheets. 
The increased precipitation that fed the glaciers also increased the runoff of major rivers and intermittent streams, resulting in the growth and development of large pluvial lakes. Most pluvial lakes developed in relatively arid regions where there typically was insufficient rain to establish a drainage system leading to the sea. Instead, stream runoff flowed into closed basins and formed playa lakes. With increased rainfall, the playa lakes enlarged and overflowed. Pluvial lakes were most extensive during glacial periods. During interglacial stages, with less rain, the pluvial lakes shrank to form small salt flats. Isostatic adjustment Major isostatic adjustments of the lithosphere during the Quaternary glaciation were caused by the weight of the ice, which depressed the continents. In Canada, a large area around Hudson Bay was depressed below (modern) sea level, as was the area in Europe around the Baltic Sea. The land has been rebounding from these depressions since the ice melted. Some of these isostatic movements triggered large earthquakes in Scandinavia about 9,000 years ago. These earthquakes are unique in that they are not associated with plate tectonics. Studies have shown that the uplift has taken place in two distinct stages. The initial uplift following deglaciation was rapid (called "elastic"), and took place as the ice was being unloaded. After this "elastic" phase, uplift proceed by "slow viscous flow" so the rate decreased exponentially after that. Today, typical uplift rates are of the order of 1 cm per year or less, except in areas of North America, especially Alaska, where the rate of uplift is 2.54 cm per year (1 inch or more). In northern Europe, this is clearly shown by the GPS data obtained by the BIFROST GPS network. Studies suggest that rebound will continue for at least another 10,000 years. The total uplift from the end of deglaciation depends on the local ice load and could be several hundred meters near the center of rebound. Winds The presence of ice over so much of the continents greatly modified patterns of atmospheric circulation. Winds near the glacial margins were strong and persistent because of the abundance of dense, cold air coming off the glacier fields. These winds picked up and transported large quantities of loose, fine-grained sediment brought down by the glaciers. This dust accumulated as loess (wind-blown silt), forming irregular blankets over much of the Missouri River valley, central Europe, and northern China. Sand dunes were much more widespread and active in many areas during the early Quaternary period. A good example is the Sand Hills region in Nebraska which covers an area of about . This region was a large, active dune field during the Pleistocene epoch but today is largely stabilized by grass cover. Ocean currents Thick glaciers were heavy enough to reach the sea bottom in several important areas, which blocked the passage of ocean water and affected ocean currents. In addition to these direct effects, it also caused feedback effects, as ocean currents contribute to global heat transfer. Gold deposits Moraines and till deposited by Quaternary glaciers have contributed to the formation of valuable placer deposits of gold. This is the case of southernmost Chile where reworking of Quaternary moraines have concentrated gold offshore. 
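Relating to the isostatic adjustment passage above: a minimal idealized sketch, assuming a single exponential relaxation time \(\tau\) (a simplification; the article gives no value for \(\tau\)), of why a present uplift rate of roughly 1 cm per year is consistent with rebound continuing for thousands of years:
\[
  R(t) = R_{0}\, e^{-t/\tau}, \qquad
  \text{uplift rate} = -\frac{\mathrm{d}R}{\mathrm{d}t} = \frac{R(t)}{\tau},
\]
where \(R(t)\) is the uplift still remaining at time \(t\). A remaining uplift of order 100 m (\(10^{4}\) cm) together with a present rate of about 1 cm per year implies \(\tau\) of order \(10^{4}\) years, in line with the statement above that rebound will continue for at least another 10,000 years.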
Records of prior glaciation Glaciation has been a rare event in Earth's history, but there is evidence of widespread glaciation during the late Paleozoic Era (300 to 200 Ma) and the late Precambrian (i.e., the Neoproterozoic Era, 800 to 600 Ma). Before the current ice age, which began 2 to 3 Ma, Earth's climate was typically mild and uniform for long periods of time. This climatic history is implied by the types of fossil plants and animals and by the characteristics of sediments preserved in the stratigraphic record. There are, however, widespread glacial deposits, recording several major periods of ancient glaciation in various parts of the geologic record. Such evidence suggests major periods of glaciation prior to the current Quaternary glaciation. One of the best documented records of pre-Quaternary glaciation, called the Karoo Ice Age, is found in the late Paleozoic rocks in South Africa, India, South America, Antarctica, and Australia. Exposures of ancient glacial deposits are numerous in these areas. Deposits of even older glacial sediment exist on every continent except South America. These indicate that two other periods of widespread glaciation occurred during the late Precambrian, producing the Snowball Earth during the Cryogenian period. Next glacial period The warming trend following the Last Glacial Maximum, since about 20,000 years ago, has resulted in a sea level rise by about . This warming trend subsided about 6,000 years ago, and sea level has been comparatively stable since the Neolithic. The present interglacial period (the Holocene climatic optimum) has been stable and warm compared to the preceding ones, which were interrupted by numerous cold spells lasting hundreds of years. This stability might have allowed the Neolithic Revolution and, by extension, human civilization. Based on orbital models, the cooling trend initiated about 6,000 years ago will continue for another 23,000 years. Slight changes in the Earth's orbital parameters may, however, indicate that, even without any human contribution, there will not be another glacial period for the next 50,000 years. It is possible that the current cooling trend might be interrupted by an interstadial phase (a warmer period) in about 60,000 years, with the next glacial maximum reached only in about 100,000 years. Based on past estimates for interglacial durations of about 10,000 years, in the 1970s there was some concern that the next glacial period would be imminent. However, slight changes in the eccentricity of Earth's orbit around the Sun suggest a lengthy interglacial period lasting about another 50,000 years. Other models, based on periodic variations in solar output, give a different projection of the start of the next glacial period at around 10,000 years from now. Additionally, human impact is now seen as possibly extending what would already be an unusually long warm period. Projections of the timeline for the next glacial maximum depend crucially on the amount of CO2 in the atmosphere. Models assuming increased CO2 levels of 750 parts per million (ppm; current levels are at 417 ppm) have estimated the persistence of the current interglacial period for another 50,000 years. However, more recent studies concluded that the amount of heat-trapping gases emitted into Earth's oceans and atmosphere will prevent the next glacial period (ice age), which otherwise would begin in around 50,000 years, and likely more glacial cycles after that.
Physical sciences
Events
Earth science
3723097
https://en.wikipedia.org/wiki/Puddle
Puddle
A puddle is a small accumulation of liquid, usually water, on a surface. It can form either by pooling in a depression on the surface, or by surface tension upon a flat surface. Puddles are often characterized by murky water or mud due to the disturbance and dissolving of surrounding sediment, primarily due to precipitation. A puddle is generally shallow enough to walk through, and too small to traverse with a boat or raft. Small wildlife may be attracted to puddles. Natural puddles and wildlife Puddles in natural landscapes and habitats, when not resulting from precipitation, can indicate the presence of a seep or spring. Small seasonal riparian plants, grasses, and wildflowers can germinate with the ephemeral "head start" of moisture provided by a puddle. Small wildlife, such as birds and insects, can use puddles as a source of essential moisture or for bathing. Raised constructed puddles, bird baths, are a part of domestic and wildlife gardens as a garden ornament and "micro-habitat" restoration. Swallows use the damp loam which gathers in puddles as a form of cement to help to build their nests. Many butterfly species and some other insects, but particularly male butterflies, need puddles for nutrients they can contain, such as salts and amino acids. In a behaviour known as puddling they seek out the damp mud that can be found around the edge of the puddles. For some smaller forms of life, such as tadpoles or mosquito larvae, a puddle can form an entire habitat. Puddles that do not evaporate quickly can become standing water, which can become polluted by decaying organisms and are often home to breeding mosquitos, which can act as vectors for diseases such as malaria and, of more recent concern in certain areas of the world, West Nile virus. Puddles on roads Puddles commonly form during rain, and can cause problems for transport. Due to the angle of the road, puddles tend to be forced by gravity to gather on the edges of the road. This can cause splashing as cars drive through the puddles, which causes water to be sprayed onto pedestrians on the pavement. Irresponsible drivers may do this deliberately, which, in some countries, can lead to prosecution for careless driving. Puddles commonly form in potholes in a dirt road, or in any other space with a shallow depression and dirt. In such cases, these are sometimes referred to as mud puddles, because mud tends to form in the bottoms, resulting in dirtied wheels or boots when disturbed. In order to deal with puddles, roads and pavements are often built with a camber (technically called 'crowning'), being slightly convex in nature, to force puddles to drain into the gutter, which has storm drain grates to allow the water to drain into the sewers. In addition, some surfaces are made to be porous, allowing the water to drain through the surface to the aquifer below. Physics Due to the action of surface tension, small puddles can also form if a liquid is spilt on a level surface. Puddles like this are common on kitchen floors. Puddles tend to evaporate quickly due to the high surface-area-to-volume ratio. In cold conditions puddles can form patches of ice which are slippery and difficult to see and can be a hazard to road vehicles and pedestrians. Children Puddles are a source of recreation for children, who often like jumping in puddles as an "up-side" to rain. A children's nursery rhyme records the story of Doctor Foster and his encounter with a puddle in Gloucester. 
Muddy puddles, and the pleasures of splashing mud in them, are a repeated theme in the children's animation Peppa Pig, to the extent of selling character-branded wellington boots. In legend Medieval legend spoke of one man who was desperate to find building materials for his house, so he stole cobblestones from the road surface. The remaining hole filled with water and a horseman who later walked through the 'puddle' found himself drowning. A similar legend, of a young boy drowning in a puddle that formed in a pothole in a major street in the early years of Seattle, Washington, is told as part of the Seattle Underground Tour.
Physical sciences
Hydrology
Earth science
22685412
https://en.wikipedia.org/wiki/Sagittarius%20A
Sagittarius A
Sagittarius A (Sgr A) is a complex radio source at the center of the Milky Way, which contains a supermassive black hole. It is located between Scorpius and Sagittarius, and is hidden from view at optical wavelengths by large clouds of cosmic dust in the spiral arms of the Milky Way. The dust lane that obscures the Galactic Center from a vantage point around the Sun causes the Great Rift through the bright bulge of the galaxy. The radio source consists of three components: the supernova remnant Sagittarius A East, the spiral structure Sagittarius A West, and a very bright compact radio source at the center of the spiral, Sagittarius A* (read "A-star"). These three overlap: Sagittarius A East is the largest, West appears off-center within East, and A* is at the center of West. Discovery In April 1933, Karl Jansky, considered one of the fathers of radio astronomy, discovered that a radio signal was coming from a location in the direction of the constellation of Sagittarius, towards the center of the Milky Way. His observations did not extend quite as far south as we now know to be the Galactic Center. Observations by Jack Piddington and Harry Minnett using the CSIRO radio telescope at Potts Hill Reservoir, in Sydney discovered a discrete and bright "Sagittarius-Scorpius" radio source, which after further observation with the CSIRO radio telescope at Dover Heights was identified in a letter to Nature as the probable Galactic Center. The name Sagittarius A was first used in 1954 by John D. Kraus, Hsien-Ching Ko, and Sean Matt when they included the object in the list of radio sources found with the Ohio State University radio telescope at 250 MHz. As was common practice at the time, sources were named by constellation with capital letters in order of brightness within each constellation, with A denoting the brightest radio source within the constellation. Sagittarius A East This feature is approximately 25 light-years in width and has the attributes of a supernova remnant from an explosive event that occurred between 35,000 and 100,000 YBP. However, it would take 50 to 100 times more energy than a standard supernova explosion to create a structure of this size and energy. It is conjectured that Sgr A East is the remnant of the explosion of a star that was gravitationally compressed as it made a close approach to the central black hole. Sagittarius A West Sgr A West has the appearance of a three-arm spiral, from the point of view of the Earth. For this reason, it is also known as the "Minispiral". This appearance and nickname are misleading, though: the three-dimensional structure of the Minispiral is not that of a spiral. It is made of several dust and gas clouds, which orbit and fall onto Sagittarius A* at velocities as high as 1,000 kilometers per second. The surface layer of these clouds is ionized. The source of ionisation is the population of massive stars (more than one hundred OB stars have been identified so far) that also occupy the central parsec. Sgr A West is surrounded by a massive, clumpy torus of cooler molecular gas, the Circumnuclear Disk (CND). The nature and kinematics of the Northern Arm cloud of Sgr A West suggest that it once was a clump in the CND, which fell due to some perturbation, perhaps the supernova explosion responsible for Sgr A East. The Northern Arm appears as a very bright North—South ridge of emission, but it extends far to the East and can be detected as a dim extended source. 
The Western Arc (outside the field of view of the image shown in the right) is interpreted as the ionized inner surface of the CND. The Eastern Arm and the Bar seem to be two additional large clouds similar to the Northern Arm, although they do not share the same orbital plane. They have been estimated to amount for about 20 solar masses each. On top of these large scale structures (of the order of a few light-years in size), many smaller cloudlets and holes inside the large clouds can be seen. The most prominent of these perturbations is the Minicavity, which is interpreted as a bubble blown inside the Northern Arm by the stellar wind of a massive star, which is not clearly identified. Sagittarius A* Astronomers now have evidence that there is a supermassive black hole at the center of the galaxy. Sagittarius A* (abbreviated Sgr A*) is agreed to be the most plausible candidate for the location of this supermassive black hole. The Very Large Telescope at Chile and Keck Telescope at Hawaii have detected stars orbiting Sgr A* at speeds greater than that of any other stars in the galaxy. One star, designated S2, was calculated to orbit Sgr A* at speeds of over 5,000 kilometers per second at its closest approach. A gas cloud, G2, passed through the Sagittarius A* region in 2014 and managed to do so without disappearing beyond the event horizon, as theorists predicted would happen. Rather, it disintegrated, suggesting that G2 and a previous gas cloud, G1, were star remnants with larger gravitational fields than gas clouds. In September 2019, scientists found that Sagittarius A* had been consuming nearby matter at a much faster rate than usual over the previous year. Researchers speculated that this could mean that the black hole is entering a new phase, or that Sagittarius A* had stripped the outer layer of G2 when it passed through. Popular culture In the 2014 space-sim videogame Elite: Dangerous, players are able to travel to Sagittarius A*, with an achievement tied to it in the Xbox One and PlayStation versions of the game. In the television show Community, Pierce Hawthorne mentions that in his opinion, Sagittarius A* is the only black hole worth studying. In the final arc of the Sailor Moon manga series, "Sagittarius Zero Star" is the location of the Galaxy Cauldron, a fictional artifact that serves as the birthplace of all life in the Milky Way.
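The high orbital speeds quoted above are what pin down the mass of Sagittarius A*. The sketch below is a rough illustration only: the orbital elements used for S2 (a semi-major axis of roughly 1,000 AU and a period of roughly 16 years) are approximate published values, not figures taken from this article.

# Rough illustration: Kepler's third law applied to the star S2.
# Assumed, approximate orbital elements -- not data from this article.
a_au = 1.0e3   # semi-major axis in astronomical units
p_yr = 16.0    # orbital period in years

# With a in AU and P in years, the enclosed mass in solar masses is a**3 / P**2.
enclosed_mass_solar = a_au**3 / p_yr**2
print(f"Enclosed mass ~ {enclosed_mass_solar:.1e} solar masses")  # ~ 3.9e+06

The result, of order four million solar masses packed inside S2's orbit, is the kind of estimate that makes a supermassive black hole the only plausible candidate.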
Physical sciences
Milky Way
Astronomy
22688491
https://en.wikipedia.org/wiki/Musa%20acuminata
Musa acuminata
Musa acuminata is a species of banana native to Southern Asia, its range comprising the Indian Subcontinent and Southeast Asia. Many of the modern edible dessert bananas are from this species, although some are hybrids with Musa balbisiana. First cultivated by humans around 8000 BCE, it is one of the early examples of domesticated plants. Description Musa acuminata is classified by botanists as an herbaceous plant and an evergreen and a perennial, but not as a tree. The trunk (known as the pseudostem) is made of tightly packed layers of leaf sheaths emerging from completely or partially buried corms. The leaves are at the top of the leaf sheaths, or petioles and in the subspecies M. a. truncata the blade or lamina is up to in length and wide. The inflorescence grows horizontally or obliquely from the trunk. The individual flowers are white to yellowish-white in color and are negatively geotropic (that is, growing upwards and away from the ground). Both male and female flowers are present in a single inflorescence. Female flowers are located near the base (and develop into fruit), and the male flowers located at the tipmost top-shaped bud in between leathery bracts. The rather slender fruits are berries, the size of each depends on the number of seeds they contain. Each fruit can have 15 to 62 seeds. Each fruit bunch can have an average of 161.76 ± 60.62 fingers with each finger around in size. The seeds of wild M. acuminata are around in diameter. They are subglobose or angular in shape and very hard. The tiny embryo is located at the end of the micropyle. Each seed of M. acuminata typically produces around four times its size in edible starchy pulp (the parenchyma, the portion of the bananas eaten), around . Wild M. acuminata is diploid with 2n=2x=22 chromosomes, while cultivated varieties (cultivars) are mostly triploid (2n=3x=33) and parthenocarpic, meaning producing fruit without seeds. The most familiar dessert banana cultivars belong to the Cavendish subgroup. These high yielding cultivars were produced through selection of the natural mutations resulting from the normal vegetative propagation of banana farming. The ratio of pulp to seeds increases dramatically in "seedless" edible cultivars: the small and largely sterile seeds are now surrounded by 23 times their size in edible pulp. The seeds themselves are reduced to tiny black specks along the central axis of the fruit. Taxonomy Musa acuminata belongs to section Musa (formerly Eumusa) of the genus Musa. It belongs to the Family Musaceae of the Order Zingiberales. It is divided into several subspecies (see section below). M. acuminata was first described by the Italian botanist Luigi Aloysius Colla in the book Memorie della Reale Accademia delle Scienze di Torino (1820). Although other authorities have published various names for this species and its hybrids mistaken for different species (notably Musa sapientum by Linnaeus which is now known to be a hybrid of M. acuminata and Musa balbisiana), Colla's publication is the oldest name for the species and thus has precedence over the others from the rules of the International Code of Botanical Nomenclature. Colla also was the first authority to recognize that both Musa acuminata and Musa balbisiana were wild ancestral species, even though the specimen he described was a naturally occurring seedless polyploid like cultivated bananas. Subspecies Musa acuminata is highly variable and the number of subspecies accepted can vary from six to nine between different authorities. 
The following are the most commonly accepted subspecies: Musa acuminata subsp. burmannica Simmonds = Musa acuminata subsp. burmannicoides De Langhe Found in Burma, southern India, and Sri Lanka. Musa acuminata subsp. errans Argent = Musa errans Teodoro, Musa troglodyatarum L. var. errans, Musa errans Teodoro var. botoan Known as saging matsing and saging chonggo (both meaning 'monkey banana'), saging na ligao ('wild banana'), and agutay in Filipino. Found in the Philippines. It is a significant maternal ancestor of many modern dessert bananas (AA and AAA groups). It is an attractive subspecies with blue-violet inflorescence and very pale green unripe fruits. Musa acuminata subsp. malaccensis (Ridley) Simmonds = Musa malaccensis Ridley Found in peninsular Malaysia and Sumatra. It is the paternal parent of the latundan banana. Musa acuminata subsp. microcarpa (Beccari) Simmonds = Musa microcarpa Beccari Found in Borneo. It is the ancestor of the cultivar 'Viente Cohol'Musa acuminata subsp. siamea Simmonds Found in Cambodia, Laos, and Thailand. Musa acuminata subsp. truncata (Ridley) Kiew Musa acuminata subsp. zebrina (Van Houtte) R. E. Nasution Commonly known as blood bananas. Native to Java. It is cultivated as an ornamental plant for the dark red patches of color on their predominantly dark green leaves. It has very slender pseudostems with fruits containing seeds like those of grapes. It is one of the earliest bananas spread eastwards to the Pacific and westward towards Africa, where it became the paternal parent of the East African Highland bananas (the Mutika/Lujugira subgroup of the AAA group). In Hawaii it is known as the mai'a 'oa, and is of cultural and folk medicinal significance as the only seeded banana to be introduced to the islands before European contact. Distribution Musa acuminata is native to the biogeographical region of Malesia and most of mainland Indochina. M. acuminata favors wet tropical climates in contrast to the hardier M. balbisiana, the species it hybridized extensively with to provide almost all modern cultivars of edible bananas. Subsequent spread of the species outside of its native region is thought to be purely the result of human intervention. Early farmers introduced M. acuminata into the native range of M. balbisiana resulting in hybridization and the development of modern edible clones. AAB cultivars were spread from somewhere around the Philippines about 4 kya (2000 BCE) and resulted in the distinct banana cultivars known as the Maia Maoli or Popoulo group bananas in the Pacific islands. They may have been introduced as well to South America during Precolumbian times from contact with early Polynesian sailors, although evidence of this is debatable. Westward spread included Africa which already had evidence of M. acuminata × M. balbisiana hybrid cultivation from as early as 1000 to 400 BCE. They were probably introduced first to Madagascar from Indonesia. From West Africa, they were introduced to the Canary islands by the Portuguese in the 16th century, and from there were introduced to Hispaniola (modern Haiti and the Dominican Republic) in 1516. Ecology Wild Musa acuminata is propagated sexually by seeds or asexually by suckers. Edible parthenocarpic cultivars are usually cultivated by suckers in plantations or cloned by tissue culture. Seeds are also still used in research for developing new cultivars. M. acuminata is a pioneer species. It rapidly exploits newly disturbed areas, like areas recently subjected to forest fires. 
It is also considered a 'keystone species' in certain ecosystems, paving the way for greater wildlife diversity once it has established itself in an area. It is particularly important as a food source for wildlife due to its rapid regeneration. M. acuminata bears flowers that, by their very structure, make self-pollination difficult. It takes about four months for the flowers to develop into fruits, with the fruit clusters at the bases ripening sooner than those at the tip. A large variety of wildlife feeds on the fruits. These include frugivorous bats, birds, squirrels, tree shrews, civets, rats, mice, monkeys, and apes. These animals are also important for seed dispersal. Mature seeds germinate readily 2 to 3 weeks after sowing. Unsprouted, they can remain viable from a few months to two years of storage. Nevertheless, studies show that clone plantlets are much more likely to survive than seedlings germinated from seeds. Domestication In 1955, Norman Simmonds and Ken Shepherd revised the classification of modern edible bananas based on their genetic origins. Their classification depends on how many of the characteristics of the two ancestral species (Musa acuminata and Musa balbisiana) are exhibited by the cultivars. Most banana cultivars which exhibit purely or mostly Musa acuminata genomes are dessert bananas, while hybrids of M. acuminata and M. balbisiana are mostly cooking bananas or plantains. Musa acuminata is one of the earliest plants to be domesticated by humans for agriculture, 7,000 years ago in New Guinea and Wallacea. It has been suggested that M. acuminata may have originally been domesticated for parts other than the fruit: either for fiber, for construction materials, or for its edible male bud. The plants were selected early for parthenocarpy and seed sterility in their fruits, a process that might have taken thousands of years. This initially led to the first 'human-edible' banana diploid clones (modern AA cultivars). Diploid clones are still able to produce viable seeds when pollinated by wild species. This resulted in the development of triploid clones which were conserved for their larger fruit. M. acuminata was later introduced into mainland Indochina into the range of another ancestral wild banana species – Musa balbisiana, a hardier species of lesser genetic diversity than M. acuminata. Hybridization between the two resulted in drought-resistant edible cultivars. Modern edible banana and plantain cultivars are derived from permutations of hybridization and polyploidy of the two. Ornamental M. acuminata is one of several banana species cultivated as an ornamental plant, for its striking shape and foliage. In temperate regions it requires protection in winter, as it does not tolerate temperatures below . The cultivar M. acuminata 'Dwarf Cavendish' (AAA Group) has gained the Royal Horticultural Society's Award of Garden Merit. Genome D'Hont et al. (2012) found three whole-genome duplications in the evolutionary history of this species. Their analysis is consistent with these duplications occurring early in the evolution of the genus, prior to the speciation of M. acuminata.
Biology and health sciences
Tropical and tropical-like fruit
Plants
22689597
https://en.wikipedia.org/wiki/Warm%E2%80%93hot%20intergalactic%20medium
Warm–hot intergalactic medium
The warm–hot intergalactic medium (WHIM) is the sparse, warm-to-hot (10⁵ to 10⁷ K) plasma that cosmologists believe to exist in the spaces between galaxies and to contain 40–50% of the baryonic 'normal matter' in the universe at the current epoch. The WHIM can be described as a web of hot, diffuse gas stretching between galaxies, and consists of plasma, as well as atoms and molecules, in contrast to dark matter. The WHIM is a proposed solution to the missing baryon problem, where the observed amount of baryonic matter does not match theoretical predictions from cosmology. Much of what is known about the warm–hot intergalactic medium comes from computer simulations of the cosmos. The WHIM is expected to form a filamentary structure of tenuous, highly ionized baryons with a density of 1−10 particles per cubic meter. Within the WHIM, gas shocks are created as a result of active galactic nuclei, along with the gravitationally driven processes of merging and accretion. Part of the gravitational energy supplied by these effects is converted into thermal emission of the matter by collisionless shock heating. Because of the high temperature of the medium, the expectation is that it is most easily observed through the absorption or emission of ultraviolet and low-energy X-ray radiation. To locate the WHIM, researchers examined X-ray observations of a rapidly growing supermassive black hole known as an active galactic nucleus, or AGN. Oxygen atoms in the WHIM were seen to absorb X-rays passing through the medium. In May 2010, a giant reservoir of WHIM was detected by the Chandra X-ray Observatory lying along the wall-shaped structure of galaxies (the Sculptor Wall) some 400 million light-years from Earth. In 2018, observations of highly ionized extragalactic oxygen atoms appeared to confirm simulations of the WHIM mass distribution. Observations of dispersion from fast radio bursts in 2020 further appeared to confirm that the missing baryonic mass is located in the WHIM. Circumgalactic medium Conceptually similar to the WHIM, the circumgalactic medium (CGM) is a diffuse, nearly invisible halo of gas surrounding a galaxy, extending from its interstellar medium (ISM) out to its virial radius. Current thinking is that the CGM is an important source of star-forming material, and that it regulates a galaxy's gas supply. If visible, the CGM of the Andromeda Galaxy (1.3-2 million ly) would stretch three times the width of the Big Dipper, easily the biggest feature in the night sky, and would even bump into our own CGM, though that is not fully known because we reside within it. There are two layered parts to Andromeda's CGM: an inner shell of gas is nested inside an outer shell. The inner shell (0.5 million ly) is thought to be more dynamic and turbulent because of outflows from supernovae, while the outer shell is hotter and smoother.
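A brief note on why the fast-radio-burst dispersion mentioned above traces the missing baryons: the frequency-dependent arrival delay of a burst measures the column density of free electrons, and hence of ionized baryons, along the line of sight. In standard notation (not taken from this article),
\[
  \mathrm{DM} = \int n_{e}\, \mathrm{d}l, \qquad
  \Delta t \approx 4.15\ \text{ms} \times
  \left( \frac{\mathrm{DM}}{\text{pc cm}^{-3}} \right)
  \left( \frac{\nu}{\text{GHz}} \right)^{-2}.
\]
Extragalactic bursts show more dispersion than the Milky Way and the host galaxy can account for, and the excess grows with distance, which is how such measurements can weigh the diffuse ionized gas in the WHIM and circumgalactic medium.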
Physical sciences
Basics_2
Astronomy
20130936
https://en.wikipedia.org/wiki/Fetus
Fetus
A fetus or foetus (; : fetuses, foetuses, rarely feti or foeti) is the unborn mammalian offspring that develops from an embryo. Following the embryonic stage, the fetal stage of development takes place. Prenatal development is a continuum, with no clear defining feature distinguishing an embryo from a fetus. However, in general a fetus is characterized by the presence of all the major body organs, though they will not yet be fully developed and functional, and some may not yet be situated in their final anatomical location. In human prenatal development, fetal development begins from the ninth week after fertilization (which is the eleventh week of gestational age) and continues until the birth of a newborn. Etymology The word fetus (plural fetuses or rarely feti) comes from Latin fētus 'offspring, bringing forth, hatching of young'. The Latin plural fetūs is not used in English; occasionally the plural feti is used in English by analogy with second-declension Latin nouns. The predominant British, Irish, and Commonwealth spelling is foetus, except in medical usage, where fetus is preferred. The -oe- spelling is first attested in 1594 and arose in Late Latin by analogy with classical Latin words like amoenus. Development in humans Weeks 9 to 16 (2 to 3.6 months) In humans, the fetal stage starts nine weeks after fertilization. At this time the fetus is typically about in length from crown to rump, and weighs about 8 grams. The head makes up nearly half of the size of the fetus. Breathing-like movements of the fetus are necessary for the stimulation of lung development, rather than for obtaining oxygen. The heart, hands, feet, brain, and other organs are present, but are only at the beginning of development and have minimal operation. Uncontrolled movements and twitches occur as muscles, the brain, and pathways begin to develop. Weeks 17 to 25 (3.6 to 6.6 months) A woman pregnant for the first time (nulliparous) typically feels fetal movements at about 21 weeks, whereas a woman who has given birth before will typically feel movements by 20 weeks. By the end of the fifth month, the fetus is about long. Weeks 26 to 38 (6.6 to 8.6 months) The amount of body fat rapidly increases. Lungs are not fully mature. Neural connections between the sensory cortex and thalamus develop as early as 24 weeks of gestational age, but the first evidence of their function does not occur until around 30 weeks. Bones are fully developed but are still soft and pliable. Iron, calcium, and phosphorus become more abundant. Fingernails reach the end of the fingertips. The lanugo, or fine hair, begins to disappear until it is gone except on the upper arms and shoulders. Small breast buds are present in both sexes. Head hair becomes coarse and thicker. Birth is imminent and occurs around the 38th week after fertilization. The fetus is considered full-term between weeks 37 and 40 when it is sufficiently developed for life outside the uterus. It may be in length when born. Control of movement is limited at birth, and purposeful voluntary movements continue to develop until puberty. Variation in growth There is much variation in the growth of the human fetus. When the fetal size is less than expected, the condition is known as intrauterine growth restriction also called fetal growth restriction; factors affecting fetal growth can be maternal, placental, or fetal. 
Maternal factors include maternal weight, body mass index, nutritional state, emotional stress, toxin exposure (including tobacco, alcohol, heroin, and other drugs which can also harm the fetus in other ways), and uterine blood flow. Placental factors include size, microstructure (densities and architecture), umbilical blood flow, transporters and binding proteins, nutrient utilization, and nutrient production. Fetal factors include the fetal genome, nutrient production, and hormone output. Also, female fetuses tend to weigh less than males, at full term. Fetal growth is often classified as follows: small for gestational age (SGA), appropriate for gestational age (AGA), and large for gestational age (LGA). SGA can result in low birth weight, although premature birth can also result in low birth weight. Low birth weight increases the risk for perinatal mortality (death shortly after birth), asphyxia, hypothermia, polycythemia, hypocalcemia, immune dysfunction, neurologic abnormalities, and other long-term health problems. SGA may be associated with growth delay, or it may instead be associated with absolute stunting of growth. Viability Fetal viability refers to a point in fetal development at which the fetus may survive outside the womb. The lower limit of viability is approximately months gestational age and is usually later. There is no sharp limit of development, age, or weight at which a fetus automatically becomes viable. According to data from 2003 to 2005, survival rates are 20–35% for babies born at 23 weeks of gestation ( months); 50–70% at 24–25 weeks (6 – months); and >90% at 26–27 weeks ( – months) and over. It is rare for a baby weighing less than to survive. When such premature babies are born, the main causes of mortality are that neither the respiratory system nor the central nervous system are completely differentiated. If given expert postnatal care, some preterm babies weighing less than may survive, and are referred to as extremely low birth weight or immature infants. Preterm birth is the most common cause of infant mortality, causing almost 30 percent of neonatal deaths. At an occurrence rate of 5% to 18% of all deliveries, it is also more common than postmature birth, which occurs in 3% to 12% of pregnancies. Circulatory system Before birth The heart and blood vessels of the circulatory system form relatively early during embryonic development, but continue to grow and develop in complexity in the growing fetus. A functional circulatory system is a biological necessity since mammalian tissues can not grow more than a few cell layers thick without an active blood supply. The prenatal circulation of blood is different from postnatal circulation, mainly because the lungs are not in use. The fetus obtains oxygen and nutrients from the mother through the placenta and the umbilical cord. Blood from the placenta is carried to the fetus by the umbilical vein. About half of this enters the fetal ductus venosus and is carried to the inferior vena cava, while the other half enters the liver proper from the inferior border of the liver. The branch of the umbilical vein that supplies the right lobe of the liver first joins with the portal vein. The blood then moves to the right atrium of the heart. In the fetus, there is an opening between the right and left atrium (the foramen ovale), and most of the blood flows from the right into the left atrium, thus bypassing pulmonary circulation. 
The majority of blood flow is into the left ventricle from where it is pumped through the aorta into the body. Some of the blood moves from the aorta through the internal iliac arteries to the umbilical arteries and re-enters the placenta, where carbon dioxide and other waste products from the fetus are taken up and enter the mother's circulation. Some of the blood from the right atrium does not enter the left atrium, but enters the right ventricle and is pumped into the pulmonary artery. In the fetus, there is a special connection between the pulmonary artery and the aorta, called the ductus arteriosus, which directs most of this blood away from the lungs (which are not being used for respiration at this point as the fetus is suspended in amniotic fluid). Postnatal development With the first breath after birth, the system changes suddenly. Pulmonary resistance is reduced dramatically, prompting more blood to move into the pulmonary arteries from the right atrium and ventricle of the heart and less to flow through the foramen ovale into the left atrium. The blood from the lungs travels through the pulmonary veins to the left atrium, producing an increase in pressure that pushes the septum primum against the septum secundum, closing the foramen ovale and completing the separation of the newborn's circulatory system into the standard left and right sides. Thereafter, the foramen ovale is known as the fossa ovalis. The ductus arteriosus normally closes within one or two days of birth, leaving the ligamentum arteriosum, while the umbilical vein and ductus venosus usually close within two to five days after birth, leaving, respectively, the liver's ligamentum teres and ligamentum venosum. Immune system The placenta functions as a maternal-fetal barrier against the transmission of microbes. When this is insufficient, mother-to-child transmission of infectious diseases can occur. Maternal IgG antibodies cross the placenta, giving the fetus passive immunity against those diseases for which the mother has antibodies. This transfer of antibodies in humans begins as early as the fifth month (gestational age) and certainly by the sixth month. Developmental problems A developing fetus is highly susceptible to anomalies in its growth and metabolism, increasing the risk of birth defects. One area of concern is the lifestyle choices made during pregnancy. Diet is especially important in the early stages of development. Studies show that supplementation of the person's diet with folic acid reduces the risk of spina bifida and other neural tube defects. Another dietary concern is whether breakfast is eaten. Skipping breakfast could lead to extended periods of lower than normal nutrients in the maternal blood, leading to a higher risk of prematurity or birth defects. Alcohol consumption may increase the risk of the development of fetal alcohol syndrome, a condition leading to intellectual disability in some infants. Smoking during pregnancy may also lead to miscarriages and low birth weight. Low birth weight is a concern for medical providers due to the tendency of these infants, described as "premature by weight", to have a higher risk of secondary medical problems. X-rays are known to have possible adverse effects on the development of the fetus, and the risks need to be weighed against the benefits. Congenital disorders are acquired before birth. 
Infants with certain congenital heart defects can survive only as long as the ductus remains open: in such cases the closure of the ductus can be delayed by the administration of prostaglandins to permit sufficient time for the surgical correction of the anomalies. Conversely, in cases of patent ductus arteriosus, where the ductus does not properly close, drugs that inhibit prostaglandin synthesis can be used to encourage its closure, so that surgery can be avoided. Other heart birth defects include ventricular septal defect, pulmonary atresia, and tetralogy of Fallot. An abdominal pregnancy can result in the death of the fetus; in the rare cases where this is not resolved, the dead fetus can calcify into a lithopedion. Fetal pain The existence and implications of fetal pain are debated politically and academically. According to the conclusions of a review published in 2005, "Evidence regarding the capacity for fetal pain is limited but indicates that fetal perception of pain is unlikely before the third trimester." However, developmental neurobiologists argue that the establishment of thalamocortical connections (at about months) is an essential event with regard to fetal perception of pain. Nevertheless, the perception of pain involves sensory, emotional and cognitive factors and it is "impossible to know" when pain is experienced, even if it is known when thalamocortical connections are established. Some authors argue that fetal pain is possible from the second half of pregnancy. Evidence suggests that the perception of pain in the fetus occurs well before late gestation. Whether a fetus has the ability to feel pain and suffering is part of the abortion debate. In the United States, for example, anti-abortion advocates have proposed legislation that would require providers of abortions to inform pregnant women that their fetuses may feel pain during the procedure and that would require each person to accept or decline anesthesia for the fetus. Legal and social issues Abortion of a human pregnancy is legal and/or tolerated in most countries, although with gestational time limits that normally prohibit late-term abortions. Other animals A fetus is a stage in the prenatal development of viviparous organisms. This stage lies between embryogenesis and birth. Many vertebrates have fetal stages, ranging from most mammals to many fish. In addition, some invertebrates bear live young, including some species of Onychophora and many arthropods. The fetuses of most mammals are situated similarly to the human fetus within their mothers. However, the anatomy of the area surrounding a fetus is different in litter-bearing animals compared to humans: each fetus of a litter-bearing animal is surrounded by placental tissue and is lodged along one of two long uteri instead of the single uterus found in a human female. Development at birth varies considerably among animals, and even among mammals. Altricial species are relatively helpless at birth and require considerable parental care and protection. In contrast, precocial animals are born with open eyes, have hair or down, have large brains, and are immediately mobile and somewhat able to flee from, or defend themselves against, predators. Primates are precocial at birth, with the exception of humans. The duration of gestation in placental mammals varies from 18 days in jumping mice to 23 months in elephants. Generally speaking, fetuses of larger land mammals require longer gestation periods. 
A benefit of the fetal stage is that young are more developed when they are born. Therefore, they may need less parental care and may be better able to fend for themselves. However, carrying fetuses exerts costs on the mother, who must take in extra food to fuel the growth of her offspring, and whose mobility and comfort may be affected (especially toward the end of the fetal stage). In some instances, the presence of a fetal stage may allow organisms to time the birth of their offspring to a favorable season.
Biology and health sciences
Animal ontogeny
null
526224
https://en.wikipedia.org/wiki/Indo-Australian%20plate
Indo-Australian plate
The Indo-Australian plate is or was a major tectonic plate. It is in the process of separating into three plates, and may already consist of more than one plate. It contains the continent of Australia, its surrounding ocean, and extends north-west to include the Indian subcontinent and the adjacent waters. Formation It was formed by the fusion of the then Indian and the then Australian plates approximately 43 million years ago. The fusion happened when the mid-ocean ridge in the Indian Ocean, which separated the two plates, ceased spreading. Regions Australia-New Guinea (Mainland Australia, New Guinea, and Tasmania), the Indian subcontinent, and Zealandia (New Caledonia, New Zealand, and Norfolk Island) are all fragments of the ancient supercontinent of Gondwana. As the ocean floor broke apart, these land masses fragmented from one another; for a time the spreading centers between them were thought to be dormant, and the fragments were regarded as fused into a single plate. However, research in the early 21st century indicates that separation of the Indo-Australian plate may have already occurred. Characteristics The eastern side of the plate is the convergent boundary with the Pacific plate. The Pacific plate sinks below the Australian plate and forms the Kermadec Trench and the island arcs of Tonga and Kermadec. New Zealand is situated along the southeastern boundary of the plate, which with New Caledonia makes up the southern and northern ends of the ancient landmass of Zealandia, which separated from Australia 85 million years ago. The central part of Zealandia sank under the sea. The southern margin of the plate forms a divergent boundary with the Antarctic plate. The western side is subdivided by the Indian plate, which borders the Arabian plate to the north and the African plate to the south. The northern margin of the Indian plate forms a convergent boundary with the Eurasian plate, which constitutes the active orogenic process of the Himalayas and the Hindukush mountains. The northeast side of the Australian plate forms a subduction boundary with the Eurasian plate in the Indian Ocean between the borders of Bangladesh and Burma and to the southwest of the Indonesian islands of Sumatra and Borneo. Along the northern Ninety East Ridge under the Indian Ocean there appears to be a weakness zone where the Indian and Australian plates are going different ways. The subduction boundary through Indonesia is reflected in the Wallace line. Plate movements The eastern part (Australian plate) is moving northward at the rate of per year while the western part (Indian plate) is moving only at the rate of per year due to the impediment of the Himalayas. In terms of the middle of India and Australia's landmasses, Australia is moving northward at per year relative to India. This differential movement has resulted in the compression of the former plate near its centre at Sumatra and the division into the separate Indian and Australian plates again. A third plate, known as the Capricorn plate, may also be separating off the western side of the Indian plate as part of the continued breakup of the Indo-Australian plate. Separation There is good evidence that the Indo-Australian plate is in the process of separation into new plates. 
Recent studies, and evidence from seismic events such as the 2012 Indian Ocean earthquakes, suggest that the Indo-Australian plate may have already broken up into two or three separate plates due primarily to stresses induced by the collision of the Indo-Australian plate with Eurasia along what later became the Himalayas, and that the Indian plate and Australian plate may have been separate for at least . Contemporary models suggest at present there is a deformation zone between the Indian and Australian plates, with both earthquake and global satellite navigation system data indicating that India and Australia are not moving on the same vectors northward. In due course, some expect a well defined localized boundary to reform between the Indian and Australian plates. Studies show the Ninetyeast Ridge has active faulting along its whole length so that while the simplest explanation is that the Indian and Australian plates have already separated here, it remains possible that only the Capricorn plate has separated from them.
Physical sciences
Tectonic plates
Earth science
526237
https://en.wikipedia.org/wiki/Twilight
Twilight
Twilight is sunlight illumination produced by diffuse sky radiation when the Sun is below the horizon as sunlight from the upper atmosphere is scattered in a way that illuminates both the Earth's lower atmosphere and also the Earth's surface. Twilight also is any period when this illumination occurs. The lower the Sun is beneath the horizon, the dimmer the sky (other factors such as atmospheric conditions being equal). When the Sun reaches 18° below the horizon, the illumination emanating from the sky is nearly zero, and evening twilight becomes nighttime. When the Sun approaches re-emergence, reaching 18° below the horizon, nighttime becomes morning twilight. Owing to its distinctive quality, primarily the absence of shadows and the appearance of objects silhouetted against the lit sky, twilight has long been popular with photographers and painters, who often refer to it as the blue hour, after the French expression . By analogy with evening twilight, sometimes twilight is used metaphorically to imply that something is losing strength and approaching its end. For example, very old people may be said to be "in the twilight of their lives". The collateral adjective for twilight is crepuscular, which may be used to describe the behavior of animals that are most active during this period. Definitions by geometry Twilight occurs according to the solar elevation angle θs, which is the position of the geometric center of the Sun relative to the horizon. There are three established and widely accepted subcategories of twilight: civil twilight (nearest the horizon), nautical twilight, and astronomical twilight (farthest from the horizon). Civil twilight Civil twilight is the period of time for which the geometric center of the Sun is between the horizon and 6° below the horizon. Civil twilight is the period when enough natural light remains so that artificial light in towns and cities is not needed. In the United States' military, the initialisms BMCT (begin morning civil twilight, i.e., civil dawn) and EECT (end evening civil twilight, i.e., civil dusk) are used to refer to the start of morning civil twilight and the end of evening civil twilight, respectively. Civil dawn is preceded by morning nautical twilight and civil dusk is followed by evening nautical twilight. Under clear weather conditions, civil twilight approximates the limit at which solar illumination suffices for the human eye to clearly distinguish terrestrial objects. Enough illumination renders artificial sources unnecessary for most outdoor activities. At civil dawn and at civil dusk, sunlight clearly defines the horizon while the brightest stars and planets can appear. As observed from the Earth (see apparent magnitude), sky-gazers know Venus, the brightest planet, as the "morning star" or "evening star" because they can see it during civil twilight. Although civil dawn marks the time of the first appearance of civil twilight before sunrise, and civil dusk marks the time of the first disappearance of civil twilight after sunset, civil twilight statutes typically denote a fixed period after sunset or before sunrise (most commonly 20–30 minutes) rather than how many degrees the Sun is below the horizon. Examples include when drivers of automobiles must turn on their headlights (called lighting-up time in the UK), when hunting is restricted, or when the crime of burglary is to be treated as nighttime burglary, which carries stiffer penalties in some jurisdictions. 
The period may affect when extra equipment, such as anti-collision lights, is required for aircraft to operate. In the US, civil twilight for aviation is defined in Part 1.1 of the Federal Aviation Regulations (FARs) as the time listed in the American Air Almanac. Nautical twilight Nautical twilight occurs when the geometric center of the Sun is between 12° and 6° below the horizon. After nautical dusk and before nautical dawn, sailors cannot navigate via the horizon at sea as they cannot clearly see the horizon. At nautical dawn and nautical dusk, the human eye finds it difficult, if not impossible, to discern traces of illumination near the sunset or sunrise point of the horizon (first light after nautical dawn but before civil dawn and nightfall after civil dusk but before nautical dusk). Sailors can take reliable star sightings of well-known stars, during the stage of nautical twilight when they can distinguish a visible horizon for reference (i.e. after astronomic dawn or before astronomic dusk). Under good atmospheric conditions with the absence of other illumination, during nautical twilight, the human eye may distinguish general outlines of ground objects but cannot participate in detailed outdoor operations. Nautical twilight has military considerations as well. The initialisms BMNT (begin morning nautical twilight, i.e. nautical dawn) and EENT (end evening nautical twilight, i.e. nautical dusk) are used and considered when planning military operations. A military unit may treat BMNT and EENT with heightened security, e.g. by "standing to", for which everyone assumes a defensive position. Astronomical twilight Astronomical twilight is defined as when the geometric center of the Sun is between 18° and 12° below the horizon. During astronomical twilight, the sky is dark enough to permit astronomical observation of point sources of light such as stars, except in regions with more intense skyglow due to light pollution, moonlight, auroras, and other sources of light. Some critical observations, such as of faint diffuse items such as nebulae and galaxies, may require observation beyond the limit of astronomical twilight. Theoretically, the faintest stars detectable by the naked eye (those of approximately the sixth magnitude) will become visible in the evening at astronomical dusk, and become invisible at astronomical dawn. Times of occurrence Between day and night Observers within about 48°34' of the Equator can view twilight twice each day on every date of the year between astronomical dawn, nautical dawn, or civil dawn, and sunrise as well as between sunset and civil dusk, nautical dusk, or astronomical dusk. This also occurs for most observers at higher latitudes on many dates throughout the year, except those around the summer solstice. However, at latitudes closer than 8°35' (between 81°25’ and 90°) to either Pole, the Sun cannot rise above the horizon nor sink more than 18° below it on the same day on any date, so this example of twilight cannot occur because the angular difference between solar noon and solar midnight is less than 17°10’. 
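The three geometric bands defined above (civil, nautical, astronomical) reduce to a simple threshold test on the solar elevation angle. The sketch below is illustrative rather than drawn from any almanac; the function name is invented, and it deliberately ignores refraction and the finite size of the solar disk, since the definitions here use the geometric center of the Sun.

```python
# A minimal sketch (not from the article) of the geometric definitions above:
# classify the lighting phase from the solar elevation angle in degrees
# (negative values mean the Sun's center is below the horizon).

def lighting_phase(solar_elevation_deg: float) -> str:
    if solar_elevation_deg >= 0:
        return "day"                      # Sun at or above the horizon
    if solar_elevation_deg >= -6:
        return "civil twilight"           # 0 to 6 degrees below the horizon
    if solar_elevation_deg >= -12:
        return "nautical twilight"        # 6 to 12 degrees below the horizon
    if solar_elevation_deg >= -18:
        return "astronomical twilight"    # 12 to 18 degrees below the horizon
    return "night"                        # more than 18 degrees below the horizon

for angle in (5, -3, -9, -15, -25):
    print(f"{angle:>4} deg -> {lighting_phase(angle)}")
```

Almanac software typically works the other way around, solving for the times at which the Sun's center crosses each of these thresholds for a given location and date.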
Observers within 63°26' of the Equator can view twilight twice each day on every date between the month of the autumnal equinox and the month of the vernal equinox between astronomical dawn, nautical dawn, or civil dawn, and sunrise as well as between sunset and civil dusk, nautical dusk, or astronomical dusk, i.e., from September 1 to March 31 of the following year in the Northern Hemisphere and from March 1 to September 30 in the Southern Hemisphere. The boundary latitude between nighttime and twilight at solar midnight varies depending on the month: In January or July, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 48°50' North or South, because then the Sun's declination is less than 23°10' from the Equator; In February or August, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 53°47' North or South, because then the Sun's declination is less than 18°13' from the Equator; In March or September before the equinoxes, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 63°26' North or South, because before the equinoxes the Sun's declination is then less than 8°34' from the Equator; During the equinoxes, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 72°00' North or South, because during the equinoxes the Sun is crossing the Equator line; In March or September after the equinoxes, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 67°45' North or South, because after the equinoxes the Sun's declination is then less than 4°15' from the Equator; In April or October, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 57°09' North or South, because the Sun's declination is then less than 14°51' from the Equator; In May or November, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 50°03' North or South, because the Sun's declination is then less than 21°57' from the Equator; In June or December, astronomical dawn to sunrise or sunset to astronomical dusk occurs at latitudes less than 48°34' North or South, because in June the Sun crosses the Tropic of Cancer (about 23°26' North) and in December the Sun crosses the Tropic of Capricorn (about 23°26' South). Lasting from one day to the next At latitudes greater than about 48°34' North or South, on dates near the summer solstice (June 21 in the Northern Hemisphere or December 21 in the Southern Hemisphere), twilight can last from sunset to sunrise, since the Sun does not sink more than 18 degrees below the horizon, so complete darkness does not occur even at solar midnight. These latitudes include many densely populated regions of the Earth, including the entire United Kingdom and other countries in northern Europe and even parts of central Europe. This also occurs in the Southern Hemisphere, but occurs on December 21. This type of twilight also occurs between one day and the next at latitudes within the polar circles shortly before and shortly after the period of midnight sun. The summer solstice in the Northern Hemisphere is on June 21st, while the summer solstice in the Southern Hemisphere is on December 21st. Civil twilight: between about 60°34' and 65°44' north or south. In the northern hemisphere, this includes the center of Alaska, Iceland, Finland, Sweden, Norway, Faroe Islands and Shetland. 
In the southern hemisphere this includes parts of the Southern Ocean and the northern tip of the Antarctic Peninsula. When civil twilight lasts all night, this is also referred to as a white night. Nautical twilight: between about 54°34' and 60°34' north or south. In the northern hemisphere this includes the center of Alaska, Russia, Canada, Estonia, Latvia, Scotland, Norway, Sweden, Finland, Lithuania, and Denmark. In the southern hemisphere this includes the southernmost point of South America, Ushuaia in Argentina and Puerto Williams in Chile. When nautical twilight lasts all night, this is also referred to as a white night. Astronomical twilight: between about 48°34' and 54°34' north or south. In the northern hemisphere, this includes the center of Isle of Man, Aleutian Islands, United Kingdom, Belarus, Ireland, Netherlands, Poland, Germany, Belgium, Czech Republic, Bellingham, Washington, Orcas Island, Washington, Vancouver, British Columbia, Paris, France, Luxembourg, Guernsey, Ukraine, Slovakia and Hungary. In the southern hemisphere this includes the center of South Georgia and the South Sandwich Islands, Bouvet Island, Heard Island, Falkland Islands. It also includes El Calafate and Río Gallegos in Argentina, and Puerto Natales in Chile. When astronomical twilight lasts all night, this does not constitute a white night. This phenomenon is known as the grey nights, nights when it does not get dark enough for astronomers to do their observations of the deep sky. Between one night and the next In Arctic and Antarctic latitudes in wintertime, the polar night only rarely produces complete darkness for 24 hours each day. This can occur only at locations within about 5.5 degrees of latitude of the Pole, and there only on dates close to the winter solstice. At all other latitudes and dates, the polar night includes a daily period of twilight, when the Sun is not far below the horizon. Around winter solstice, when the solar declination changes slowly, complete darkness lasts several weeks at the Pole itself, e.g., from May 11 to July 31 at Amundsen–Scott South Pole Station. The North Pole experiences this from November 13 to January 29. Solar noon at civil twilight during a polar night: between about 67°24' and 72°34' north or south. Solar noon at nautical twilight during a polar night: between about 72°34' and 78°34' north or south. Solar noon at astronomical twilight during a polar night: between about 78°34' and 84°34' north or south. Solar noon at night during a polar night: between approximately 84°34' and exactly 90° north or south. Lasting for 24 hours At latitudes greater than 81°25' North or South, as the Sun's angular elevation difference is less than 18 degrees, twilight can last for the entire 24 hours. This occurs for one day at latitudes near 8°35' from the Pole and extends up to several weeks the further toward the Pole one goes. This happens both near the North Pole and near the South Pole. The only permanent settlement to experience this condition is Alert, Nunavut, Canada, where it occurs from February 22–26, and again from October 15–19. Duration The duration of twilight depends on the latitude and the time of the year. The apparent travel of the Sun occurs at the rate of 15 degrees per hour (360° per day), but sunrise and sunset happen typically at oblique angles to the horizon and the actual duration of any twilight period will be a function of that angle, being longer for more oblique angles. 
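As a rough worked example of why that angle matters, assume the idealized geometry used in this section: the Sun moving at 15° per hour and an axial tilt of about 23.44°. The numbers below are approximations rather than almanac values, but they reproduce the 24-minute equatorial civil twilight and the roughly 48.6° all-night-twilight threshold quoted elsewhere in this article.

```python
# Rough worked example under the idealized geometry described above.
# At the equator near an equinox the Sun crosses the horizon almost
# perpendicularly, dropping at ~15 degrees per hour, so each 6-degree
# twilight band takes about 6/15 of an hour.

SUN_RATE_DEG_PER_HOUR = 15.0      # 360 degrees per day
AXIAL_TILT_DEG = 23.44            # approximate obliquity of the ecliptic

civil_twilight_minutes = 6.0 / SUN_RATE_DEG_PER_HOUR * 60
print(f"shortest civil twilight (equator, equinox): ~{civil_twilight_minutes:.0f} min")

# Latitude above which the Sun never sinks 18 degrees below the horizon at
# the summer solstice, so astronomical twilight can last all night:
all_night_twilight_latitude = 90.0 - 18.0 - AXIAL_TILT_DEG
print(f"all-night twilight possible above ~{all_night_twilight_latitude:.1f} degrees latitude")
```

Away from the equator the Sun crosses these bands at a shallower angle, so the same 6 degrees of descent takes longer.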
This angle of the Sun's motion with respect to the horizon changes with latitude as well as the time of year (affecting the angle of the Earth's axis with respect to the Sun). At Greenwich, England (51.5°N), the duration of civil twilight will vary from 33 minutes to 48 minutes, depending on the time of year. At the equator, civil twilight can last as little as 24 minutes. This is true because at low latitudes the Sun's apparent movement is perpendicular to the observer's horizon. But at the poles, civil twilight can be as long as 2–3 weeks. In the Arctic and Antarctic regions, twilight (if there is any) can last for several hours. There is no astronomical twilight at the poles near the winter solstice (for about 74 days at the North Pole and about 80 days at the South Pole). As one gets closer to the Arctic and Antarctic circles, the Sun's disk moves toward the observer's horizon at a lower angle. The observer's earthly location will pass through the various twilight zones less directly, taking more time. Within the polar circles, twenty-four-hour daylight is encountered in summer, and in regions very close to the poles, twilight can last for weeks on the winter side of the equinoxes. Outside the polar circles, where the angular distance from the polar circle is less than the angle which defines twilight (see above), twilight can continue through local midnight near the summer solstice. The precise position of the polar circles, and the regions where twilight can continue through local midnight, varies slightly from year to year with Earth's axial tilt. The lowest latitudes at which the various twilights can continue through local midnight are approximately 60.561° (60°33′43″) for civil twilight, 54.561° (54°33′43″) for nautical twilight and 48.561° (48°33′43″) for astronomical twilight. These are the largest cities of their respective countries where the various twilights can continue through local solar midnight: Civil twilight (or white night) from sunset to sunrise: Tampere, Oulu, Umeå, Trondheim, Tórshavn, Reykjavík, Nuuk, Whitehorse, Yellowknife, Anchorage, Fairbanks, Arkhangelsk, Yakutsk and Baltasound. In the Southern Hemisphere, the only minor permanent settlement to experience this is Villa Las Estrellas, on the northern tip of the Antarctic Peninsula, politically part of Chile. Nautical twilight (or if brighter, white night) from civil dusk to civil dawn: Saint Petersburg, Moscow, Vitebsk, Vilnius, Riga, Tallinn, Wejherowo, Flensburg, Helsinki, Stockholm, Copenhagen, Oslo, Newcastle upon Tyne, Edinburgh, Glasgow, Belfast, Letterkenny, Petropavl, Nanortalik, Grande Prairie, Juneau, Ushuaia, and Puerto Williams. Astronomical twilight (or grey night) from nautical dusk to nautical dawn: Hulun Buir, Erdenet, Nur-Sultan, Samara, Kyiv, Minsk, Alytus, Warsaw, Košice, Paris, Dublin, Zwettl, Prague, Stanley (Falkland Islands), Berlin, Hamburg, Luxembourg City, Brussels, Amsterdam, London, Cardiff, Vancouver, Calgary, Edmonton, Unalaska, Bellingham (largest in the continental USA), Rio Gallegos, and Punta Arenas. 
Major cities that near astronomical twilight (or grey night) from nautical dusk to nautical dawn: Khabarovsk (48°29'0"N), Dnipro (48°27'0"N), Victoria (48°25'42"N), Saguenay (48°25′0"N), Brest (48°23′26"N), Thunder Bay (48°22′56″N), Vienna (48°12′30″N), Bratislava (48°8′38″N), Munich (48°8'0"N), Seattle (47°36’35"N) Although Helsinki, Oslo, Stockholm, Tallinn, and Saint Petersburg also enter into nautical twilight after sunset, they do have noticeably lighter skies at night during the summer solstice than other locations mentioned in their category above, because they do not go far into nautical twilight. A white night is a night with only civil twilight which lasts from sunset to sunrise. At the winter solstice within the polar circle, twilight can extend through solar noon at latitudes below 72.561° (72°33′43″) for civil twilight, 78.561° (78°33′43″) for nautical twilight, and 84.561° (84°33′43″) for astronomical twilight. On other planets Twilight on Mars is longer than on Earth, lasting for up to two hours before sunrise or after sunset. Dust high in the atmosphere scatters light to the night side of the planet. Similar twilights are seen on Earth following major volcanic eruptions. In religion Christianity In Christian practice, "vigil" observances often occur during twilight on the evening before major feast days or holidays. For example, the Easter Vigil is held in the hours of darkness between sunset on Holy Saturday and sunrise on Easter Day – most commonly in the evening of Holy Saturday or midnight – and is the first celebration of Easter, days traditionally being considered to begin at sunset. Hinduism Hinduism prescribes the observance of certain practices during twilight, a period generally called . The period is also called by the poetic form of in Sanskrit, literally 'cow dust', referring to the time cows returned from the fields after grazing, kicking up dust in the process. Many rituals, such as Sandhyavandanam and puja, are performed at the twilight hour. Consuming food is not advised during this time. According to some adherents, asuras are regarded to be active during these hours. One of the avatars of Vishnu, Narasimha, is closely associated with the twilight period. According to Hindu scriptures, an asura king, Hiranyakashipu, performed penance and obtained a boon from Brahma that he could not be killed during day or night, neither by human nor animal, neither inside his house nor outside. Vishnu appeared in a half-man half-lion form (neither human nor animal), and ended Hiranyakashipu's life at twilight (neither day nor night) while he was placed in the threshold of his house (neither inside nor outside). Islam Twilight is important in Islam as it determines when certain universally obligatory prayers are to be recited. Morning twilight is when morning prayers (Fajr) are done, while evening twilight is the time for evening prayers (Maghrib prayer). Also during Ramadhan, the time for (morning meal before fasting) ends at morning twilight, while fasting ends after sunset. There is also an important discussion in Islamic jurisprudence between "true dawn" and "false dawn". Judaism In Judaism, twilight is considered neither day nor night; consequently it is treated as a safeguard against encroachment upon either. It can be considered a liminal time. For example, the twilight of Friday is reckoned as Sabbath eve, and that of Saturday as Sabbath day; and the same rule applies to festival days.
Physical sciences
Celestial mechanics
Astronomy
526459
https://en.wikipedia.org/wiki/Power%20loom
Power loom
A power loom is a mechanized loom, and was one of the key developments in the industrialization of weaving during the early Industrial Revolution. The first power loom was designed and patented in 1785 by Edmund Cartwright. It was refined over the next 47 years until a design by the Howard and Bullough company made the operation completely automatic. This device was designed in 1834 by James Bullough and William Kenworthy, and was named the Lancashire loom. By the year 1850, there were a total of around 260,000 power looms in operation in England. Decades later came the Northrop loom, which replenished the shuttle when it was empty; this replaced the Lancashire loom. Shuttle looms The main components of the loom are the warp beam, heddles, harnesses, shuttle, reed, and takeup roll. In the loom, yarn processing includes shedding, picking, battening and taking-up operations. Shedding. Shedding is the raising of the warp yarns to form a shed through which the filling yarn, carried by the shuttle, can be inserted. The shed is the vertical space between the raised and unraised warp yarns. On the modern loom, simple and intricate shedding operations are performed automatically by the heddle or heald frame, also known as a harness. This is a rectangular frame to which a series of wires, called heddles or healds, are attached. The yarns are passed through the eye holes of the heddles, which hang vertically from the harnesses. The weave pattern determines which harness controls which warp yarns, and the number of harnesses used depends on the complexity of the weave. Two common methods of controlling the heddles are dobbies and a Jacquard Head. Picking. As the harnesses raise the heddles or healds, which raise the warp yarns, the shed is created. The filling yarn is inserted through the shed by a small carrier device called a shuttle. The shuttle is normally pointed at each end to allow passage through the shed. In a traditional shuttle loom, the filling yarn is wound onto a quill, which in turn is mounted in the shuttle. The filling yarn emerges through a hole in the shuttle as it moves across the loom. A single crossing of the shuttle from one side of the loom to the other is known as a pick. As the shuttle moves back and forth across the shed, it weaves an edge, or selvage, on each side of the fabric to prevent the fabric from raveling. Battening. As the shuttle moves across the loom laying down the fill yarn, it also passes through openings in another frame called a reed (which resembles a comb). With each picking operation, the reed presses or battens each filling yarn against the portion of the fabric that has already been formed. The point where the fabric is formed is called the fell. Conventional shuttle looms can operate at speeds of about 150 to 200 picks per minute. With each weaving operation, the newly constructed fabric must be wound on a cloth beam. This process is called taking up. At the same time, the warp yarns must be let off or released from the warp beams. To become fully automatic, a loom needs a filling stop motion which will brake the loom if the weft thread breaks. Operation Operation of weaving in a textile mill is undertaken by a specially trained operator known as a weaver. Weavers are expected to uphold high industry standards and are tasked with monitoring anywhere from ten to as many as thirty separate looms at any one time. 
During their operating shift, weavers will first utilize a wax pencil or crayon to sign their initials onto the cloth to mark a shift change, and then walk along the cloth side (front) of the looms they tend, gently touching the fabric as it comes from the reed. This is done to feel for any broken "picks" or filler thread. Should broken picks be detected, the weaver will disable the machine and undertake to correct the error, typically by replacing the bobbin of filler thread in as little time as possible. They are trained that, ideally, no machine should stop working for more than one minute, with faster turnaround times being preferred. History The first ideas for an automatic loom were put forward in 1784 by M. de Gennes in Paris and by Vaucanson in 1745, but these designs were never developed and were forgotten. In 1785 Edmund Cartwright patented a power loom which used water power to speed up the weaving process, the predecessor to the modern power loom. His ideas were licensed first by Grimshaw of Manchester, who built a small steam-powered weaving factory in Manchester in 1790, but the factory burnt down. Cartwright's was not a commercially successful machine; his looms had to be stopped to dress the warp. Over the next decades, Cartwright's ideas were modified into a reliable automatic loom. These designs followed John Kay's invention of the flying shuttle, and they passed the shuttle through the shed using levers. With the increased speed of weaving, weavers were able to use more thread than spinners could produce. Series of initial inventors A series of inventors incrementally improved all aspects of the three principal processes and the ancillary processes. Grimshaw of Manchester (1790): dressing the warp Austin (1789, 1790): dressing the warp, 200 looms produced for Monteith of Pollockshaws 1800 Thomas Johnson of Bredbury (1803): dressing frame, factory for 200 steam looms in Manchester in 1806, and two factories at Stockport in 1809. One at Westhoughton, Lancashire (1809). William Radcliffe of Stockport (1802): improved take up mechanism John Todd of Burnley (1803): a heald roller and new shedding arrangements, the healds were corded to treadles actuated by cams on the second shaft. William Horrocks of Stockport (1803): The frame was still wooden but the lathe was pendant from the frame and operated by cams on the first shaft, the shedding was operated by cams on the second shaft, the take up motion was copied from Radcliffe. Peter Marsland (1806): improvements to the lathe motion to counteract poor picking William Cotton (1810): improvements to the letting off motion William Horrocks (1813): Horrocks loom, modifications to the lathe motion, improving on Marsland Peter Ewart (1813): a use of pneumatics Joseph and Peter Taylor (1815): double beat foot lathe for heavy cloths Paul Moody (1815): produced the first power loom in North America. Exporting a UK loom would have been illegal. John Capron and Sons (1820): installed the first power looms for woolens in North America at Uxbridge, Massachusetts. William Horrocks (1821): a system to wet the warp and weft during use, improving the effectiveness of the sizing Richard Roberts (1830): Roberts Loom, these improvements were a geared take up wheel and tappets to operate multiple heddles Stanford, Pritchard and Wilkinson: patented a method to stop on the break of weft or warp. It was not used. 
William Dickinson of Blackburn: Blackburn Loom, the modern overpick Further useful improvements There now appeared a series of useful improvements that were contained in patents for otherwise useless devices Hornby, Kenworthy and Bullough of Blackburn (1834): the vibrating or fly reed John Ramsbottom and Richard Holt of Todmorden (1834): a new automatic weft stopping motion James Bullough of Blackburn (1835): improved automatic weft stopping motion and taking up and letting off arrangements Andrew Parkinson (1836): improved stretcher (temple). William Kenworthy and James Bullough (1841): trough and roller temple (became the standard), a simple stop-motion. At this point the loom has become automatic except for refilling weft pirns. The Cartwright loom weaver could work one loom at 120–130 picks per minute; with Kenworthy and Bullough's Lancashire Loom, a weaver could run four or more looms working at 220–260 picks per minute, thus giving eight (or more) times the throughput. James Henry Northrop (1894) invented a self-threading shuttle and shuttle spring jaws to hold a bobbin by means of rings on the butt. This paved the way to his automatic filling and changing battery of 1891, the basic feature of the Northrop Loom. The principal advantage of the Northrop loom was that it was fully automatic; when a warp thread broke, the loom stopped until it was fixed. When the shuttle ran out of thread, Northrop's mechanism ejected the depleted pirn and loaded a new full one without stopping. A loom operative could work 16 or more looms whereas previously they could only operate eight. Thus, the labor cost was halved. Mill owners had to decide whether the labor saving was worth the capital investment in a new loom. In all 700,000 looms were sold. By 1914, Northrop looms made up 40% of American looms. Northrop was responsible for several hundred weaving related patents. Looms and the Manchester context The development of the power loom in and around Manchester was not a coincidence. Manchester had been a centre for fustians by 1620 and acted as a hub for other Lancashire towns, so developing a communication network with them. It was an established point of export using the meandering River Mersey, and by 1800 it had a thriving canal network, with links to the Ashton Canal, Rochdale Canal, the Peak Forest Canal and Manchester Bolton & Bury Canal. The fustian trade gave the towns a skilled workforce that was used to the complicated Dutch looms, and was perhaps accustomed to industrial discipline. While Manchester became a spinning town, the towns around were weaving towns producing cloth by the putting-out system. The business was dominated by a few families, who had the capital needed to invest in new mills and to buy hundreds of looms. Mills were built along the new canals, so immediately had access to their markets. Spinning developed first and, until 1830, when the roles reversed, the handloom was still more important economically than the power loom. Adoption The number of power looms in the UK grew from 2,400 in 1803 to 14,650 in 1820, 55,500 in 1829, 100,000 in 1833, and 250,000 in 1857. Draper's strategy was to standardize on a couple of Northrop Loom models which it mass-produced. The lighter E-model of 1909 was joined in 1930 by the heavier X-model. 
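The throughput comparison quoted earlier in this section (one hand-tended loom versus four Lancashire looms per weaver) can be checked with simple arithmetic. The sketch below just takes midpoints of the quoted pick rates, so the ratio is indicative rather than a historical measurement.

```python
# Quick arithmetic check of the throughput comparison quoted above:
# one Cartwright-era loom at ~120-130 picks per minute versus a weaver
# running four Lancashire looms at ~220-260 picks per minute each.

cartwright_picks_per_min = 125          # midpoint of the 120-130 range
lancashire_picks_per_min = 240          # midpoint of the 220-260 range
looms_per_weaver = 4                    # "four or more looms"

old_output = 1 * cartwright_picks_per_min
new_output = looms_per_weaver * lancashire_picks_per_min

print(f"picks per minute per weaver: {old_output} -> {new_output}")
print(f"throughput ratio: ~{new_output / old_output:.1f}x")   # roughly eight times
```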
Continuous fibre machines, say for rayon, which was more break-prone, needed a specialist loom. This was provided by the purchase of the Stafford Loom Co. in 1932, and using their patents a third loom, the XD, was added to the range. Because of their mass production techniques they were reluctant and slow to retool for new technologies such as shuttleless looms. Decline and reinvention Originally, power looms used a shuttle to throw the weft across, but in 1927 the faster and more efficient shuttleless loom came into use. Sulzer Brothers, a Swiss company, had the exclusive rights to shuttleless looms in 1942, and licensed the American production to Warner & Swasey. Draper licensed the slower rapier loom. Today, advances in technology have produced a variety of looms designed to maximise production for specific types of material. The most common of these are Sulzer shuttleless weaving machines, rapier looms, air-jet looms and water-jet looms. Social and economic implications Power looms reduced demand for skilled handweavers, initially causing reduced wages and unemployment. Protests followed their introduction. For example, in 1816 two thousand rioting Calton weavers tried to destroy power loom mills and stoned the workers. In the longer term, by making cloth more affordable the power loom increased demand and stimulated exports, causing a growth in industrial employment, albeit low-paid. The power loom also opened up opportunities for women mill workers. A darker side of the power loom's impact was the growth of employment of children in power loom mills. Dangers There are a number of inherent dangers in the machines, to which inattentive or poorly trained weavers can fall victim. The most obvious are the moving reed, the frames which hold the heddles, and the "pinch" or "sand" roll utilized to keep the cloth tight as it passes over the front of the machine and onto the doff roll. The most common injury in weaving is pinched fingers from distracted or bored workers, though this is not the only such injury found. There are numerous accounts of weavers with long hair getting it tangled in the warp itself and having their scalp pulled away from the skull, or large chunks of hair pulled off. As a result of this, it has become industry standard for companies to require weavers to either keep hair up and tied, or to keep their hair short so as not to allow it to become tangled. Also, due to possible pinch points on the front of machines, loose, baggy clothing is prohibited. In addition, there is a risk of the shuttle flying out of the loom at high speed (200+ mph / 322+ km/h) and striking a worker if the moving reed encounters a thread/yarn or other mechanical jam/error. One complication for weavers, in terms of safety, is the loud environment in which weave mills operate (115 dB+). Because of this, it is nearly impossible to hear a person calling for help when entangled. This has led OSHA to outline specific guidelines for companies to mitigate the chances of such accidents occurring. However, even with such guidelines in place, injuries in textile production due to the machines themselves are still commonplace.
Technology
Weaving
null
527046
https://en.wikipedia.org/wiki/Hyperfine%20structure
Hyperfine structure
In atomic physics, hyperfine structure is defined by small shifts in otherwise degenerate electronic energy levels and the resulting splittings in those electronic energy levels of atoms, molecules, and ions, due to electromagnetic multipole interaction between the nucleus and electron clouds. In atoms, hyperfine structure arises from the energy of the nuclear magnetic dipole moment interacting with the magnetic field generated by the electrons and the energy of the nuclear electric quadrupole moment in the electric field gradient due to the distribution of charge within the atom. Molecular hyperfine structure is generally dominated by these two effects, but also includes the energy associated with the interaction between the magnetic moments associated with different magnetic nuclei in a molecule, as well as between the nuclear magnetic moments and the magnetic field generated by the rotation of the molecule. Hyperfine structure contrasts with fine structure, which results from the interaction between the magnetic moments associated with electron spin and the electrons' orbital angular momentum. Hyperfine structure, with energy shifts typically orders of magnitude smaller than those of a fine-structure shift, results from the interactions of the nucleus (or nuclei, in molecules) with internally generated electric and magnetic fields. History The first theory of atomic hyperfine structure was given in 1930 by Enrico Fermi for an atom containing a single valence electron with an arbitrary angular momentum. The Zeeman splitting of this structure was discussed by S. A. Goudsmit and R. F. Bacher later that year. In 1935, H. Schüler and Theodor Schmidt proposed the existence of a nuclear quadrupole moment in order to explain anomalies in the hyperfine structure of europium, cassiopeium (an older name for lutetium), indium, antimony, and mercury. Theory The theory of hyperfine structure comes directly from electromagnetism, consisting of the interaction of the nuclear multipole moments (excluding the electric monopole) with internally generated fields. The theory is derived first for the atomic case, but can be applied to each nucleus in a molecule. Following this there is a discussion of the additional effects unique to the molecular case. Atomic hyperfine structure Magnetic dipole The dominant term in the hyperfine Hamiltonian is typically the magnetic dipole term. Atomic nuclei with a non-zero nuclear spin have a magnetic dipole moment, given by: where is the g-factor and is the nuclear magneton. There is an energy associated with a magnetic dipole moment in the presence of a magnetic field. For a nuclear magnetic dipole moment, μI, placed in a magnetic field, B, the relevant term in the Hamiltonian is given by: In the absence of an externally applied field, the magnetic field experienced by the nucleus is that associated with the orbital (ℓ) and spin (s) angular momentum of the electrons: Electron orbital magnetic field Electron orbital angular momentum results from the motion of the electron about some fixed external point that we shall take to be the location of the nucleus. The magnetic field at the nucleus due to the motion of a single electron, with charge –e at a position r relative to the nucleus, is given by: where −r gives the position of the nucleus relative to the electron. 
Written in terms of the Bohr magneton, this gives: Recognizing that mev is the electron momentum, p, and that is the orbital angular momentum in units of ħ, ℓ, we can write: For a many-electron atom this expression is generally written in terms of the total orbital angular momentum, , by summing over the electrons and using the projection operator, , where . For states with a well defined projection of the orbital angular momentum, , we can write , giving: Electron spin magnetic field The electron spin angular momentum is a fundamentally different property that is intrinsic to the particle and therefore does not depend on the motion of the electron. Nonetheless, it is angular momentum and any angular momentum associated with a charged particle results in a magnetic dipole moment, which is the source of a magnetic field. An electron with spin angular momentum, s, has a magnetic moment, μs, given by: where gs is the electron spin g-factor and the negative sign is because the electron is negatively charged (consider that negatively and positively charged particles with identical mass, travelling on equivalent paths, would have the same angular momentum, but would result in currents in the opposite direction). The magnetic field of a point dipole moment, μs, is given by: Electron total magnetic field and contribution The complete magnetic dipole contribution to the hyperfine Hamiltonian is thus given by: The first term gives the energy of the nuclear dipole in the field due to the electronic orbital angular momentum. The second term gives the energy of the "finite distance" interaction of the nuclear dipole with the field due to the electron spin magnetic moments. The final term, often known as the Fermi contact term relates to the direct interaction of the nuclear dipole with the spin dipoles and is only non-zero for states with a finite electron spin density at the position of the nucleus (those with unpaired electrons in s-subshells). It has been argued that one may get a different expression when taking into account the detailed nuclear magnetic moment distribution. The inclusion of the delta function is an admission that the singularity in the magnetic induction B owing to a magnetic dipole moment at a point is not integrable. It is B which mediates the interaction between the Pauli spinors in non-relativistic quantum mechanics. Fermi (1930) avoided the difficulty by working with the relativistic Dirac wave equation, according to which the mediating field for the Dirac spinors is the four-vector potential (V,A). The component  V is the Coulomb potential. The component A is the three-vector magnetic potential (such that B = curl A), which for the point dipole is integrable. For states with this can be expressed in the form where: If hyperfine structure is small compared with the fine structure (sometimes called IJ-coupling by analogy with LS-coupling), I and J are good quantum numbers and matrix elements of can be approximated as diagonal in I and J. In this case (generally true for light elements), we can project N onto J (where is the total electronic angular momentum) and we have: This is commonly written as with being the hyperfine-structure constant which is determined by experiment. Since (where is the total angular momentum), this gives an energy of: In this case the hyperfine interaction satisfies the Landé interval rule. Electric quadrupole Atomic nuclei with spin have an electric quadrupole moment. 
In the general case this is represented by a rank-2 tensor, , with components given by: where i and j are the tensor indices running from 1 to 3, xi and xj are the spatial variables x, y and z depending on the values of i and j respectively, δij is the Kronecker delta and ρ(r) is the charge density. Being a 3-dimensional rank-2 tensor, the quadrupole moment has 32 = 9 components. From the definition of the components it is clear that the quadrupole tensor is a symmetric matrix () that is also traceless (), giving only five components in the irreducible representation. Expressed using the notation of irreducible spherical tensors we have: The energy associated with an electric quadrupole moment in an electric field depends not on the field strength, but on the electric field gradient, confusingly labelled , another rank-2 tensor given by the outer product of the del operator with the electric field vector: with components given by: Again it is clear this is a symmetric matrix and, because the source of the electric field at the nucleus is a charge distribution entirely outside the nucleus, this can be expressed as a 5-component spherical tensor, , with: where: The quadrupolar term in the Hamiltonian is thus given by: A typical atomic nucleus closely approximates cylindrical symmetry and therefore all off-diagonal elements are close to zero. For this reason the nuclear electric quadrupole moment is often represented by . Molecular hyperfine structure The molecular hyperfine Hamiltonian includes those terms already derived for the atomic case with a magnetic dipole term for each nucleus with and an electric quadrupole term for each nucleus with . The magnetic dipole terms were first derived for diatomic molecules by Frosch and Foley, and the resulting hyperfine parameters are often called the Frosch and Foley parameters. In addition to the effects described above, there are a number of effects specific to the molecular case. Direct nuclear spin–spin Each nucleus with has a non-zero magnetic moment that is both the source of a magnetic field and has an associated energy due to the presence of the combined field of all of the other nuclear magnetic moments. A summation over each magnetic moment dotted with the field due to each other magnetic moment gives the direct nuclear spin–spin term in the hyperfine Hamiltonian, . where α and α are indices representing the nucleus contributing to the energy and the nucleus that is the source of the field respectively. Substituting in the expressions for the dipole moment in terms of the nuclear angular momentum and the magnetic field of a dipole, both given above, we have Nuclear spin–rotation The nuclear magnetic moments in a molecule exist in a magnetic field due to the angular momentum, T (R is the internuclear displacement vector), associated with the bulk rotation of the molecule, thus Small molecule hyperfine structure A typical simple example of the hyperfine structure due to the interactions discussed above is in the rotational transitions of hydrogen cyanide (1H12C14N) in its ground vibrational state. Here, the electric quadrupole interaction is due to the 14N-nucleus, the hyperfine nuclear spin-spin splitting is from the magnetic coupling between nitrogen, 14N (IN = 1), and hydrogen, 1H (IH = ), and a hydrogen spin-rotation interaction due to the 1H-nucleus. These contributing interactions to the hyperfine structure in the molecule are listed here in descending order of influence. 
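The component formula for the quadrupole tensor did not survive extraction; in its standard form it reads Qij = ∫ ρ(r)(3xixj − r²δij) d³r. The following sketch builds the discrete analogue of that definition for a toy charge distribution stretched along z (the charges and positions are invented purely for illustration, not nuclear data) and confirms the symmetry and tracelessness claimed above, as well as the reduction to a single number Qzz under cylindrical symmetry.

```python
# Discrete quadrupole tensor Q_ij = sum_k q_k * (3*x_i*x_j - r^2 * delta_ij)
# for a toy prolate (stretched-along-z) charge distribution.
# Charges and positions are illustrative only.
import numpy as np

charges = [1.0] * 6
positions = np.array([
    [0.0, 0.0,  2.0],    # two charges far out on the z axis ...
    [0.0, 0.0, -2.0],
    [1.0, 0.0,  0.0],    # ... and a closer-in "ring" of four in the x-y plane
    [-1.0, 0.0, 0.0],
    [0.0, 1.0,  0.0],
    [0.0, -1.0, 0.0],
])

Q = np.zeros((3, 3))
for q, r in zip(charges, positions):
    Q += q * (3.0 * np.outer(r, r) - np.dot(r, r) * np.eye(3))

print(Q)                                            # diag(-6, -6, 12) for this toy case
print("symmetric:", np.allclose(Q, Q.T))            # Q_ij = Q_ji
print("traceless:", np.isclose(np.trace(Q), 0.0))   # Q_xx + Q_yy + Q_zz = 0
# With cylindrical symmetry the off-diagonal elements vanish and
# Q_xx = Q_yy = -Q_zz / 2, which is why a single number (Q_zz) suffices.
```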
Sub-doppler techniques have been used to discern the hyperfine structure in HCN rotational transitions. The dipole selection rules for HCN hyperfine structure transitions are , , where is the rotational quantum number and is the total rotational quantum number inclusive of nuclear spin (), respectively. The lowest transition () splits into a hyperfine triplet. Using the selection rules, the hyperfine pattern of transition and higher dipole transitions is in the form of a hyperfine sextet. However, one of these components () carries only 0.6% of the rotational transition intensity in the case of . This contribution drops for increasing J. So, from upwards the hyperfine pattern consists of three very closely spaced stronger hyperfine components (, ) together with two widely spaced components; one on the low frequency side and one on the high frequency side relative to the central hyperfine triplet. Each of these outliers carry ~ ( is the upper rotational quantum number of the allowed dipole transition) the intensity of the entire transition. For consecutively higher- transitions, there are small but significant changes in the relative intensities and positions of each individual hyperfine component. Measurements Hyperfine interactions can be measured, among other ways, in atomic and molecular spectra and in electron paramagnetic resonance spectra of free radicals and transition-metal ions. Applications Astrophysics As the hyperfine splitting is very small, the transition frequencies are usually not located in the optical, but are in the range of radio- or microwave (also called sub-millimeter) frequencies. Hyperfine structure gives the 21 cm line observed in H I regions in interstellar medium. Carl Sagan and Frank Drake considered the hyperfine transition of hydrogen to be a sufficiently universal phenomenon so as to be used as a base unit of time and length on the Pioneer plaque and later Voyager Golden Record. In submillimeter astronomy, heterodyne receivers are widely used in detecting electromagnetic signals from celestial objects such as star-forming core or young stellar objects. The separations among neighboring components in a hyperfine spectrum of an observed rotational transition are usually small enough to fit within the receiver's IF band. Since the optical depth varies with frequency, strength ratios among the hyperfine components differ from that of their intrinsic (or optically thin) intensities (these are so-called hyperfine anomalies, often observed in the rotational transitions of HCN). Thus, a more accurate determination of the optical depth is possible. From this we can derive the object's physical parameters. Nuclear spectroscopy In nuclear spectroscopy methods, the nucleus is used to probe the local structure in materials. The methods mainly base on hyperfine interactions with the surrounding atoms and ions. Important methods are nuclear magnetic resonance, Mössbauer spectroscopy, and perturbed angular correlation. Nuclear technology The atomic vapor laser isotope separation (AVLIS) process uses the hyperfine splitting between optical transitions in uranium-235 and uranium-238 to selectively photo-ionize only the uranium-235 atoms and then separate the ionized particles from the non-ionized ones. Precisely tuned dye lasers are used as the sources of the necessary exact wavelength radiation. 
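As a worked example connecting the magnetic-dipole formalism above to the 21 cm line mentioned in the astrophysics discussion, the sketch below evaluates the IJ-coupling energy EF = (A/2)[F(F+1) − I(I+1) − J(J+1)] for the hydrogen ground state (I = J = 1/2). The hyperfine constant A ≈ 1420.4 MHz is the standard measured value for hydrogen, quoted here as an external reference value rather than taken from this article.

```python
# Hyperfine levels E_F = (A/2)*[F(F+1) - I(I+1) - J(J+1)] in the IJ-coupling
# limit, evaluated for hydrogen 1s (I = J = 1/2).  A is the measured hydrogen
# hyperfine constant (an external reference value, assumed here).

C = 299_792_458.0                 # speed of light, m/s

def hyperfine_energy(A, I, J, F):
    """Energy of the F level, in the same units as A."""
    return 0.5 * A * (F * (F + 1) - I * (I + 1) - J * (J + 1))

A_H = 1420.405751e6               # hydrogen ground-state hyperfine constant, Hz
I = J = 0.5
E0 = hyperfine_energy(A_H, I, J, F=0)
E1 = hyperfine_energy(A_H, I, J, F=1)

split = E1 - E0                   # Lande interval rule: E_F - E_{F-1} = A*F
print(f"F=1 - F=0 splitting: {split / 1e6:.3f} MHz")
print(f"wavelength: {100 * C / split:.1f} cm")      # the 21 cm line
# The splitting is tiny on atomic scales (~6 micro-eV) because the nuclear
# magneton is smaller than the Bohr magneton by roughly m_e/m_p ~ 1/1836.
```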
Use in defining the SI second and meter The hyperfine structure transition can be used to make a microwave notch filter with very high stability, repeatability and Q factor, which can thus be used as a basis for very precise atomic clocks. The term transition frequency denotes the frequency of radiation corresponding to the transition between the two hyperfine levels of the atom, and is equal to ν = ΔE/h, where ΔE is the difference in energy between the levels and h is the Planck constant. Typically, the transition frequency of a particular isotope of caesium or rubidium atoms is used as a basis for these clocks. Due to the accuracy of hyperfine structure transition-based atomic clocks, they are now used as the basis for the definition of the second. One second is now defined to be exactly 9,192,631,770 cycles of the hyperfine structure transition frequency of caesium-133 atoms. On October 21, 1983, the 17th CGPM defined the meter as the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second. Precision tests of quantum electrodynamics The hyperfine splittings in hydrogen and in muonium have been used to measure the value of the fine-structure constant α. Comparison with measurements of α in other physical systems provides a stringent test of QED. Qubit in ion-trap quantum computing The hyperfine states of a trapped ion are commonly used for storing qubits in ion-trap quantum computing. They have the advantage of having very long lifetimes, experimentally exceeding ~10 minutes (compared to ~1 s for metastable electronic levels). The frequency associated with the states' energy separation is in the microwave region, making it possible to drive hyperfine transitions using microwave radiation. However, at present no emitter is available that can be focused to address a particular ion from a sequence. Instead, a pair of laser pulses can be used to drive the transition, by having their frequency difference (detuning) equal to the required transition's frequency. This is essentially a stimulated Raman transition. In addition, near-field gradients have been exploited to individually address two ions separated by approximately 4.3 micrometers directly with microwave radiation.
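To make the defining numbers above concrete, the short sketch below uses the exact SI values of the caesium hyperfine frequency, the speed of light, and the Planck constant; these are standard defined constants, not figures unique to this article.

```python
# The SI second and metre in terms of the caesium-133 hyperfine transition.
# All three constants below are exact by definition in the SI.

NU_CS = 9_192_631_770        # Cs-133 hyperfine transition frequency, Hz
C     = 299_792_458          # speed of light, m/s
H     = 6.62607015e-34       # Planck constant, J*s

# One second is the duration of NU_CS cycles of this radiation;
# one metre is the distance light travels in 1/C of a second.
period = 1 / NU_CS
print(f"one clock cycle lasts about {period:.3e} s")
print(f"photon energy h*nu = {H * NU_CS:.3e} J "
      f"(~{H * NU_CS / 1.602176634e-19 * 1e6:.1f} micro-eV)")
print(f"transition wavelength = {C / NU_CS * 100:.2f} cm")   # ~3.26 cm, microwave
```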
Physical sciences
Atomic physics
Physics
527232
https://en.wikipedia.org/wiki/Hydrostatics
Hydrostatics
Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at hydrostatic equilibrium and "the pressure in a fluid or exerted by a fluid on an immersed body". It encompasses the study of the conditions under which fluids are at rest in stable equilibrium as opposed to fluid dynamics, the study of fluids in motion. Hydrostatics is a subcategory of fluid statics, which is the study of all fluids, both compressible or incompressible, at rest. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to geophysics and astrophysics (for example, in understanding plate tectonics and the anomalies of the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields. Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of still water is always level according to the curvature of the earth. History Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity, by the builders of boats, cisterns, aqueducts and fountains. Archimedes is credited with the discovery of Archimedes' Principle, which relates the buoyancy force on an object that is submerged in a fluid to the weight of fluid displaced by the object. The Roman engineer Vitruvius warned readers about lead pipes bursting under hydrostatic pressure. The concept of pressure and the way it is transmitted by fluids was formulated by the French mathematician and philosopher Blaise Pascal in 1647. Hydrostatics in ancient Greece and Rome Pythagorean Cup The "fair cup" or Pythagorean cup, which dates from about the 6th century BC, is a hydraulic technology whose invention is credited to the Greek mathematician and geometer Pythagoras. It was used as a learning tool. The cup consists of a line carved into the interior of the cup, and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid will overflow into the pipe in the center of the cup. Due to the drag that molecules exert on one another, the cup will be emptied. Heron's fountain Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The fountain is constructed in such a way that the height of the jet exceeds the height of the fluid in the reservoir, apparently in violation of principles of hydrostatic pressure. The device consisted of an opening and two containers arranged one above the other. The intermediate pot, which was sealed, was filled with fluid, and several cannula (a small tube for transferring fluid between vessels) connecting the various vessels. Trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir. Pascal's contribution in hydrostatics Pascal made contributions to developments in both hydrostatics and hydrodynamics. 
Pascal's Law is a fundamental principle of fluid mechanics that states that any pressure applied to the surface of a fluid is transmitted uniformly throughout the fluid in all directions, in such a way that initial variations in pressure are not changed. Pressure in fluids at rest Due to the fundamental nature of fluids, a fluid cannot remain at rest under the presence of a shear stress. However, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal. If this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic; i.e., it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes; i.e., a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. This principle was first formulated, in a slightly extended form, by Blaise Pascal, and is now called Pascal's law. Hydrostatic pressure In a fluid at rest, all frictional and inertial stresses vanish and the state of stress of the system is called hydrostatic. When this condition of is applied to the Navier–Stokes equations for viscous fluids or Euler equations (fluid dynamics) for ideal inviscid fluid, the gradient of pressure becomes a function of body forces only. The Navier-Stokes momentum equations are: By setting the flow velocity , they become simply: or: This is the general form of Stevin's law: the pressure gradient equals the body force force density field. Let us now consider two particular cases of this law. In case of a conservative body force with scalar potential : the Stevin equation becomes: That can be integrated to give: So in this case the pressure difference is the opposite of the difference of the scalar potential associated to the body force. In the other particular case of a body force of constant direction along z: the generalised Stevin's law above becomes: That can be integrated to give another (less-) generalised Stevin's law: where: is the hydrostatic pressure (Pa), is the fluid density (kg/m3), is gravitational acceleration (m/s2), is the height (parallel to the direction of gravity) of the test area (m), is the height of the zero reference point of the pressure (m) is the hydrostatic pressure field (Pa) along x and y at the zero reference point For water and other liquids, this integral can be simplified significantly for many practical applications, based on the following two assumptions. Since many liquids can be considered incompressible, a reasonable good estimation can be made from assuming a constant density throughout the liquid. The same assumption cannot be made within a gaseous environment. Also, since the height of the fluid column between and is often reasonably small compared to the radius of the Earth, one can neglect the variation of . Under these circumstances, one can transport out of the integral the density and the gravity acceleration and the law is simplified into the formula where is the height of the liquid column between the test volume and the zero reference point of the pressure. This formula is often called Stevin's law. 
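Since the symbols in the statement of Stevin's law above did not survive extraction, here is a minimal numerical sketch of the simplified constant-density form p = p0 + ρgh; the depths and surface pressure are arbitrary illustrative values.

```python
# Stevin's law in its simplified, constant-density form: p = p0 + rho*g*h.
# Depths and surface pressure below are illustrative values.

RHO_WATER = 1000.0      # kg/m^3, incompressible-liquid assumption
G         = 9.80665     # m/s^2, standard gravity
P_ATM     = 101_325.0   # Pa, standard atmosphere taken as the surface pressure

def hydrostatic_pressure(depth_m, p_surface=P_ATM, rho=RHO_WATER, g=G):
    """Absolute pressure at a given depth below a free liquid surface."""
    return p_surface + rho * g * depth_m

for depth in (1.0, 10.0, 100.0):
    p = hydrostatic_pressure(depth)
    print(f"{depth:6.1f} m: {p / 1000:8.1f} kPa  ({p / P_ATM:.2f} atm)")
# At ~10 m of water the gauge pressure roughly equals one extra atmosphere,
# which is why the absolute pressure there is about 2 atm.
```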
One could arrive to the above formula also by considering the first particular case of the equation for a conservative body force field: in fact the body force field of uniform intensity and direction: is conservative, so one can write the body force density as: Then the body force density has a simple scalar potential: And the pressure difference follows another time the Stevin's law: The reference point should lie at or below the surface of the liquid. Otherwise, one has to split the integral into two (or more) terms with the constant and . For example, the absolute pressure compared to vacuum is where is the total height of the liquid column above the test area to the surface, and is the atmospheric pressure, i.e., the pressure calculated from the remaining integral over the air column from the liquid surface to infinity. This can easily be visualized using a pressure prism. Hydrostatic pressure has been used in the preservation of foods in a process called pascalization. Medicine In medicine, hydrostatic pressure in blood vessels is the pressure of the blood against the wall. It is the opposing force to oncotic pressure. In capillaries, hydrostatic pressure (also known as capillary blood pressure) is higher than the opposing “colloid osmotic pressure” in blood—a “constant” pressure primarily produced by circulating albumin—at the arteriolar end of the capillary. This pressure forces plasma and nutrients out of the capillaries and into surrounding tissues. Fluid and the cellular wastes in the tissues enter the capillaries at the venule end, where the hydrostatic pressure is less than the osmotic pressure in the vessel. Atmospheric pressure Statistical mechanics shows that, for a pure ideal gas of constant temperature in a gravitational field, T, its pressure, p will vary with height, h, as where is the acceleration due to gravity is the absolute temperature is Boltzmann constant is the molecular mass of the gas is the pressure is the height This is known as the barometric formula, and may be derived from assuming the pressure is hydrostatic. If there are multiple types of molecules in the gas, the partial pressure of each type will be given by this equation. Under most conditions, the distribution of each species of gas is independent of the other species. Buoyancy Any body of arbitrary shape which is immersed, partly or fully, in a fluid will experience the action of a net force in the opposite direction of the local pressure gradient. If this pressure gradient arises from gravity, the net force is in the vertical direction opposite that of the gravitational force. This vertical force is termed buoyancy or buoyant force and is equal in magnitude, but opposite in direction, to the weight of the displaced fluid. Mathematically, where is the density of the fluid, is the acceleration due to gravity, and is the volume of fluid directly above the curved surface. In the case of a ship, for instance, its weight is balanced by pressure forces from the surrounding water, allowing it to float. If more cargo is loaded onto the ship, it would sink more into the water – displacing more water and thus receive a higher buoyant force to balance the increased weight. Discovery of the principle of buoyancy is attributed to Archimedes. 
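The barometric formula and the buoyancy relation above both lost their symbols in extraction; the sketch below evaluates the standard forms p(h) = p0·exp(−mgh/kBT) for an isothermal ideal atmosphere and Fb = ρgV for the displaced fluid. The temperature, altitudes and displaced volume are illustrative choices, and a real atmosphere is of course not isothermal.

```python
# Isothermal barometric formula p(h) = p0 * exp(-m*g*h / (k_B*T)) and the
# buoyant force F_b = rho_fluid * g * V_displaced.
# Temperature, altitudes and displaced volume are illustrative values.
import math

G     = 9.80665                        # m/s^2
K_B   = 1.380649e-23                   # J/K (exact)
M_AIR = 28.97e-3 / 6.02214076e23       # mean mass of one "air molecule", kg
P0    = 101_325.0                      # Pa, sea-level pressure
T     = 288.0                          # K, assumed constant with height

def pressure_at(h):
    return P0 * math.exp(-M_AIR * G * h / (K_B * T))

for h in (0.0, 1500.0, 5500.0, 8848.0):
    print(f"h = {h:7.0f} m  ->  p = {pressure_at(h) / 1000:6.1f} kPa")

# Buoyancy: a 1 m^3 body fully submerged in water displaces 1000 kg of water.
rho_water, V = 1000.0, 1.0
F_b = rho_water * G * V
print(f"buoyant force on 1 m^3 in water: {F_b / 1000:.2f} kN (~ weight of 1 tonne)")
```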
Hydrostatic force on submerged surfaces The horizontal and vertical components of the hydrostatic force acting on a submerged surface are given by Fh = pc A and Fv = ρgV, where pc is the pressure at the centroid of the vertical projection of the submerged surface, A is the area of the same vertical projection of the surface, ρ is the density of the fluid, g is the acceleration due to gravity, and V is the volume of fluid directly above the curved surface. Liquids (fluids with free surfaces) Liquids can have free surfaces at which they interface with gases, or with a vacuum. In general, the lack of the ability to sustain a shear stress entails that free surfaces rapidly adjust towards an equilibrium. However, on small length scales, there is an important balancing force from surface tension. Capillary action When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important, leading to the formation of a meniscus through capillary action. This capillary action has profound consequences for biological systems as it is part of one of the two driving mechanisms of the flow of water in plant xylem, the transpirational pull. Hanging drops Without surface tension, drops would not be able to form. The dimensions and stability of drops are determined by surface tension. The drop's surface tension is directly proportional to the cohesion property of the fluid.
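A worked number for the two component formulas just given (Fh = pc·A on the vertical projection and Fv = ρgV for the fluid directly above the surface): a flat vertical gate, for which only the horizontal component is non-zero, and a horizontal plate carrying the weight of the water column above it. The dimensions and depths are invented for illustration.

```python
# Hydrostatic force components on submerged surfaces:
#   F_h = p_c * A      (p_c = gauge pressure at the centroid of the vertical projection)
#   F_v = rho * g * V  (V = volume of fluid directly above the surface)
# Gate size, plate size and depths are illustrative.

RHO, G = 1000.0, 9.80665        # water density kg/m^3, gravity m/s^2

# Vertical rectangular gate, 2 m wide x 3 m tall, top edge 1 m below the surface.
width, height, top_depth = 2.0, 3.0, 1.0
area       = width * height
centroid_d = top_depth + height / 2            # depth of the centroid: 2.5 m
F_h = (RHO * G * centroid_d) * area            # pressure at centroid x projected area
print(f"horizontal force on the gate: {F_h / 1000:.1f} kN")

# Horizontal plate of 4 m^2 lying at 5 m depth: the vertical force is just
# the weight of the water column standing on it.
plate_area, depth = 4.0, 5.0
F_v = RHO * G * (plate_area * depth)
print(f"vertical force on the plate:  {F_v / 1000:.1f} kN")
```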
Physical sciences
Fluid mechanics
Physics
527453
https://en.wikipedia.org/wiki/Flowchart
Flowchart
A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task. The flowchart shows the steps as boxes of various kinds, and their order by connecting the boxes with arrows. This diagrammatic representation illustrates a solution model to a given problem. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields. Overview Flowcharts are used to design and document simple processes or programs. Like other types of diagrams, they help visualize the process. Two of the many benefits are flaws and bottlenecks may become apparent. Flowcharts typically use the following main symbols: A process step, usually called an activity, is denoted by a rectangular box. A decision is usually denoted by a diamond. A flowchart is described as "cross-functional" when the chart is divided into different vertical or horizontal parts, to describe the control of different organizational units. A symbol appearing in a particular part is within the control of that organizational unit. A cross-functional flowchart allows the author to correctly locate the responsibility for performing an action or making a decision, and to show the responsibility of each organizational unit for different parts of a single process. Flowcharts represent certain aspects of processes and are usually complemented by other types of diagram. For instance, Kaoru Ishikawa defined the flowchart as one of the seven basic tools of quality control, next to the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, and the scatter diagram. Similarly, in UML, a standard concept-modeling notation used in software development, the activity diagram, which is a type of flowchart, is just one of many different diagram types. Nassi-Shneiderman diagrams and Drakon-charts are an alternative notation for process flow. Common alternative names include: flow chart, process flowchart, functional flowchart, process map, process chart, functional process chart, business process model, process model, process flow diagram, work flow diagram, business flow diagram. The terms "flowchart" and "flow chart" are used interchangeably. The underlying graph structure of a flowchart is a flow graph, which abstracts away node types, their contents and other ancillary information. History The first structured method for documenting process flow, the "flow process chart", was introduced by Frank and Lillian Gilbreth in the presentation "Process Charts: First Steps in Finding the One Best Way to do Work", to members of the American Society of Mechanical Engineers (ASME) in 1921. The Gilbreths' tools quickly found their way into industrial engineering curricula. In the early 1930s, an industrial engineer, Allan H. Mogensen began to train business people in the use of some of the tools of industrial engineering at his Work Simplification Conferences in Lake Placid, New York. Art Spinanger, a 1944 graduate of Mogensen's class, took the tools back to Procter and Gamble where he developed their Deliberate Methods Change Program. Ben S. Graham, another 1944 graduate, Director of Formcraft Engineering at Standard Register Industrial, applied the flow process chart to information processing with his development of the multi-flow process chart, to present multiple documents and their relationships. 
In 1947, ASME adopted a symbol set derived from Gilbreth's original work as the "ASME Standard: Operation and Flow Process Charts." Douglas Hartree in 1949 explained that Herman Goldstine and John von Neumann had developed a flowchart (originally, diagram) to plan computer programs. His contemporary account was endorsed by IBM engineers and by Goldstine's personal recollections. The original programming flowcharts of Goldstine and von Neumann can be found in their unpublished report, "Planning and coding of problems for an electronic computing instrument, Part II, Volume 1" (1947), which is reproduced in von Neumann's collected works. The flowchart became a popular tool for describing computer algorithms, but its popularity decreased in the 1970s, when interactive computer terminals and third-generation programming languages became common tools for computer programming, since algorithms can be expressed more concisely as source code in such languages. Often pseudo-code is used, which uses the common idioms of such languages without strictly adhering to the details of a particular one. Also, flowcharts are not well-suited for new programming techniques such as recursive programming. Nevertheless, flowcharts were still used in the early 21st century for describing computer algorithms. Some techniques such as UML activity diagrams and Drakon-charts can be considered to be extensions of the flowchart. Types Sterneckert (2003) suggested that flowcharts can be modeled from the perspective of different user groups (such as managers, system analysts and clerks), and that there are four general types: Document flowcharts, showing controls over a document-flow through a system Data flowcharts, showing controls over a data-flow in a system System flowcharts, showing controls at a physical or resource level Program flowchart, showing the controls in a program within a system Notice that every type of flowchart focuses on some kind of control, rather than on the particular flow itself. However, there are some different classifications. For example, Andrew Veronis (1978) named three basic types of flowcharts: the system flowchart, the general flowchart, and the detailed flowchart. That same year Marilyn Bohl (1978) stated "in practice, two kinds of flowcharts are used in solution planning: system flowcharts and program flowcharts...". More recently, Mark A. Fryman (2001) identified more differences: "Decision flowcharts, logic flowcharts, systems flowcharts, product flowcharts, and process flowcharts are just a few of the different types of flowcharts that are used in business and government". In addition, many diagram techniques are similar to flowcharts but carry a different name, such as UML activity diagrams. Reversible flowcharts represent a paradigm in computing that focuses on the reversibility of computational processes. Unlike traditional computing models, where operations are often irreversible, reversible flowcharts ensure that any atomic computational step can be reversed. Reversible flowcharts are shown to be as expressive as reversible Turing machines, and are a theoretical foundation for structured reversible programming and energy-efficient reversible computing systems. Building blocks Common symbols The American National Standards Institute (ANSI) set standards for flowcharts and their symbols in the 1960s. The International Organization for Standardization (ISO) adopted the ANSI symbols in 1970. The current standard, ISO 5807, was published in 1985 and last reviewed in 2019. 
Generally, flowcharts flow from top to bottom and left to right. Other symbols The ANSI/ISO standards include symbols beyond the basic shapes. Some are: Parallel processing Parallel Mode is represented by two horizontal lines at the beginning or ending of simultaneous operations. For parallel and concurrent processing, the Parallel Mode horizontal lines or a horizontal bar indicate the start or end of a section of processes that can be done independently: At a fork, the process creates one or more additional processes, indicated by a bar with one incoming path and two or more outgoing paths. At a join, two or more processes continue as a single process, indicated by a bar with several incoming paths and one outgoing path. All processes must complete before the single process continues. Diagramming software Any drawing program can be used to create flowchart diagrams, but these will have no underlying data model to share data with databases or other programs such as project management systems or spreadsheets. Many software packages exist that can create flowcharts automatically, either directly from programming language source code or from a flowchart description language. There are several applications and visual programming languages that use flowcharts to represent and execute programs. Generally these are used as teaching tools for beginner students.
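As a concrete illustration of producing a flowchart from a textual description rather than drawing it by hand, here is a minimal sketch using the third-party Python graphviz package; the package choice, node names and example process are arbitrary assumptions, since the article does not endorse any particular tool.

```python
# Minimal programmatic flowchart: rectangles for process steps, a diamond for
# the decision, rendered with the third-party `graphviz` package
# (pip install graphviz; the Graphviz system binaries must also be installed).
from graphviz import Digraph

flow = Digraph("approve_order", format="png")
flow.attr(rankdir="TB")                        # top-to-bottom flow

flow.node("start", "Receive order",   shape="box")
flow.node("check", "In stock?",       shape="diamond")
flow.node("ship",  "Ship order",      shape="box")
flow.node("back",  "Back-order item", shape="box")
flow.node("end",   "Notify customer", shape="box")

flow.edge("start", "check")
flow.edge("check", "ship", label="yes")
flow.edge("check", "back", label="no")
flow.edge("ship", "end")
flow.edge("back", "end")

flow.render("approve_order", cleanup=True)     # writes approve_order.png
```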
Technology
Software development: General
null
527639
https://en.wikipedia.org/wiki/Gemini%20Observatory
Gemini Observatory
The Gemini Observatory comprises two 8.1-metre (26.6 ft) telescopes, Gemini North and Gemini South, situated in Hawaii and Chile, respectively. These twin telescopes offer extensive coverage of the northern and southern skies and rank among the most advanced optical/infrared telescopes available to astronomers. (See List of largest optical reflecting telescopes). The observatory is owned and operated by the National Science Foundation (NSF) of the United States, the National Research Council of Canada, CONICYT of Chile, MCTI of Brazil, MCTIP of Argentina, and Korea Astronomy and Space Science Institute (KASI) of Republic of Korea. The NSF is the primary funding contributor, providing about 70% of the required resources. The Association of Universities for Research in Astronomy (AURA) manages the operations and maintenance of the observatory through a cooperative agreement with the NSF, acting as the Executive Agency on behalf of the international partners. NSF's NOIRLab is the US national center for ground-based, nighttime optical astronomy and operates Gemini as one of its programs. The Gemini telescopes are equipped with modern instruments and excel in optical and near-infrared performance. They utilize adaptive optics technology to counteract atmospheric blurring. Notably, Gemini leads in wide-field adaptive optics assisted infrared imaging and has recently commissioned the Gemini Planet Imager, enabling researchers to directly observe and study exoplanets with extreme faintness compared to their host stars. Gemini supports research across various domains of modern astronomy, including the Solar System, exoplanets, star formation and evolution, galaxy structure and dynamics, supermassive black holes, distant quasars, and the structure of the Universe on large scales. Previously, Australia and the United Kingdom were also involved in the Gemini Observatory partnership. However, the UK withdrew its funding at the end of 2012. In response, the observatory has significantly reduced operating costs, streamlined operations, and implemented energy-saving measures at both sites. Additionally, both telescopes are now operated remotely from Base Facility Operations centers located in Hilo, Hawaii, and La Serena, Chile. In 2018, KASI has signed an agreement to become a full participant of the Gemini Observatory. Overview The Gemini Observatory's international Headquarters and Northern Operations Center is located in Hilo, Hawaii at the University of Hawaii at Hilo University Park. The Southern Operations Center is located on the Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile. The "Gemini North" telescope, officially called the Frederick C. Gillett Gemini Telescope is located on Hawaii's Mauna Kea, along with many other telescopes. That location provides excellent viewing conditions due to the superb atmospheric conditions (stable, dry, and rarely cloudy) above the dormant volcano. It saw first light in 1999 and began scientific operations in 2000. The "Gemini South" telescope is located at over elevation on a mountain in the Chilean Andes called Cerro Pachón. Very dry air and negligible cloud cover make this another prime telescope location (again shared by several other observatories, including the Southern Astrophysical Research Telescope (SOAR) and Cerro Tololo Inter-American Observatory). Gemini South saw first light in 2000. 
Together, the two telescopes cover almost all of the sky except for two areas near the celestial poles: Gemini North cannot point north of declination +89 degrees, and Gemini South cannot point south of declination −89 degrees. Both Gemini telescopes employ a range of technologies to provide world-leading performance in optical and near-infrared astronomy, including laser guide stars, adaptive optics, multi conjugate adaptive optics, and multi-object spectroscopy. In addition, very high-quality infrared observations are possible due to the advanced protected silver coating applied to each telescope's mirrors, the small secondary mirrors in use (resulting in an f16 focal ratio), and the advanced ventilation systems installed at each site. History It is estimated that the two telescopes cost approximately US$187 million to construct, and a night on each Gemini telescope is worth tens of thousands of U.S. dollars. The two 8-meter mirror blanks, each weighing over , were fabricated from Corning's Ultra Low Expansion glass. Each blank was constructed by the fusing together of and subsequent sagging of a series of smaller hexagonal pieces. This work was performed at Corning's Canton Plant facility located in upstate New York. The blanks were then transported via ship to REOSC, located south of Paris for final grinding and polishing. One decision made during design to save money was eliminating the two Nasmyth platforms. This makes instruments like high resolution spectrographs and adaptive optics systems much more difficult to construct, due to the size and mass requirement inherent with Cassegrain instruments. A further challenge in designing large instruments is the requirement to have a specific mass and center-of-mass position to maintain the overall balance of the telescope. UK funding crisis In November 2007 it was announced that the UK's Science and Technology Facilities Council (STFC) had proposed that, to save £4 million annually, it would aim to leave the telescope's operating consortium. At a consortium meeting in January 2008, the conclusion was made that the UK would officially withdraw from the Gemini Partnership and the Gemini Observatory Agreement effective February 28, 2007. This decision significantly disrupted observatory budgets, and resulted in the cancellation of at least one instrument in development at that time, the Precision Radial Velocity Spectrograph. Since the reason for the UK breaking its part of the agreement seemed to be entirely financial, there was public outcry, including the "Save Astronomy" movement which asked citizens to speak up against the astronomy budget cuts. The UK rethought their decision to withdraw from Gemini, and requested reinstatement into the agreement, and were officially welcomed back on February 27, 2008. However, in December 2009 it was announced that the UK would indeed leave the Gemini partnership in 2012, as well as terminating several other international science partnerships, due to continuing funding limitations. Directorship The first director of Gemini was Matt Mountain, who after holding the post for eleven years left in September 2005 to become director of the Space Telescope Science Institute (STScI). He was succeeded by Jean-René Roy, who served for nine months, after which time Doug Simons held the directorship from June 2006 to May 2011. He in turn was succeeded by an interim appointment of the then-retired Fred Chaffee, former director of W. M. Keck Observatory. 
Chaffee was succeeded in August 2012 by Markus Kissler-Patig, who held the post until June 2017. Laura Ferrarese succeeded Kissler-Patig in July 2017 with an interim appointment. Jennifer Lotz took over as the directory on September 6, 2018, but left in 2024 to begin a 5 year appointment as Director of STScI. She was replaced by Scott Dahm as an interim director in January 2024. Governance and oversight The Observatory is governed by the Gemini Board, as defined by the Gemini International Agreement. The Board sets budgetary policy bounds for the Observatory and carries out broad oversight functions, with advice from a Science and Technology Advisory sub-Committee (the STAC) and a Finance sub-Committee. The U.S. holds six of the 13 voting seats on the Gemini Board. The U.S. members of the Board typically serve three year terms and are recruited and nominated by the National Science Foundation (NSF), which represents the US community in all aspects of Gemini operations and development. Gemini is currently managed by the Association of Universities for Research in Astronomy (AURA), Inc., on behalf of the partnership through an award from NSF. AURA has operated Gemini since its construction in the 1990s. NSF serves as the Executive Agency and acts on behalf of the international participants. NSF has one seat on the Gemini Board; an additional NSF staff member serves as the Executive Secretary to the board. Programmatic management is the responsibility of an NSF Program Officer. The Program Officer monitors operations and development activities at the Observatory, nominates U.S. scientists to Gemini advisory committees, conducts reviews on behalf of the partnership, and approves funding actions, reports, and contracts. Instrumentation Adaptive optics Both Gemini telescopes employ sophisticated state-of-the-art adaptive optics systems. Gemini-N routinely uses the ALTAIR system, built in Canada, which achieves a 30–45% Strehl ratio on a 22.5-arcsecond-square field and can feed NIRI, NIFS or GNIRS; it can use natural or laser guide stars. In conjunction with NIRI it was responsible for the discovery of HR8799b. At Gemini-S the Gemini Multi-Conjugate Adaptive Optics System (GeMS) may be used with the FLAMINGOS-2 near-infrared imager and spectrometry, or the Gemini South Adaptive Optics Imager (GSAOI), which provides uniform, diffraction-limited image quality to arcminute-scale fields of view. GeMS achieved first light on December 16, 2011. Using a constellation of five laser guide stars, it achieved FWHM of 0.08 arc-seconds in H band over a field of 87 arc-seconds square. An adaptive secondary mirror has been considered for Gemini, which would provide reasonable adaptive-optics corrections (equivalent to natural seeing at the 20th-percentile level for 80% of the time) to all instruments on the telescope to which it is attached. However, , there are no plans to implement such an upgrade to either telescope. Instruments In recent years the Gemini Board has directed the observatory to support only four instruments at each telescope. Because Gemini-N and Gemini-S are essentially identical, the observatory is able to move instruments between the two sites, and does so on a regular basis. Two of the most popular instruments are the Gemini Multi-Object Spectrographs (GMOS) on each of the telescopes. 
Built in Edinburgh, Scotland by the UK Astronomy Technology Centre, these instruments provide multi-object spectroscopy, long-slit spectroscopy, imaging, and integral field spectroscopy at optical wavelengths. The detectors in each instrument have recently been upgraded with Hamamatsu Photonics devices, which significantly improve performance in the far red part of the optical spectrum (700–1,000 nm). Near-infrared imaging and spectroscopy are provided by the NIRI, NIFS, GNIRS, FLAMINGOS-2, and GSAOI instruments. The availability and detailed descriptions of these instruments is documented on the Gemini Observatory Web site. One of the most exciting new instruments at Gemini is GPI, the Gemini Planet Imager. GPI was built by a consortium of US and Canadian institutions to fulfill the requirements of the ExAOC Extreme Adaptive Optics Coronagraph proposal. GPI is an extreme adaptive-optics imaging polarimeter/integral-field spectrometer, which provides diffraction-limited data between 0.9 and 2.4 microns. GPI is able to directly image planets around nearby stars that are one-millionth as bright as their host star. Gemini also supports a vigorous visitor instrument program. Instruments may be brought to either telescope for short periods of time and used for specific observing programs by the instrument teams. In return for access to Gemini, the instruments are then made available to the entire Gemini community, so that they may be used for other science projects. Instruments that have made use of this program include the Differential Speckle Survey Instrument (DSSI), the Phoenix near-infrared echelle spectrometer, and the TEXES mid-infrared spectrometer. The ESPaDOnS spectrograph situated in the basement of the Canada–France–Hawaii Telescope (CFHT) is also being used as a "visitor instrument", even though it never moves from CFHT. The instrument is connected to Gemini-North via a 270 meter long optic fibre. Known as GRACES, this arrangement provides very high resolution optical spectroscopy on an 8-meter class telescope. Gemini's silver coating and infrared optimization allow sensitive observations in the mid-infrared part of the spectrum (5–27 μm). Historically, mid-infrared observations have been obtained using T-ReCS at Gemini South and Michelle at Gemini North. Both instruments have imaging and spectroscopic capabilities, though neither is currently being used at Gemini. Instrumentation development issues The first phase of Gemini instrumentation development did not run smoothly; schedules slipped by several years, and budgets sometimes overran by as much as a factor of two. In 2003 the instrument-development process was re-analysed in the Aspen report; for example, an incentive program was introduced where instrument developers were guaranteed substantial allocations of telescope time if they delivered the instrument on time and lose it as the instrument is delayed. A wide-field multi-object spectrograph achieved substantial scientific support, but would have required major changes to the design of the telescope – effectively it would have required one of the telescopes to be devoted to that instrument. The project was terminated in 2009. Second-round instrumentation development In January 2012, the Gemini Observatory started a new round of instrumentation development. This process has since resulted in the development of a high-resolution optical spectrograph known as GHOST, with commissioning beginning in April 2022 and on-sky science commissioning planned for June 2022. 
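To put the adaptive-optics figures quoted above into context, the sketch below evaluates the usual diffraction-limit estimate θ ≈ 1.22 λ/D for an 8.1 m aperture. The near-infrared wavelengths are typical band centres assumed for illustration, and delivered image quality depends on the correction actually achieved, so this is only the theoretical floor.

```python
# Diffraction-limited angular resolution theta ~ 1.22 * lambda / D for an
# 8.1 m Gemini primary, at representative near-infrared wavelengths.
import math

D = 8.1                                    # primary mirror diameter, m
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

for band, lam_um in (("H (1.65 um)", 1.65), ("K (2.2 um)", 2.2)):
    theta = 1.22 * lam_um * 1e-6 / D * RAD_TO_ARCSEC
    print(f"{band}: {theta:.3f} arcsec")
# ~0.05 arcsec at H band, comparable to the 0.08 arcsec FWHM quoted for GeMS
# and far sharper than typical uncorrected seeing of ~0.5-1 arcsec.
```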
Observing and community support The Gemini Observatory's primary mission is to serve the general astronomical communities in all of the participant countries; indeed, the Observatory provides the bulk of general access to large optical/infrared telescopes for many of the participants, and represents the only public-access 8 meter class facility in the U.S. The observatory reaches out to its community through National Gemini Offices (NGOs), the U.S. office being located in Tucson at the National Optical Astronomy Observatory. The NGOs provide general support to the users, from proposal preparation through data acquisition, reduction, and analysis. In any given year the two telescopes typically provided data for over 400 discrete science projects, over two-thirds of which are led by U.S. astronomers. About 50-70 percent of the top-ranked "Band 1" proposals reach 100 percent completion in any given year. Of order 90 percent of the available (clear weather) time is used for science, the rest being allocated to scheduled maintenance or lost to unforeseen technical faults. Gemini has in recent years developed innovative new observing modes. These include the ‘Large and Long’ program to support requests for large amounts of telescope time and the ‘Fast Turnaround’ program to provide quick access to the telescope. These and other modes have been approved by the Gemini Board of Directors and are proving popular with the user community. In 2015 up to 20 percent of available telescope time was used for Large and Long programs, which in terms of hours of observing attracted five times more user demand than could be accommodated. In the same period approximately 10 percent of telescope time was assigned to the Fast Turnaround program, which in the second half of 2015 was over-subscribed by a factor of 1.6. In 2015 the remaining U.S. time allocation on Gemini was over-subscribed by a factor of approximately 2, consistent with recent years. Prospects (2017 onwards) In 2010, the U.S. National Research Council (NRC) conducted its sixth decadal survey in astronomy and astrophysics to recommend key science questions and new initiatives for the current decade. Since both the NRC recommendations and current programs could not be accommodated within subsequent budget projections, the National Science Foundation's Division of Astronomical Sciences, through the Advisory Committee of the Directorate for Mathematical and Physical Sciences (MPS), conducted a community-based portfolio review to make implementation recommendations that would best respond to the decadal survey science questions. The resulting report, Advancing Astronomy in the Coming Decade: Opportunities and Challenges, was released in August 2012 and included recommendations related to all of the major telescope facilities funded by NSF. The Portfolio Review Committee report ranked Gemini Observatory as a critical component of the U.S.'s future astronomical research resources and recommended that the U.S. retain a majority share in the international partnership for at least the next several years. However, given the constraints that were considered, the Committee recommended that the U.S. contribution to Gemini operations be capped in 2017 and beyond. NSF has since commissioned a National Research Council study, titled "A Strategy to Optimize the U.S. Optical/Infrared System in the Era of the Large Synoptic Survey Telescope". 
The report made a recommendation that NSF work with its partners in Gemini to ensure that Gemini-South is well positioned for faint-object spectroscopy early in the era of the Large Synoptic Survey Telescope (LSST). Observatory support for the development of a next-generation medium-resolution spectrograph over the next 5–6 years addresses this recommendation directly. With the signing of the new International Agreement in late 2015, support from the five signatories (the U.S., Canada, Argentina, Brazil, and Chile) is secured for the period 2016–2021. Australia withdrew from the Gemini Observatory partnership in 2015, and Korea has joined the partnership in 2018. The currently effective International Agreement signed in 2020 November has the six signatories (Argentina, Brazil, Canada, Chile, Korea, and the US), and the Agreement is effective till the end of 2026. Observations and research The Gemini was one of the telescopes that observed the turn-on of a nuclear transient, along with the Swift space telescope (aka Neil Gehrels Swift Observatory since 2018) and the Hiltner telescope (MDM observatory). The transient event was called PS1-13cbe and was located in the Galaxy SDSS J222153.87+003054.2 Incidents On 22 October 2022, the 8.1m primary mirror of the Gemini North telescope was damaged when it touched an earthquake restraint while on a wash cart, being moved for stripping the silver coating before recoating. Two chips were created, on the bottom edge and at the margin of the main mirror. This has since been repaired after several months of downtime and was back observing the sky on 2 June 2023 with apparently no loss of performance or quality.
Technology
Ground-based observatories
null
527661
https://en.wikipedia.org/wiki/Hubble%20Ultra-Deep%20Field
Hubble Ultra-Deep Field
The Hubble Ultra-Deep Field (HUDF) is a deep-field image of a small region of space in the constellation Fornax, containing an estimated 10,000 galaxies. The original data for the image was collected by the Hubble Space Telescope from September 2003 to January 2004 and the first version of the image was released on March 9, 2004. It includes light from galaxies that existed about 13 billion years ago, some 400 to 800 million years after the Big Bang. The HUDF image was taken in a section of the sky with a low density of bright stars in the near-field, allowing much better viewing of dimmer, more distant objects. Located southwest of Orion in the southern-hemisphere constellation Fornax, the rectangular image is 2.4 arcminutes to an edge, or 3.4 arcminutes diagonally. This is about one-tenth of the angular diameter of a full moon viewed from Earth (less than 34 arcminutes), smaller than a 1 mm2 piece of paper held 1 m away, and equal to roughly one twenty-six-millionth of the total area of the sky. The image is oriented so that the upper left corner points toward north (−46.4°) on the celestial sphere. In August and September 2009, the HUDF field was observed at longer wavelengths (1.0 to 1.6 μm) using the infrared channel of the recently fitted Wide Field Camera 3 (WFC3). This additional data enabled astronomers to identify a new list of potentially very distant galaxies. On September 25, 2012, NASA released a new version of the Ultra-Deep Field dubbed the eXtreme Deep Field (XDF). The XDF reveals galaxies from 13.2 billion years ago, including one thought to have formed only 450 million years after the Big Bang. On June 3, 2014, NASA released the Hubble Ultra Deep Field 2014 image, the first HUDF image to use the full range of ultraviolet to near-infrared light. A composite of separate exposures taken in 2002 to 2012 with Hubble's Advanced Camera for Surveys and Wide Field Camera 3, it shows some 10,000 galaxies. On January 23, 2019, the Instituto de Astrofísica de Canarias released an even deeper version of the infrared images of the Hubble Ultra Deep Field obtained with the WFC3 instrument, named the ABYSS Hubble Ultra Deep Field. The new images improve the previous reduction of the WFC3/IR images, including careful sky background subtraction around the largest galaxies on the field of view. After this update, some galaxies were found to be almost twice as big as previously measured. Planning In the years since the original Hubble Deep Field, the Hubble Deep Field South and the GOODS sample were analyzed, providing increased statistics at the high redshifts probed by the HDF. When the Advanced Camera for Surveys (ACS) detector was installed on the HST, it was realized that an ultra-deep field could observe galaxy formation out to even higher redshifts than had currently been observed, as well as providing more information about galaxy formation at intermediate redshifts (z~2). A workshop on how to best carry out surveys with the ACS was held at STScI in late 2002. At the workshop Massimo Stiavelli advocated an Ultra Deep Field as a way to study the objects responsible for the reionization of the Universe. Following the workshop, the STScI Director Steven Beckwith decided to devote 400 orbits of Director's Discretionary time to the UDF and appointed Stiavelli as the lead of the Home Team implementing the observations. Unlike the Deep Fields, the HUDF does not lie in Hubble's Continuous Viewing Zone (CVZ). 
The earlier observations, using the Wide Field and Planetary Camera 2 (WFPC2) camera, were able to take advantage of the increased observing time on these zones by using wavelengths with higher noise to observe at times when earthshine contaminated the observations; however, ACS does not observe at these wavelengths, so the advantage was reduced. As with the earlier fields, this one was required to contain very little emission from our galaxy, with little Zodiacal dust. The field was also required to be in a range of declinations such that it could be observed both by southern hemisphere instruments, such as the Atacama Large Millimeter Array, and northern hemisphere ones, such as those located on Hawaii. It was ultimately decided to observe a section of the Chandra Deep Field South, due to existing deep X-ray observations from Chandra X-ray Observatory and two interesting objects already observed in the GOODS sample at the same location: a redshift 5.8 galaxy and a supernova. The coordinates of the field are right ascension , declination (J2000). The field is 200 arcseconds to a side, with a total area of 11 square arcminutes, and lies in the constellation of Fornax. Observations Four filters were used on the ACS, centered on 435, 606, 775 and 850 nm, with exposure times set to give equal sensitivity in all filters. These wavelength ranges match those used by the GOODS sample, allowing direct comparison between the two. As with the Deep Fields, the HUDF used Directors Discretionary Time. In order to get the best resolution possible, the observations were dithered by pointing the telescope at slightly different positions for each exposure—a process trialled with the Hubble Deep Field—so that the final image has a higher resolution than the pixels on their own would normally allow. The observations were done in two sessions, from September 23 to October 28, 2003, and December 4, 2003, to January 15, 2004. The total exposure time is just under 1 million seconds, from 400 orbits, with a typical exposure time of 1200 seconds. In total, 800 ACS exposures were taken over the course of 11.3 days, two per orbit; NICMOS observed for 4.5 days. All the individual ACS exposures were processed and combined by Anton Koekemoer into a set of scientifically useful images, each with a total exposure time ranging from 134,900 seconds to 347,100 seconds. To observe the whole sky to the same sensitivity, the HST would need to observe continuously for a million years. The sensitivity of the ACS limits its capability of detecting galaxies at high redshift to about 6. The deep NICMOS fields obtained in parallel to the ACS images could in principle be used to detect galaxies at redshift 7 or higher but they were lacking visible band images of similar depth. These are necessary to identify high redshift objects as they should not be seen in the visible bands. In order to obtain deep visible exposures on top of the NICMOS parallel fields a follow-up program, HUDF05, was approved and granted 204 orbits to observe the two parallel fields (GO-10632). The orientation of the HST was chosen so that further NICMOS parallel images would fall on top of the main UDF field. 
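A quick arithmetic check on the coverage and exposure figures quoted above; the only quantity introduced here is the total solid angle of the sky, and the whole-sky extrapolation at the end is a rough order-of-magnitude estimate.

```python
# Sanity checks on the HUDF coverage and exposure bookkeeping.
import math

SKY_ARCMIN2 = 4 * math.pi * (180 / math.pi * 60) ** 2    # whole sky in arcmin^2

# The released image is 2.4 arcmin on a side ...
image_area = 2.4 ** 2
print(f"image / sky = 1 / {SKY_ARCMIN2 / image_area / 1e6:.0f} million")   # ~1/26 million

# ... while the ACS field proper is 200 arcsec square (~11 arcmin^2).
field_area = (200 / 60) ** 2
print(f"ACS field area ~ {field_area:.1f} arcmin^2")

# Exposure bookkeeping: 800 exposures of ~1200 s each over 400 orbits.
total_exposure = 800 * 1200.0
print(f"total exposure ~ {total_exposure:.2e} s (just under one million seconds)")

# Whole-sky extrapolation at 11.3 days of observing per HUDF-sized pointing:
years = (SKY_ARCMIN2 / field_area) * 11.3 / 365.25
print(f"whole sky at this depth ~ {years:.0e} years of continuous observing")
# A few hundred thousand years -- the same order of magnitude as the
# "million years" figure quoted above.
```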
After the installation of WFC3 on Hubble in 2009, the HUDF09 programme (GO-11563) devoted 192 orbits to observations of three fields, including HUDF, using the newly available F105W, F125W and F160W infra-red filters (which correspond to the Y, J and H bands): Contents The HUDF is the deepest image of the universe ever taken and has been used to search for galaxies that existed between 400 and 800 million years after the Big Bang (redshifts between 7 and 12). Several galaxies in the HUDF are candidates, based on photometric redshifts, to be amongst the most distant astronomical objects. The red dwarf UDF 2457 at distance of 59,000 light-years is the furthest star resolved by the HUDF. The star near the center of the field is USNO-A2.0 0600–01400432 with apparent magnitude of 18.95. The field imaged by the ACS contains over 10,000 objects, the majority of which are galaxies, many at redshifts greater than 3, and some that probably have redshifts between 6 and 7. The NICMOS measurements may have discovered galaxies at redshifts up to 12. Scientific results The HUDF has revealed high rates of star formation during the very early stages of galaxy formation, within a billion years after the Big Bang. It has also enabled improved characterization of the distribution of galaxies, their numbers, sizes and luminosities at different epochs, aiding investigation into the evolution of galaxies. Galaxies at high redshifts have been confirmed to be smaller and less symmetrical than ones at lower redshifts, illuminating the rapid evolution of galaxies in the first couple of billion years after the Big Bang. Hubble eXtreme Deep Field The Hubble eXtreme Deep Field (HXDF), released on September 25, 2012, is an image of a portion of space in the center of the Hubble Ultra Deep Field image. Representing a total of two million seconds (about 23 days) of exposure time collected over 10 years, the image covers an area of 2.3 arcminutes by 2 arcminutes, or about 80% of the area of the HUDF. This represents about one thirty-two millionth of the sky. The HXDF contains about 5,500 galaxies, the oldest of which are seen as they were 13.2 billion years ago. The faintest galaxies are one ten-billionth the brightness of what the human eye can see. The red galaxies in the image are the remnants of galaxies after major collisions during their elderly years. Many of the smaller galaxies in the image are very young galaxies that eventually developed into major galaxies, similar to the Milky Way and other galaxies in our galactic neighborhood.
Physical sciences
Notable patches of universe
Astronomy
527756
https://en.wikipedia.org/wiki/Photophobia
Photophobia
Photophobia is a medical symptom of abnormal intolerance to visual perception of light. As a medical symptom, photophobia is not a morbid fear or phobia, but an experience of discomfort or pain to the eyes due to light exposure or to the presence of actual physical sensitivity of the eyes, though the term is sometimes additionally applied to abnormal or irrational fear of light, such as heliophobia. The term photophobia comes from the Greek φῶς (phōs), meaning "light", and φόβος (phóbos), meaning "fear". Causes Patients may develop photophobia as a result of several different medical conditions, related to the eye, the nervous system, genetics, or other causes. Photophobia may manifest itself in an increased response to light starting at any step in the visual system, such as: Too much light entering the eye. Too much light can enter the eye if it is damaged, such as with corneal abrasion and retinal damage, or if its pupil is unable to normally constrict (seen with damage to the oculomotor nerve). In albinism, the lack of pigment in the colored part of the eyes (irises) makes them somewhat translucent. This means that the irises cannot completely block light from entering the eye. Overstimulation of the photoreceptors in the retina Excessive electric impulses to the optic nerve Excessive response in the central nervous system Common causes of photophobia include migraine headaches, TMJ, cataracts, Sjögren syndrome, mild traumatic brain injury (MTBI), or severe ophthalmologic diseases such as uveitis or corneal abrasion. A more extensive list follows: Eye-related Causes of photophobia relating directly to the eye itself include: Achromatopsia Aniridia Anticholinergic drugs may cause photophobia by paralyzing the iris sphincter muscle Aphakia Blepharitis Buphthalmos Cataracts Coloboma Cone dystrophy Congenital abnormalities of the eye Viral conjunctivitis Corneal abrasion Corneal dystrophy Corneal ulcer Disruption of the corneal epithelium, such as that caused by a corneal foreign body or keratitis Ectopia lentis Endophthalmitis Eye trauma caused by disease, injury, or infection such as chalazion, episcleritis, keratoconus, or optic nerve hypoplasia Hydrophthalmos, or congenital glaucoma Iritis Isotretinoin has been associated with photophobia Optic neuritis Pigment dispersion syndrome Pupillary dilation (naturally or chemically induced) Retinal detachment Scarring of the cornea or sclera Uveitis Mustard gas exposure Nervous-system-related Neurological causes for photophobia include: Autism spectrum disorder Chiari malformation Dyslexia Encephalitis, including myalgic encephalomyelitis Meningitis Trigeminal disturbance causes central sensitization (hence, multiple other associated hypersensitivities). Causes can be bad bite, infected tooth, etc. Progressive supranuclear palsy, where photophobia can sometimes precede the clinical diagnosis by years Subarachnoid haemorrhage Tumor of the posterior cranial fossa Visual snow along with many symptoms Other causes Ankylosing spondylitis Albinism Ariboflavinosis Benzodiazepines Chemotherapy Chikungunya Cystinosis Drug withdrawal Ehlers–Danlos syndrome Infectious mononucleosis Influenza Magnesium deficiency Mercury poisoning Migraine Rabies Tyrosinemia type II Superior canal dehiscence syndrome Treatment Treatment for light sensitivity addresses the underlying cause, whether it be an eye, nervous system or other cause. If the triggering factor or underlying cause can be identified and treated, photophobia may disappear. Tinted glasses are sometimes used. 
Artificial light People with photophobia may feel eye pain from even moderate levels of artificial light and avert their eyes from artificial light sources. Ambient levels of artificial light may also be intolerable to persons afflicted with photophobia, such that they dim or remove the light source, or go into a more dimly lit room, such as one lit by refraction of light from outside the room. Alternatively, they may wear dark sunglasses, sunglasses designed to filter peripheral light, precision tinted glasses, and/or wide-brimmed sun hats or baseball caps. Some types of photophobia may be helped with the use of precision tinted lenses which block the green-to-blue end of the light spectrum without blurring or impeding vision. Other strategies for relieving photophobia include the use of tinted contact lenses and/or the use of prescription eye drops that constrict the pupil, thus reducing the amount of light entering the eye. Such strategies may be limited by the amount of light needed for proper vision under given conditions, however. Dilating drops may also help relieve eye pain from muscle spasms or seizures triggered by lighting/migraine, allowing a person to "ride out the migraine" in a dark or dim room. A paper by Stringham and Hammond, published in the Journal of Food Science, reviews studies of the effects of consuming lutein and zeaxanthin on visual performance, and notes a decrease in sensitivity to glare. Disability Photophobia may preclude or limit a person from working in places where lighting is used, unless the person is able to obtain a reasonable accommodation, such as being allowed to wear tinted glasses. Some people with photophobia may thereby be better able to work at night or be more easily accommodated in the workplace at night. Outdoor night lighting may be equally offensive for persons with photophobia, however, given the wide variety of bright lighting used for illuminating residential, commercial and industrial areas, such as LED (light-emitting diode) lamps. The increasing popularity of "overpoweringly intense" LED headlights being used on "pickups and S.U.V.s" has prompted more frequent reports of photophobia among motorists, cyclists, and pedestrians.
Biology and health sciences
Symptoms and signs
Health
527892
https://en.wikipedia.org/wiki/Magma%20chamber
Magma chamber
A magma chamber is a large pool of liquid rock beneath the surface of the Earth. The molten rock, or magma, in such a chamber is less dense than the surrounding country rock, which produces buoyant forces on the magma that tend to drive it upwards. If the magma finds a path to the surface, then the result will be a volcanic eruption; consequently, many volcanoes are situated over magma chambers. These chambers are hard to detect deep within the Earth, and therefore most of those known are close to the surface, commonly between 1 km and 10 km down. Dynamics of magma chambers Magma rises through cracks from beneath and across the crust because it is less dense than the surrounding rock. When the magma cannot find a path upwards it pools into a magma chamber. These chambers are commonly built up over time, by successive horizontal or vertical magma injections. The influx of new magma causes pre-existing crystals to react and the pressure in the chamber to increase. The residing magma starts to cool, with the higher melting point components such as olivine crystallizing out of solution, particularly near the cooler walls of the chamber, and forming a denser conglomerate of minerals which sinks (cumulate rock). Upon cooling, new mineral phases saturate and the rock type changes (e.g. fractional crystallization), typically forming (1) gabbro, diorite, tonalite and granite or (2) gabbro, diorite, syenite and granite. If magma resides in a chamber for a long period, then it can become stratified, with lower density components rising to the top and denser materials sinking. Rocks accumulate in layers, forming a layered intrusion. Any subsequent eruption may produce distinctly layered deposits; for example, the deposits from the 79 AD eruption of Mount Vesuvius include a thick layer of white pumice from the upper portion of the magma chamber overlaid with a similar layer of grey pumice produced from material erupted later from lower in the chamber. Another effect of the cooling of the chamber is that the solidifying crystals will release the gas (primarily steam) that was dissolved when the magma was liquid, causing the pressure in the chamber to rise, possibly sufficiently to produce an eruption. Additionally, the removal of the lower melting point components will tend to make the magma more viscous (by increasing the concentration of silicates). Thus, stratification of a magma chamber may result in an increase in the amount of gas within the magma near the top of the chamber, and also make this magma more viscous, potentially leading to a more explosive eruption than would be the case had the chamber not become stratified. Supervolcano eruptions are possible only when an extraordinarily large magma chamber forms at a relatively shallow level in the crust. However, the rate of magma production in tectonic settings that produce supervolcanoes is quite low, around 0.002 km³ per year, so that accumulation of sufficient magma for a supereruption takes 10⁵ to 10⁶ years. This raises the question of why the buoyant silicic magma does not break through to the surface more frequently in relatively small eruptions. The combination of regional extension, which lowers the maximum attainable overpressure on the chamber roof, and a large magma chamber with warm walls, which has a high effective viscoelasticity, may suppress rhyolite dike formation and allow such large chambers to fill with magma. 
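As a rough check on the timescale quoted above, the accumulation time is simply the required magma volume divided by the supply rate. The volumes used below (a few hundred to a couple of thousand cubic kilometres) are an assumed illustrative range for a supereruption, not figures taken from this article:

\[
t = \frac{V}{\dot V}, \qquad
\frac{200\ \text{km}^3}{0.002\ \text{km}^3\,\text{yr}^{-1}} = 10^{5}\ \text{yr},
\qquad
\frac{2000\ \text{km}^3}{0.002\ \text{km}^3\,\text{yr}^{-1}} = 10^{6}\ \text{yr},
\]

which is consistent with the 10⁵ to 10⁶ year figure given in the text.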
If the magma is not vented to the surface in a volcanic eruption, it will slowly cool and crystallize at depth to form an intrusive igneous body, one, for example, composed of granite or gabbro (see also pluton). Often, a volcano may have a deep magma chamber many kilometers down, which supplies a shallower chamber near the summit. The location of magma chambers can be mapped using seismology: seismic waves from earthquakes move more slowly through liquid rock than solid, allowing measurements to pinpoint the regions of slow movement which identify magma chambers. As a volcano erupts, surrounding rock will collapse into the emptying chamber. If the chamber's size is reduced considerably, the resulting depression at the surface can form a caldera. Examples In Iceland, Thrihnukagigur, discovered in 1974 by cave explorer Árni B. Stefánsson and opened for tourism in 2012, is the only volcano in the world where visitors can take an elevator and safely descend into the magma chamber.
Physical sciences
Volcanology
Earth science
528080
https://en.wikipedia.org/wiki/FADEC
FADEC
A full authority digital engine (or electronics) control (FADEC) is a system consisting of a digital computer, called an "electronic engine controller" (EEC) or "engine control unit" (ECU), and its related accessories that control all aspects of aircraft engine performance. FADECs have been produced for both piston engines and jet engines. History The goal of any engine control system is to allow the engine to perform at maximum efficiency for a given condition. Originally, engine control systems consisted of simple mechanical linkages connected physically to the engine. By moving these levers the pilot or the flight engineer could control fuel flow, power output, and many other engine parameters. The mechanical/hydraulic engine control unit for Germany's BMW 801 piston aviation radial engine of World War II was just one notable example of this in its later stages of development. This mechanical engine control was progressively replaced first by analogue electronic engine control and, later, digital engine control. Analogue electronic control varies an electrical signal to communicate the desired engine settings. The system was an evident improvement over mechanical control but had its drawbacks, including common electronic noise interference and reliability issues. Full authority analogue control was used in the 1960s and introduced as a component of the Rolls-Royce/Snecma Olympus 593 engine of the supersonic transport aircraft Concorde. However, the more critical inlet control was digital on the production aircraft. Digital electronic control followed. In 1968, Rolls-Royce and Elliott Automation, in conjunction with the National Gas Turbine Establishment, worked on a digital engine control system that completed several hundred hours of operation on a Rolls-Royce Olympus Mk 320. In the 1970s, NASA and Pratt and Whitney experimented with their first experimental FADEC, first flown on an F-111 fitted with a highly modified Pratt & Whitney TF30 left engine. The experiments led to Pratt & Whitney F100 and Pratt & Whitney PW2000 being the first military and civil engines, respectively, fitted with FADEC, and later the Pratt & Whitney PW4000 as the first commercial "dual FADEC" engine. The first FADEC in service was the Rolls-Royce Pegasus engine developed for the Harrier II by Dowty and Smiths Industries Controls. Function True full authority digital engine controls have no form of manual override nor manual controls available, placing full authority over all of the operating parameters of the engine in the hands of the computer. If a total FADEC failure occurs, the engine fails. If the engine is controlled digitally and electronically but allows for manual override, it is considered to be an EEC or ECU. An EEC, though a component of a FADEC, is not by itself FADEC. When standing alone, the EEC makes all of the decisions until the pilot wishes to intervene. The term FADEC is often misused for partial digital engine controls, such as those only electronically controlling fuel and ignition. A turbocharged piston engine would require digital control over all intake airflow to meet the definition of FADEC. FADEC works by receiving multiple input variables of the current flight condition including air density, power lever request position, engine temperatures, engine pressures, and many other parameters. The inputs are received by the EEC and analyzed up to 70 times per second. 
Engine operating parameters such as fuel flow, stator vane position, air bleed valve position, and others are computed from this data and applied as appropriate. FADEC also controls engine starting and restarting. The FADEC's basic purpose is to provide optimum engine efficiency for a given flight condition. FADEC not only provides for efficient engine operation, it also allows the manufacturer to program engine limitations and receive engine health and maintenance reports. For example, to avoid exceeding a certain engine temperature, the FADEC can be programmed to automatically take the necessary measures without pilot intervention. Safety With the operation of the engines relying on automation, safety is a great concern. Redundancy is provided in the form of two or more separate but identical digital channels. Each channel may provide all engine functions without restriction. FADEC also monitors a variety of data coming from the engine subsystems and related aircraft systems, providing for fault tolerant engine control. Engine control problems simultaneously causing loss of thrust on up to three engines have been cited as causal in the crash of an Airbus A400M aircraft at Seville Spain on 9 May 2015. Airbus Chief Strategy Officer Marwan Lahoud confirmed on 29 May that incorrectly installed engine control software caused the fatal crash. "There are no structural defects [with the aircraft], but we have a serious quality problem in the final assembly." Applications A typical civilian transport aircraft flight may illustrate the function of a FADEC. The flight crew first enters flight data such as wind conditions, runway length, or cruise altitude, into the flight management system (FMS). The FMS uses this data to calculate power settings for different phases of the flight. At take-off, the flight crew advances the power lever to a predetermined setting, or opts for an auto-throttle take-off if available. The FADECs now apply the calculated take-off thrust setting by sending an electronic signal to the engines; there is no direct linkage to open fuel flow. This procedure can be repeated for any other phase of flight. In flight, small changes in operation are constantly made to maintain efficiency. Maximum thrust is available for emergency situations if the power lever is advanced to full, but limitations can not be exceeded; the flight crew has no means of manually overriding the FADEC. Advantages Automatic engine protection against out-of-tolerance operations Safer as the multiple channel FADEC computer provides redundancy in case of failure Care-free engine handling, with guaranteed thrust settings Ability to use single engine type for wide thrust requirements by just reprogramming the FADECs Provides semi-automatic engine starting Provides high-idle control appropriate for piston engine warmup Better systems integration with engine and aircraft systems Can provide engine long-term health monitoring and diagnostics Number of external and internal parameters used in the control processes increases by one order of magnitude Reduces the number of parameters to be monitored by flight crews Due to the high number of parameters monitored, the FADEC makes possible "Fault Tolerant Systems" (where a system can operate within required reliability and safety limitation with certain fault configurations) Disadvantages Full authority digital engine controls have no form of manual override available, placing full authority over the operating parameters of the engine in the hands of the computer. 
(see note) If a total FADEC failure occurs, the engine fails. (see note) Upon total FADEC failure, pilots have no manual controls for engine restart, throttle, or other functions. (see note) Single point of failure risk can be mitigated with redundant FADECs (assuming that the failure is a random hardware failure and not the result of a design or manufacturing error, which may cause identical failures in all identical redundant components). (see note) High system complexity compared to hydromechanical, analogue or manual control systems High system development and validation effort due to the complexity Whereas in crisis (for example, imminent terrain contact), a non-FADEC engine can produce significantly more than its rated thrust, a FADEC engine will always operate within its limits. (see note) Note: Most modern FADEC controlled aircraft engines (particularly those of the turboshaft variety) can be overridden and placed in manual mode, effectively countering most of the disadvantages on this list. Pilots should be very aware of where their manual override is located, because inadvertent engagement of the manual mode can lead to an overspeed of the engine. Requirements Engineering processes must be used to design, manufacture, install and maintain the sensors which measure and report flight and engine parameters to the control system itself. Formal systems engineering processes are often used in the design, implementation and testing of the software used in these safety-critical control systems. This requirement led to the development and use of specialized software such as model-based systems engineering (MBSE) tools. The application development toolset SCADE (from Ansys) (not to be confused with the application category SCADA) is an example of an MBSE tool and has been used as part of the development of FADEC systems. Research NASA has analyzed a distributed FADEC architecture rather than the current centralized one, specifically for helicopters. Greater flexibility and lower life cycle costs are likely advantages of distribution.
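To make the closed-loop behaviour described in the Function and Applications sections more concrete, the sketch below is a minimal, purely illustrative engine-control step written in Python. The sensor set, the limit values, and the simple proportional fuel-flow law are all assumptions chosen for illustration; it is not modelled on any real FADEC or EEC implementation.

```python
# Illustrative sketch of a FADEC-style control step (hypothetical names, limits, and control law).
from dataclasses import dataclass

@dataclass
class SensorData:
    power_lever_angle: float  # degrees; 0 = idle, 45 = maximum (assumed convention)
    n1_percent: float         # measured spool speed as a percentage of rated speed
    egt_celsius: float        # exhaust gas temperature

# Programmed engine limitations (hypothetical values)
N1_IDLE, N1_MAX = 20.0, 104.0             # % of rated speed
EGT_MAX = 950.0                           # degrees Celsius
FUEL_FLOW_MIN, FUEL_FLOW_MAX = 0.05, 1.0  # normalised fuel-flow command

def control_step(sensors: SensorData, fuel_flow: float) -> float:
    """One iteration of the loop; the text describes inputs being analysed up to 70 times per second."""
    # Map the power-lever request to a target spool speed.
    n1_target = N1_IDLE + (sensors.power_lever_angle / 45.0) * (N1_MAX - N1_IDLE)

    # Automatic protection: never command beyond programmed limits.
    n1_target = min(n1_target, N1_MAX)
    if sensors.egt_celsius > EGT_MAX:
        # Pull the target back if the engine is over-temperature.
        n1_target = min(n1_target, sensors.n1_percent - 1.0)

    # Simple proportional adjustment of fuel flow toward the target speed.
    error = n1_target - sensors.n1_percent
    fuel_flow += 0.01 * error
    return max(FUEL_FLOW_MIN, min(FUEL_FLOW_MAX, fuel_flow))
```

A real controller computes many more outputs (stator vane position, bleed-valve position, ignition) from many more inputs and runs on two or more redundant channels; the point of the sketch is only the structure of reading sensors, enforcing programmed limits, and commanding fuel flow with no mechanical linkage in the loop.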
Technology
Aircraft components
null
528155
https://en.wikipedia.org/wiki/Solar%20radius
Solar radius
Solar radius is a unit of distance used to express the size of stars in astronomy relative to the Sun. The solar radius is usually defined as the radius to the layer in the Sun's photosphere where the optical depth equals 2/3. One solar radius is approximately 10 times the average radius of Jupiter, 109 times the radius of the Earth, and 1/215th of an astronomical unit, the approximate distance between Earth and the Sun. The solar radius to either pole and that to the equator differ slightly due to the Sun's rotation, which induces an oblateness on the order of 10 parts per million. Measurements The uncrewed SOHO spacecraft was used to measure the radius of the Sun by timing transits of Mercury across the surface during 2003 and 2006. The result was a measured radius of . Haberreiter, Schmutz & Kosovichev (2008) determined the radius corresponding to the solar photosphere to be . This new value is consistent with helioseismic estimates; the same study showed that previous estimates using inflection point methods had been overestimated by approximately . Nominal solar radius In 2015, the International Astronomical Union passed Resolution B3, which defined a set of nominal conversion constants for stellar and planetary astronomy. Resolution B3 defined the nominal solar radius (symbol ) to be equal to exactly . The nominal value, which is the rounded value, within the uncertainty, given by Haberreiter, Schmutz & Kosovichev (2008), was adopted to help astronomers avoid confusion when quoting stellar radii in units of the Sun's radius, even as future observations will likely refine the Sun's actual photospheric radius (which is currently only known to an accuracy of about ±). Examples Solar radii are a common unit when describing spacecraft moving close to the Sun. Two spacecraft launched in the 2010s include: Solar Orbiter (as close as ) Parker Solar Probe (as close as )
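As a quick numerical illustration of the ratios quoted above, the standard IAU nominal constants can be used (these figures are not stated explicitly in the text above, so treat them as assumed reference values):

\[
\frac{1\ \text{au}}{R_\odot} \approx \frac{1.496\times 10^{11}\ \text{m}}{6.957\times 10^{8}\ \text{m}} \approx 215,
\qquad
\frac{R_\odot}{R_\oplus} \approx \frac{6.957\times 10^{8}\ \text{m}}{6.378\times 10^{6}\ \text{m}} \approx 109,
\]

matching the "1/215th of an astronomical unit" and "109 times the radius of the Earth" ratios given above.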
Physical sciences
Astronomical
Basics and measurement
528867
https://en.wikipedia.org/wiki/Surface%20integral
Surface integral
In mathematics, particularly multivariable calculus, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analogue of the line integral. Given a surface, one may integrate over this surface a scalar field (that is, a function of position which returns a scalar as a value), or a vector field (that is, a function which returns a vector as value). If a region R is not flat, then it is called a surface as shown in the illustration. Surface integrals have applications in physics, particularly with the theories of classical electromagnetism. Surface integrals of scalar fields Assume that f is a scalar, vector, or tensor field defined on a surface S. To find an explicit formula for the surface integral of f over S, we need to parameterize S by defining a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be , where varies in some region in the plane. Then, the surface integral is given by where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of , and is known as the surface element (which would, for example, yield a smaller value near the poles of a sphere, where the lines of longitude converge more dramatically, and latitudinal coordinates are more compactly spaced). The surface integral can also be expressed in the equivalent form where is the determinant of the first fundamental form of the surface mapping . For example, if we want to find the surface area of the graph of some scalar function, say , we have where . So that , and . So, which is the standard formula for the area of a surface described this way. One can recognize the vector in the second-last line above as the normal vector to the surface. Because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface, where the metric tensor is given by the first fundamental form of the surface. Surface integrals of vector fields Consider a vector field v on a surface S, that is, for each in S, v(r) is a vector. The integral of v on S was defined in the previous section. Suppose now that it is desired to integrate only the normal component of the vector field over the surface, the result being a scalar, usually called the flux passing through the surface. For example, imagine that we have a fluid flowing through S, such that v(r) determines the velocity of the fluid at r. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero because the fluid just flows in parallel to S, and neither in nor out. This also implies that if v does not just flow along S, that is, if v has both a tangential and a normal component, then only the normal component contributes to the flux. Based on this reasoning, to find the flux, we need to take the dot product of v with the unit surface normal n to S at each point, which will give us a scalar field, and integrate the obtained field as above. 
In other words, we have to integrate v with respect to the vector surface element , which is the vector normal to S at the given point, whose magnitude is We find the formula The cross product on the right-hand side of this expression is a (not necessarily unital) surface normal determined by the parametrisation. This formula defines the integral on the left (note the dot and the vector notation for the surface element). We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form, and then integrate its Hodge dual over the surface. This is equivalent to integrating over the immersed surface, where is the induced volume form on the surface, obtained by interior multiplication of the Riemannian metric of the ambient space with the outward normal of the surface. Surface integrals of differential 2-forms Let be a differential 2-form defined on a surface S, and let be an orientation preserving parametrization of S with in D. Changing coordinates from to , the differential forms transform as So transforms to , where denotes the determinant of the Jacobian of the transition function from to . The transformation of the other forms are similar. Then, the surface integral of f on S is given by where is the surface element normal to S. Let us note that the surface integral of this 2-form is the same as the surface integral of the vector field which has as components , and . Theorems involving surface integrals Various useful results for surface integrals can be derived using differential geometry and vector calculus, such as the divergence theorem, magnetic flux, and its generalization, Stokes' theorem. Dependence on parametrization Let us notice that we defined the surface integral by using a parametrization of the surface S. We know that a given surface might have several parametrizations. For example, if we move the locations of the North Pole and the South Pole on a sphere, the latitude and longitude change for all the points on the sphere. A natural question is then whether the definition of the surface integral depends on the chosen parametrization. For integrals of scalar fields, the answer to this question is simple; the value of the surface integral will be the same no matter what parametrization one uses. For integrals of vector fields, things are more complicated because the surface normal is involved. It can be proven that given two parametrizations of the same surface, whose surface normals point in the same direction, one obtains the same value for the surface integral with both parametrizations. If, however, the normals for these parametrizations point in opposite directions, the value of the surface integral obtained using one parametrization is the negative of the one obtained via the other parametrization. It follows that given a surface, we do not need to stick to any unique parametrization, but, when integrating vector fields, we do need to decide in advance in which direction the normal will point and then choose any parametrization consistent with that direction. Another issue is that sometimes surfaces do not have parametrizations which cover the whole surface. The obvious solution is then to split that surface into several pieces, calculate the surface integral on each piece, and then add them all up. 
This is indeed how things work, but when integrating vector fields, one needs to again be careful how to choose the normal-pointing vector for each piece of the surface, so that when the pieces are put back together, the results are consistent. For the cylinder, this means that if we decide that for the side region the normal will point out of the body, then for the top and bottom circular parts, the normal must point out of the body too. Last, there are surfaces which do not admit a surface normal at each point with consistent results (for example, the Möbius strip). If such a surface is split into pieces, on each piece a parametrization and corresponding surface normal is chosen, and the pieces are put back together, we will find that the normal vectors coming from different pieces cannot be reconciled. This means that at some junction between two pieces we will have normal vectors pointing in opposite directions. Such a surface is called non-orientable, and on this kind of surface, one cannot talk about integrating vector fields.
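For reference, the standard parameterized forms of the integrals discussed in this article can be written out explicitly. This is a sketch of the usual conventions, with r(s, t) a parameterization of the surface S over a region T in the (s, t) plane, chosen so that its normal points in the same direction as the chosen unit normal n:

\[
\iint_S f\,\mathrm{d}S
= \iint_T f\big(\mathbf{r}(s,t)\big)\,
\left\| \frac{\partial \mathbf{r}}{\partial s} \times \frac{\partial \mathbf{r}}{\partial t} \right\|
\mathrm{d}s\,\mathrm{d}t,
\qquad
\iint_S \mathbf{v}\cdot\mathrm{d}\mathbf{S}
= \iint_S (\mathbf{v}\cdot\mathbf{n})\,\mathrm{d}S
= \iint_T \mathbf{v}\big(\mathbf{r}(s,t)\big)\cdot
\left( \frac{\partial \mathbf{r}}{\partial s} \times \frac{\partial \mathbf{r}}{\partial t} \right)
\mathrm{d}s\,\mathrm{d}t .
\]

In the special case of a graph z = g(x, y), the surface element reduces to

\[
\mathrm{d}S = \sqrt{1 + \left(\frac{\partial g}{\partial x}\right)^{2} + \left(\frac{\partial g}{\partial y}\right)^{2}}\;\mathrm{d}x\,\mathrm{d}y,
\]

which is the standard surface-area formula alluded to in the scalar-field section above.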
Mathematics
Multivariable and vector calculus
null
529138
https://en.wikipedia.org/wiki/Megalodon
Megalodon
Otodus megalodon ( ; meaning "big tooth"), commonly known as megalodon, is an extinct species of giant mackerel shark that lived approximately 23 to 3.6 million years ago (Mya), from the Early Miocene to the Early Pliocene epochs. O. megalodon was formerly thought to be a member of the family Lamnidae and a close relative of the great white shark (Carcharodon carcharias), but has been reclassified into the extinct family Otodontidae, which diverged from the great white shark during the Early Cretaceous. While regarded as one of the largest and most powerful predators to have ever lived, megalodon is only known from fragmentary remains, and its appearance and maximum size are uncertain. Scientists differ on whether it would have more closely resembled a stockier version of the great white shark (Carcharodon carcharias), the basking shark (Cetorhinus maximus) or the sand tiger shark (Carcharias taurus). The most recent estimate with the least error range suggests a maximum length estimate up to , although the modal lengths are estimated at . Their teeth were thick and robust, built for grabbing prey and breaking bone, and their large jaws could exert a bite force of up to . Megalodon probably had a major impact on the structure of marine communities. The fossil record indicates that it had a cosmopolitan distribution. It probably targeted large prey, such as whales, seals and sea turtles. Juveniles inhabited warm coastal waters and fed on fish and small whales. Unlike the great white, which attacks prey from the soft underside, megalodon probably used its strong jaws to break through the chest cavity and puncture the heart and lungs of its prey. The animal faced competition from whale-eating cetaceans, such as Livyatan and other macroraptorial sperm whales and possibly smaller ancestral killer whales (Orcinus). As the shark preferred warmer waters, it is thought that oceanic cooling associated with the onset of the ice ages, coupled with the lowering of sea levels and resulting loss of suitable nursery areas, may have also contributed to its decline. A reduction in the diversity of baleen whales and a shift in their distribution toward polar regions may have reduced megalodon's primary food source. The shark's extinction coincides with a gigantism trend in baleen whales. Classification Prescientific and early research history Megalodon teeth have been excavated and used since ancient times. They were a valued artifact amongst pre-Columbian cultures in the Americas for their large sizes and serrated blades, from which they were modified into projectile points, knives, jewelry, and funeral accessories. At least some, such as the Panamanian Sitio Conte societies, seemed to have used them primarily for ceremonial purposes. Mining of megalodon teeth by the Algonquin peoples in the Chesapeake Bay and their selective trade with the Adena culture in Ohio occurred as early as 430 BC. The earliest written account of megalodon teeth was by Pliny the Elder in an AD 73 volume of Historia Naturalis, who described them as resembling petrified human tongues that Roman folklorists believed to have fallen from the sky during lunar eclipses and called them glossopetrae ("tongue stones"). The purported tongues were later thought in a 12th-century Maltese tradition to have belonged to serpents that Paul the Apostle turned to stone while shipwrecked there, and were given antivenom powers by the saint. 
Glossopetrae reappeared throughout Europe in late 13th to 16th century literature, ascribed more supernatural properties that cured a wider variety of poisons. Use of megalodon teeth for this purpose became widespread among medieval and Renaissance nobility, who fashioned them into protective amulets and tableware to purportedly detoxify poisoned liquids or bodies that touched the stones. By the 16th century, teeth were directly consumed as ingredients of European-made Goa stones. The true nature of the glossopetrae as shark's teeth was held by some since at least 1554, when cosmographer André Thevet described it as hearsay, although he did not believe it. The earliest scientific argument for this view was made by Italian naturalist Fabio Colonna, who in 1616 published an illustration of a Maltese megalodon tooth alongside a great white shark's and noted their striking similarities. He argued that the former and its likenesses were not petrified serpent's tongues but actually the teeth of similar sharks that washed up on shore. Colonna supported this thesis through an experiment of burning glossopetrae samples, from which he observed carbon residue he interpreted as proving an organic origin. However, interpretation of the stones as shark's teeth remained widely unaccepted. This was in part due to the inability to explain how some of them are found far from the sea. The shark tooth argument was academically raised again during the late 17th century by English scientists Robert Hooke, John Ray, and Danish naturalist Niels Steensen (Latinized Nicholas Steno). Steensen's argument in particular is most recognized as inferred from his dissection of the head of a great white caught in 1666. His 1667 report depicted engravings of a shark's head and megalodon teeth that became especially iconic. However, the illustrated head was not actually the head that Steensen dissected, nor were the fossil teeth illustrated by him. Both engravings were originally commissioned in the 1590s by Papal physician Michele Mercati, who also had in possession the head of a great white, for his book Metallotheca. The work remained unpublished in Steensen's time due to Mercati's premature death, and the former reused the two illustrations at the suggestion of Carlo Roberto Dati, who thought a depiction of the actual dissected shark was unsuitable for readers. Steensen also stood out in pioneering a stratigraphic explanation for how similar stones appeared further inland. He observed that rock layers bearing megalodon teeth contained marine sediments and hypothesized that these layers correlated to a period of flood that was later covered by terrestrial layers and uplifted by geologic activity. Swiss naturalist Louis Agassiz gave megalodon its scientific name in his seminal 1833-1843 work Recherches sur les poissons fossiles (Research on fossil fish). He named it Carcharias megalodon in an 1835 illustration of the holotype and additional teeth, congeneric with the modern sand tiger shark. The specific name is a portmanteau of the Ancient Greek words μεγάλος (megálos, meaning "big") and ὀδών (odṓn, meaning "tooth"), combined meaning "big tooth". Agassiz referenced the name as early as 1832, but because specimens were not referenced they are not taxonomically recognized uses. Formal description of the species was published in an 1843 volume, where Agassiz revised the name to Carcharodon megalodon as its teeth were far too large for the former genus and more similar to those of the great white shark. 
He also erroneously identified several megalodon teeth as belonging to additional species eventually named Carcharodon rectidens, Carcharodon subauriculatus, Carcharodon productus, and Carcharodon polygurus. Because Carcharodon megalodon appeared first in the 1835 illustration, the remaining names are considered junior synonyms under the principle of priority. Evolution While the earliest megalodon remains have been reported from the Late Oligocene, around 28 million years ago (Mya), there is disagreement as to when it appeared, with dates ranging to as young as 16 mya. It has been thought that megalodon became extinct around the end of the Pliocene, about 2.6 Mya; claims of Pleistocene megalodon teeth, younger than 2.6 million years old, are considered unreliable. A 2019 assessment moves the extinction date back to earlier in the Pliocene, 3.6 Mya. Megalodon is considered to be a member of the family Otodontidae, genus Otodus, as opposed to its previous classification into Lamnidae, genus Carcharodon. Megalodon's classification into Carcharodon was due to dental similarity with the great white shark, but most authors believe that this is due to convergent evolution. In this model, the great white shark is more closely related to the extinct broad-toothed mako (Cosmopolitodus hastalis) than to megalodon, as evidenced by more similar dentition in those two sharks; megalodon teeth have much finer serrations than great white shark teeth. The great white shark is more closely related to the mako sharks (Isurus spp.), with a common ancestor around 4 Mya. Proponents of the former model, wherein megalodon and the great white shark are more closely related, argue that the differences between their dentition are minute and obscure. The genus Carcharocles contains four species: C. auriculatus, C. angustidens, C. chubutensis, and C. megalodon. The evolution of this lineage is characterized by the increase of serrations, the widening of the crown, the development of a more triangular shape, and the disappearance of the lateral cusps. The evolution in tooth morphology reflects a shift in predation tactics from a tearing-grasping bite to a cutting bite, likely reflecting a shift in prey choice from fish to cetaceans. Lateral cusplets were finally lost in a gradual process that took roughly 12 million years during the transition between C. chubutensis and C. megalodon. The genus was proposed by D. S. Jordan and H. Hannibal in 1923 to contain C. auriculatus. In the 1980s, megalodon was assigned to Carcharocles. Before this, in 1960, the genus Procarcharodon was erected by French ichthyologist Edgard Casier, which included those four sharks and was considered separate from the great white shark. It is since considered a junior synonym of Carcharocles. The genus Palaeocarcharodon was erected alongside Procarcharodon to represent the beginning of the lineage, and, in the model wherein megalodon and the great white shark are closely related, their last common ancestor. It is believed to be an evolutionary dead-end and unrelated to the Carcharocles sharks by authors who reject that model. Another model of the evolution of this genus, also proposed by Casier in 1960, is that the direct ancestor of the Carcharocles is the shark Otodus obliquus, which lived from the Paleocene through the Miocene epochs, 60 to 13 Mya. The genus Otodus is ultimately derived from Cretolamna, a shark from the Cretaceous period. In this model, O. obliquus evolved into O. aksuaticus, which evolved into C. auriculatus, and then into C. 
angustidens, and then into C. chubutensis, and then finally into C. megalodon. Another model of the evolution of Carcharocles, proposed in 2001 by paleontologist Michael Benton, is that the three other species are actually a single species of shark that gradually changed over time between the Paleocene and the Pliocene, making it a chronospecies. Some authors suggest that C. auriculatus, C. angustidens, and C. chubutensis should be classified as a single species in the genus Otodus, leaving C. megalodon the sole member of Carcharocles. The genus Carcharocles may be invalid, and the shark may actually belong in the genus Otodus, making it Otodus megalodon. A 1974 study on Paleogene sharks by Henri Cappetta erected the subgenus Megaselachus, classifying the shark as Otodus (Megaselachus) megalodon, along with O. (M.) chubutensis. A 2006 review of Chondrichthyes elevated Megaselachus to genus, and classified the sharks as Megaselachus megalodon and M. chubutensis. The discovery of fossils assigned to the genus Megalolamna in 2016 led to a re-evaluation of Otodus, which concluded that it is paraphyletic, that is, it consists of a last common ancestor but it does not include all of its descendants. The inclusion of the Carcharocles sharks in Otodus would make it monophyletic, with the sister clade being Megalolamna. The cladogram below represents the hypothetical relationships between megalodon and other sharks, including the great white shark. Modified from Shimada et al. (2016), Ehret et al., (2009), and the findings of Siversson et al. (2015). Biology Appearance One interpretation on how megalodon appeared was that it was a robust-looking shark, and may have had a similar build to the great white shark. The jaws may have been blunter and wider than the great white, and the fins would have also been similar in shape, though thicker due to its size. It may have had a pig-eyed appearance, in that it had small, deep-set eyes. Another interpretation is that megalodon bore a similarity to the whale shark (Rhincodon typus) or the basking shark (Cetorhinus maximus). The tail fin would have been crescent-shaped, the anal fin and second dorsal fin would have been small, and there would have been a caudal keel present on either side of the tail fin (on the caudal peduncle). This build is common in other large aquatic animals, such as whales, tuna, and other sharks, in order to reduce drag while swimming. The head shape can vary between species as most of the drag-reducing adaptations are toward the tail-end of the animal. One associated set of megalodon remains was found with placoid scales, which are in maximum width, and have broadly spaced keels. Size Due to fragmentary remains, there have been many contradictory size estimates for megalodon, as they can only be drawn from fossil teeth and vertebrae. The great white shark has been the basis of reconstruction and size estimation, as it is regarded as the best analogue to megalodon. Several total length estimation methods have been produced from comparing megalodon teeth and vertebrae to those of the great white. Megalodon size estimates vary depending on the method used, with maximum total length estimates ranging from . A 2015 study estimated the modal total body length at , calculated from 544 megalodon teeth, found throughout geological time and geography, including juveniles and adults ranging from in total length. In comparison, large great white sharks are generally around in length, with a few contentious reports suggesting larger sizes. 
The whale shark is the largest living fish, with one large female reported with a precaudal length of and an estimated total length of . It is possible that different populations of megalodon around the globe had different body sizes and behaviors due to different ecological pressures. Megalodon is thought to have been the largest macropredatory shark that ever lived. In his 2015 book, The Story of Life in 25 Fossils: Tales of Intrepid Fossil Hunters and the Wonders of Evolution, Donald Prothero proposed body mass estimates for individuals of different lengths by extrapolating from vertebral centra based on the dimensions of the great white, a methodology also used for the 2008 study which supports the maximum mass estimate. In 2020, Cooper and his colleagues reconstructed a 2D model of megalodon based on the dimensions of all the extant lamnid sharks and suggested that a long megalodon would have had a long head, tall gill slits, a tall dorsal fin, long pectoral fins, and a tall tail fin. In 2022, Cooper and his colleagues also reconstructed a 3D model with the same basis as the 2020 study, resulting in a body mass estimate of for a long megalodon (higher than the previous estimates); a vertebral column specimen named IRSNB P 9893 (formerly IRSNB 3121), belonging to a 46-year-old individual from Belgium, was used for extrapolation. An individual of this size would have required 98,175 kcal per day, 20 times more than an adult great white requires. Mature male megalodon may have had a body mass of , and mature females may have been , assuming that males could range in length from and females . A 2015 study linking shark size and typical swimming speed estimated that megalodon would have typically swum at –assuming that its body mass was typically –which is consistent with other aquatic creatures of its size, such as the fin whale (Balaenoptera physalus) which typically cruises at speeds of . In 2022, Cooper and his colleagues converted this calculation into relative cruising speed (body lengths per second), resulting in a mean absolute cruising speed of and a mean relative cruising speed of 0.09 body lengths per second for a long megalodon; the authors found their mean absolute cruising speed to be faster than any extant lamnid sharks and their mean relative cruising speed to be slower, consistent with previous estimates. Its large size may have been due to climatic factors and the abundance of large prey items, and it may have also been influenced by the evolution of regional endothermy (mesothermy) which would have increased its metabolic rate and swimming speed. The otodontid sharks have been considered to have been ectotherms, so on that basis megalodon would have been ectothermic. However, the largest contemporary ectothermic sharks, such as the whale shark, are filter feeders, while lamnids are regional endotherms, implying some metabolic correlations with a predatory lifestyle. These considerations, as well as tooth oxygen isotopic data and the need for higher burst swimming speeds in macropredators of endothermic prey than ectothermy would allow, imply that otodontids, including megalodon, were probably regional endotherms. In 2020, Shimada and colleagues suggested large size was instead due to intrauterine cannibalism, where the larger fetus eats the smaller fetus, resulting in progressively larger and larger fetuses, requiring the mother to attain an even greater size and greater caloric intake, which would have promoted endothermy. 
Males would have needed to keep up with female size in order to still effectively copulate (which probably involved latching onto the female with claspers, like modern cartilaginous fish). Maximum estimates The first attempt to reconstruct the jaw of megalodon was made by Bashford Dean in 1909, displayed at the American Museum of Natural History. From the dimensions of this jaw reconstruction, it was hypothesized that megalodon could have approached in length. Dean had overestimated the size of the cartilage on both jaws, causing it to be too tall. In 1973, John E. Randall, an ichthyologist, used the enamel height (the vertical distance of the blade from the base of the enamel portion of the tooth to its tip) to measure the length of the shark, yielding a maximum length of about . However, tooth enamel height does not necessarily increase in proportion to the animal's total length. In 1994, marine biologists Patrick J. Schembri and Stephen Papson opined that O. megalodon may have approached a maximum of around in total length. In 1996, shark researchers Michael D. Gottfried, Leonard Compagno, and S. Curtis Bowman proposed a linear relationship between the great white shark's total length and the height of the largest upper anterior tooth. The proposed relationship is: total length in meters = (0.096) × [UA maximum height (mm)] − (0.22). Using this tooth height regression equation, the authors estimated a total length of based on a tooth tall, which the authors considered a conservative maximum estimate. They also compared the ratio between the tooth height and total length of large female great whites to the largest megalodon tooth. A long female great white, which the authors considered the largest 'reasonably trustworthy' total length, produced an estimate of . However, based on the largest female great white reported, at , they estimated a maximum estimate of . In 2002, shark researcher Clifford Jeremiah proposed that total length was proportional to the root width of an upper anterior tooth. He claimed that for every of root width, there are approximately of shark length. Jeremiah pointed out that the jaw perimeter of a shark is directly proportional to its total length, with the width of the roots of the largest teeth being a tool for estimating jaw perimeter. The largest tooth in Jeremiah's possession had a root width of about , which yielded in total length. In 2002, paleontologist Kenshu Shimada of DePaul University proposed a linear relationship between tooth crown height and total length after conducting anatomical analysis of several specimens, allowing any sized tooth to be used. Shimada stated that the previously proposed methods were based on a less-reliable evaluation of the dental homology between megalodon and the great white shark, and that the growth rate between the crown and root is not isometric, which he considered in his model. Using this model, the upper anterior tooth possessed by Gottfried and colleagues corresponded to a total length of . Among several specimens found in the Gatún Formation of Panama, one upper lateral tooth was used by other researchers to obtain a total length estimate of using this method. In 2019, Shimada revisited the size of megalodon and discouraged using non-anterior teeth for estimations, noting that the exact position of isolated non-anterior teeth is difficult to identify. Shimada provided maximum total length estimates using the largest anterior teeth available in museums. 
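To illustrate how a regression of this kind converts a single tooth measurement into a body-length figure, consider the Gottfried and colleagues relationship applied to a hypothetical upper anterior tooth 150 mm tall (an assumed value, not one of the specimens discussed here):

\[
\text{TL} \approx 0.096 \times 150\ \text{mm} - 0.22 \approx 14.2\ \text{m},
\]

so each additional millimetre of tooth height adds roughly 0.1 m to the estimated total length.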
The tooth with the tallest crown height known to Shimada, NSM PV-19896, produced a total length estimate of . The tooth with the tallest total height, FMNH PF 11306, was reported at . However, Shimada remeasured the tooth and found it actually to measure . Using the total height tooth regression equation proposed by Gottfried and colleagues produced an estimate of . In 2021, Victor J. Perez, Ronny M. Leder, and Teddy Badaut proposed a method of estimating total length of megalodon from the sum of the tooth crown widths. Using more complete megalodon dentitions, they reconstructed the dental formula and then made comparisons to living sharks. The researchers noted that the 2002 Shimada crown height equations produce wildly varying results for different teeth belonging to the same shark (range of error of ± ), casting doubt on some of the conclusions of previous studies using that method. Using the largest tooth available to the authors, GHC 6, with a crown width of , they estimated a maximum body length of approximately , with a range of error of approximately ± . This maximum length estimate was also supported by Cooper and his colleagues in 2022. There are anecdotal reports of teeth larger than those found in museum collections. Gordon Hubbell from Gainesville, Florida, possesses an upper anterior megalodon tooth whose maximum height is , one of the largest known tooth specimens from the shark. In addition, a megalodon jaw reconstruction developed by fossil hunter Vito Bertucci contains a tooth whose maximum height is reportedly over . Teeth and bite force The most common fossils of megalodon are its teeth. Diagnostic characteristics include a triangular shape, robust structure, large size, fine serrations, a lack of lateral denticles, and a visible V-shaped neck (where the root meets the crown). The tooth met the jaw at a steep angle, similar to the great white shark. The tooth was anchored by connective tissue fibers, and the roughness of the base may have added to mechanical strength. The lingual side of the tooth, the part facing the tongue, was convex; and the labial side, the other side of the tooth, was slightly convex or flat. The anterior teeth were almost perpendicular to the jaw and symmetrical, whereas the posterior teeth were slanted and asymmetrical. Megalodon teeth can measure over in slant height (diagonal length) and are the largest of any known shark species, implying it was the largest of all macropredatory sharks. In 1989, a nearly complete set of megalodon teeth was discovered in Saitama, Japan. Another nearly complete associated megalodon dentition was excavated from the Yorktown Formations in the United States, and served as the basis of a jaw reconstruction of megalodon at the National Museum of Natural History (USNM). Based on these discoveries, an artificial dental formula was put together for megalodon in 1996. The dental formula of megalodon is: . As evident from the formula, megalodon had four kinds of teeth in its jaws: anterior, intermediate, lateral, and posterior. Megalodon's intermediate tooth technically appears to be an upper anterior and is termed as "A3" because it is fairly symmetrical and does not point mesially (side of the tooth toward the midline of the jaws where the left and right jaws meet). Megalodon had a very robust dentition, and had over 250 teeth in its jaws, spanning 5 rows. It is possible that large megalodon individuals had jaws spanning roughly across. 
The teeth were also serrated, which would have improved efficiency in cutting through flesh or bone. The shark may have been able to open its mouth to a 75° angle, though a reconstruction at the USNM approximates a 100° angle. In 2008, a team of scientists led by S. Wroe conducted an experiment to determine the bite force of the great white shark, using a long specimen, and then isometrically scaled the results for its maximum size and the conservative minimum and maximum body mass of megalodon. They placed the bite force of the latter between in a posterior bite, compared to the bite force for the largest confirmed great white shark, and for the placoderm fish Dunkleosteus. In addition, Wroe and colleagues pointed out that sharks shake sideways while feeding, amplifying the force generated, which would probably have caused the total force experienced by prey to be higher than the estimate. In 2021, Antonio Ballell and Humberto Ferrón used Finite Element Analysis modeling to examine the stress distribution of three types of megalodon teeth and closely related mega-toothed species when exposed to anterior and lateral forces, the latter of which would be generated when a shark shakes its head to tear through flesh. The resulting simulations identified higher levels of stress in megalodon teeth under lateral force loads compared to its precursor species such as O. obliquus and O. angustidens when tooth size was removed as a factor. This suggests that megalodon teeth were of a different functional significance than previously expected, challenging prior interpretations that megalodon's dental morphology was primarily driven by a dietary shift towards marine mammals. Instead, the authors proposed that it was a byproduct of an increase in body size caused by heterochronic selection. Internal anatomy Megalodon is represented in the fossil record by teeth, vertebral centra, and coprolites. As with all sharks, the skeleton of megalodon was formed of cartilage rather than bone; consequently most fossil specimens are poorly preserved. To support its large dentition, the jaws of megalodon would have been more massive, stouter, and more strongly developed than those of the great white, which possesses a comparatively gracile dentition. Its chondrocranium, the cartilaginous skull, would have had a blockier and more robust appearance than that of the great white. Its fins were proportional to its larger size. Some fossil vertebrae have been found. The most notable example is a partially preserved vertebral column of a single specimen, excavated in the Antwerp Basin, Belgium, in 1926. It comprises 150 vertebral centra, with the centra ranging from to in diameter. The shark's vertebrae may have gotten much bigger, and scrutiny of the specimen revealed that it had a higher vertebral count than specimens of any known shark, possibly over 200 centra; only the great white approached it. Another partially preserved vertebral column of a megalodon was excavated from the Gram Formation in Denmark in 1983, which comprises 20 vertebral centra, with the centra ranging from to in diameter. The coprolite remains of megalodon are spiral-shaped, indicating that the shark may have had a spiral valve, a corkscrew-shaped portion of the lower intestines, similar to extant lamniform sharks. Miocene coprolite remains were discovered in Beaufort County, South Carolina, with one measuring . 
Gottfried and colleagues reconstructed the entire skeleton of megalodon, which was later put on display at the Calvert Marine Museum in the United States and the Iziko South African Museum. This reconstruction is long and represents a mature male, based on the ontogenetic changes a great white shark experiences over the course of its life. Paleobiology Prey relationships Though sharks are generally opportunistic feeders, megalodon's great size, high-speed swimming capability, and powerful jaws, coupled with an impressive feeding apparatus, made it an apex predator capable of consuming a broad spectrum of animals. Otodus megalodon was probably one of the most powerful predators to have existed. A study focusing on calcium isotopes of extinct and extant elasmobranch sharks and rays revealed that megalodon fed at a higher trophic level than the contemporaneous great white shark ("higher up" in the food chain). Fossil evidence indicates that megalodon preyed upon many cetacean species, such as dolphins, small whales, cetotheres, squalodontids (shark toothed dolphins), sperm whales, bowhead whales, and rorquals. In addition to this, they also targeted seals, sirenians, and sea turtles. The shark was an opportunist and piscivorous, and it would have also gone after smaller fish and other sharks. Many whale bones have been found with deep gashes most likely made by their teeth. Various excavations have revealed megalodon teeth lying close to the chewed remains of whales, and sometimes in direct association with them. The feeding ecology of megalodon appears to have varied with age and between sites, like the modern great white shark. It is plausible that the adult megalodon population off the coast of Peru targeted primarily cetothere whales in length and other prey smaller than itself, rather than large whales in the same size class as themselves. Meanwhile, juveniles likely had a diet that consisted more of fish. Feeding strategies Sharks often employ complex hunting strategies to engage large prey animals. Great white shark hunting strategies may be similar to how megalodon hunted its large prey. Megalodon bite marks on whale fossils suggest that it employed different hunting strategies against large prey than the great white shark. One particular specimen–the remains of a long undescribed Miocene baleen whale–provided the first opportunity to quantitatively analyze its attack behavior. Unlike great whites which target the underbelly of their prey, megalodon probably targeted the heart and lungs, with their thick teeth adapted for biting through tough bone, as indicated by bite marks inflicted to the rib cage and other tough bony areas on whale remains. Furthermore, attack patterns could differ for prey of different sizes. Fossil remains of some small cetaceans, for example cetotheres, suggest that they were rammed with great force from below before being killed and eaten, based on compression fractures. There is also evidence that a possible separate hunting strategy existed for attacking raptorial sperm whales; a tooth belonging to an undetermined physeteroid closely resembling those of Acrophyseter discovered in the Nutrien Aurora Phosphate Mine in North Carolina suggests that a megalodon or O. chubutensis may have aimed for the head of the sperm whale in order to inflict a fatal bite, the resulting attack leaving distinctive bite marks on the tooth. 
While scavenging behavior cannot be ruled out as a possibility, the placement of the bite marks is more consistent with predatory attacks than with scavenging, as the jaw is not a particularly nutritious area for a shark to feed on or focus on. The fact that the bite marks were found on the tooth's roots further suggests that the shark broke the whale's jaw during the bite, implying the bite was extremely powerful. The fossil is also notable as the first known instance of an antagonistic interaction between a sperm whale and an otodontid shark recorded in the fossil record. During the Pliocene, larger cetaceans appeared. Megalodon apparently further refined its hunting strategies to cope with these large whales. Numerous fossilized flipper bones and tail vertebrae of large whales from the Pliocene have been found with megalodon bite marks, which suggests that megalodon would immobilize a large whale before killing and feeding on it. Growth and reproduction In 2010, Ehret estimated that megalodon had a fast growth rate, nearly two times that of the extant great white shark. He also estimated that the slowing or cessation of somatic growth in megalodon occurred around 25 years of age, suggesting that this species had an extremely delayed sexual maturity. In 2021, Shimada and colleagues calculated the growth rate of an approximately individual based on the Belgian vertebral column specimen, which presumably contains annual growth rings on three of its vertebrae. They estimated that the individual died at 46 years of age, with a growth rate of per year and a length of at birth. For a individual, which they considered to have been the maximum size attainable, this would equate to a lifespan of 88 to 100 years. However, Cooper and his colleagues in 2022 estimated the length of this 46-year-old individual at nearly , based on a 3D reconstruction in which the complete vertebral column measured long; the researchers attributed the difference to Shimada and colleagues having extrapolated the animal's size from the vertebral centra alone. Megalodon, like contemporaneous sharks, made use of nursery areas to give birth to its young, specifically warm-water coastal environments with large amounts of food and protection from predators. Nursery sites were identified in the Gatún Formation of Panama, the Calvert Formation of Maryland, Banco de Concepción in the Canary Islands, and the Bone Valley Formation of Florida. Given that all extant lamniform sharks give birth to live young, this is believed to have been true of megalodon also. Infant megalodons were around at their smallest, and the pups were vulnerable to predation by other shark species, such as the great hammerhead shark (Sphyrna mokarran) and the snaggletooth shark (Hemipristis serra). Their dietary preferences display an ontogenetic shift: young megalodon commonly preyed on fish, sea turtles, dugongs, and small cetaceans; mature megalodon moved to offshore areas and consumed large cetaceans. An exceptional case in the fossil record suggests that juvenile megalodon may have occasionally attacked much larger balaenopterid whales. Three tooth marks apparently from a long Pliocene shark were found on a rib from an ancestral blue or humpback whale that showed evidence of subsequent healing, which is suspected to have been inflicted by a juvenile megalodon. 
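The growth-ring reasoning above amounts to a simple linear growth model: a length at birth plus a constant annual increment. The sketch below uses hypothetical placeholder values rather than the figures reported by Shimada and colleagues or by Cooper and colleagues.

```python
# Minimal linear-growth sketch: with a birth length and a constant annual
# growth increment, the age needed to reach a given length follows directly.
# All values are hypothetical placeholders.

def age_to_reach_length(target_length_m, birth_length_m, growth_m_per_year):
    """Years required to reach target_length_m under constant annual growth."""
    return (target_length_m - birth_length_m) / growth_m_per_year

birth_length = 2.0     # m, placeholder
annual_growth = 0.16   # m per year, placeholder
print(age_to_reach_length(9.2, birth_length, annual_growth))    # a mid-sized adult
print(age_to_reach_length(16.0, birth_length, annual_growth))   # a hypothetical maximum size
```

This is why revising the estimated body length of the aged Belgian individual, as Cooper and colleagues did, also shifts the implied lifespan for a maximum-sized animal.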
Paleoecology Range and habitat Megalodon had a cosmopolitan distribution; its fossils have been excavated from many parts of the world, including Europe, Africa, the Americas, and Australia. It most commonly occurred in subtropical to temperate latitudes. It has been found at latitudes up to 55° N; its inferred tolerated temperature range was . It arguably had the capacity to endure such low temperatures due to mesothermy, the physiological capability of large sharks to maintain a higher body temperature than the surrounding water by conserving metabolic heat. Megalodon inhabited a wide range of marine environments (i.e., shallow coastal waters, areas of coastal upwelling, swampy coastal lagoons, sandy littorals, and offshore deep water environments), and exhibited a transient lifestyle. Adult megalodon were not abundant in shallow water environments, and mostly inhabited offshore areas. Megalodon may have moved between coastal and oceanic waters, particularly in different stages of its life cycle. Fossil remains show a trend for specimens to be larger on average in the Southern Hemisphere than in the Northern, with mean lengths of , respectively; and also larger in the Pacific than the Atlantic, with mean lengths of respectively. They do not suggest any trend of changing body size with absolute latitude, or of change in size over time (although the Carcharocles lineage in general is thought to display a trend of increasing size over time). The overall modal length has been estimated at , with the length distribution skewed towards larger individuals, suggesting an ecological or competitive advantage for larger body size. Locations of fossils Megalodon had a global distribution and fossils of the shark have been found in many places around the world, bordering all oceans of the Neogene. Competition Megalodon faced a highly competitive environment. Its position at the top of the food chain probably had a significant impact on the structuring of marine communities. Fossil evidence indicates a correlation between megalodon and the emergence and diversification of cetaceans and other marine mammals. Juvenile megalodon preferred habitats where small cetaceans were abundant, and adult megalodon preferred habitats where large cetaceans were abundant. Such preferences may have developed shortly after they appeared in the Oligocene. Megalodon were contemporaneous with whale-eating toothed whales (particularly macroraptorial sperm whales and squalodontidae), which were also probably among the era's apex predators, and provided competition. Some attained gigantic sizes, such as Livyatan, estimated between . Fossilized teeth of an undetermined species of such physeteroids from Lee Creek Mine, North Carolina, indicate it had a maximum body length of and a maximum lifespan of about 25 years. This is very different from similarly sized modern killer whales that live to 65 years, suggesting that unlike the latter, which are apex predators, these physeteroids were subject to predation from larger species such as megalodon or Livyatan. By the Late Miocene, around 11 Mya, macroraptorials experienced a significant decline in abundance and diversity. Other species may have filled this niche in the Pliocene, such as the fossil killer whale Orcinus citoniensis which may have been a pack predator and targeted prey larger than itself, but this inference is disputed, and it was probably a generalist predator rather than a marine mammal specialist. 
Megalodon may have subjected contemporaneous white sharks to competitive exclusion, as the fossil records indicate that other shark species avoided regions it inhabited by mainly keeping to the colder waters of the time. In areas where their ranges seemed to have overlapped, such as in Pliocene Baja California, it is possible that megalodon and the great white shark occupied the area at different times of the year while following different migratory prey. Megalodon probably also had a tendency for cannibalism, much like contemporary sharks. Extinction Climate change The Earth experienced a number of changes during the time period megalodon existed which affected marine life. A cooling trend starting in the Oligocene 35 Mya ultimately led to glaciation at the poles. Geological events changed currents and precipitation; among these were the closure of the Central American Seaway and changes in the Tethys Ocean, contributing to the cooling of the oceans. The stalling of the Gulf Stream prevented nutrient-rich water from reaching major marine ecosystems, which may have negatively affected its food sources. The largest fluctuation of sea levels in the Cenozoic era occurred in the Plio-Pleistocene, between around 5 million to 12 thousand years ago, due to the expansion of glaciers at the poles, which negatively impacted coastal environments, and may have contributed to its extinction along with those of several other marine megafaunal species. These oceanographic changes, in particular the sea level drops, may have restricted many of the suitable shallow warm-water nursery sites for megalodon, hindering reproduction. Nursery areas are pivotal for the survival of many shark species, in part because they protect juveniles from predation. As its range did not apparently extend into colder waters, megalodon may not have been able to retain a significant amount of metabolic heat, so its range was restricted to shrinking warmer waters. Fossil evidence confirms the absence of megalodon in regions around the world where water temperatures had significantly declined during the Pliocene. However, an analysis of the distribution of megalodon over time suggests that temperature change did not play a direct role in its extinction. Its distribution during the Miocene and Pliocene did not correlate with warming and cooling trends; while abundance and distribution declined during the Pliocene, megalodon did show a capacity to inhabit colder latitudes. It was found in locations with a mean temperature ranging from , with a total range of , indicating that the global extent of suitable habitat should not have been greatly affected by the temperature changes that occurred. This is consistent with evidence that it was a mesotherm. Changing ecosystem Marine mammals attained their greatest diversity during the Miocene, such as with baleen whales with over 20 recognized Miocene genera in comparison to only six extant genera. Such diversity presented an ideal setting to support a super-predator such as megalodon. By the end of the Miocene, many species of mysticetes had gone extinct; surviving species may have been faster swimmers and thus more elusive prey. Furthermore, after the closure of the Central American Seaway, tropical whales decreased in diversity and abundance. The extinction of megalodon correlates with the decline of many small mysticete lineages, and it is possible that it was quite dependent on them as a food source. 
Additionally, a marine megafauna extinction during the Pliocene was found to have eliminated 36% of all large marine species, including 55% of marine mammals, 35% of seabirds, 9% of sharks, and 43% of sea turtles. The extinction was selective for endotherms and mesotherms relative to poikilotherms, implying causation by a decreased food supply and thus consistent with megalodon being mesothermic. Megalodon may have been too large to sustain itself on the declining marine food resources. The cooling of the oceans during the Pliocene might have restricted the access of megalodon to the polar regions, depriving it of the large whales which had migrated there. Competition from large odontocetes, such as macropredatory sperm whales which appeared in the Miocene and a member of the genus Orcinus (i.e., Orcinus citoniensis) in the Pliocene, is assumed to have contributed to the decline and extinction of megalodon. This assumption is disputed, however: the Orcininae emerged in the Mid-Pliocene, with O. citoniensis reported from the Pliocene of Italy and similar forms reported from the Pliocene of England and South Africa, indicating the capacity of these dolphins to cope with the increasingly prevalent cold water temperatures of high latitudes. Some studies assumed these dolphins to have been macrophagous, but closer inspection indicates that they were not, feeding instead on small fishes. On the other hand, gigantic macropredatory sperm whales such as Livyatan-like forms were last reported from Australia and South Africa circa 5 million years ago. Others, such as Hoplocetus and Scaldicetus, also occupied a niche similar to that of modern killer whales, but the last of these forms disappeared during the Pliocene. Members of the genus Orcinus became large and macrophagous in the Pleistocene. Paleontologist Robert Boessenecker and his colleagues re-examined the fossil record of megalodon for dating errors and concluded that it disappeared circa 3.5 million years ago. Boessenecker and his colleagues further suggest that megalodon suffered range fragmentation due to climatic shifts, and that competition with white sharks might have contributed to its decline and extinction. Competition with white sharks is assumed to be a factor in other studies as well, but this hypothesis warrants further testing. Multiple compounding environmental and ecological factors, including climate change and thermal limitations, the collapse of prey populations, and resource competition with white sharks, are believed to have contributed to the decline and extinction of megalodon. The extinction of megalodon set the stage for further changes in marine communities. The average body size of baleen whales increased significantly after its disappearance, although possibly due to other, climate-related causes. Conversely, the increase in baleen whale size may have contributed to the extinction of megalodon, as it may have preferred to go after smaller whales; bite marks on large whale species may have come from scavenging sharks. Megalodon may have simply become coextinct with smaller whale species, such as Piscobalaena nana. The extinction of megalodon had a positive impact on other apex predators of the time, such as the great white shark, which in some cases spread to regions where megalodon became absent. 
Reports of supposedly fresh megalodon teeth, such as those found by in 1873 which were dated in 1959 by the zoologist Wladimir Tschernezky to be around 11,000 to 24,000 years old, helped popularise claims of recent megalodon survival amongst cryptozoologists. These claims have been discredited, and are probably teeth that were well-preserved by a thick mineral-crust precipitate of manganese dioxide, and so had a lower decomposition rate and retained a white color during fossilization. Fossil megalodon teeth can vary in color from off-white to dark browns, greys, and blues, and some fossil teeth may have been redeposited into a younger stratum. The claims that megalodon could remain elusive in the depths, similar to the megamouth shark which was discovered in 1976, are unlikely as the shark lived in warm coastal waters and probably could not survive in the cold and nutrient-poor deep sea environment. Alleged sightings of the megalodon have been noted to be likely hoaxes or misidentifications of the whale shark, which shared many visual characteristics with megalodon sightings. Contemporary fiction about megalodon surviving into modern times was pioneered by the 1997 novel Meg: A Novel of Deep Terror by Steve Alten and its subsequent sequels. Megalodon subsequently began to feature in films, such as the 2002 direct to video Shark Attack 3: Megalodon, and later The Meg, a 2018 film based on the 1997 book which grossed over $500 million at the box office. Animal Planet's pseudo-documentary Mermaids: The Body Found included an encounter 1.6 mya between a pod of mermaids and a megalodon. Later, in August 2013, the Discovery Channel opened its annual Shark Week series with another film for television, Megalodon: The Monster Shark Lives, a controversial docufiction about the creature that presented alleged evidence in order to suggest that megalodons still lived. This program received criticism for being completely fictional and for inadequately disclosing its fictional nature; for example, all of the supposed scientists depicted were paid actors, and there was no disclosure in the documentary itself that it was fictional. In a poll by Discovery, 73% of the viewers of the documentary thought that megalodon was not extinct. In 2014, Discovery re-aired The Monster Shark Lives, along with a new one-hour program, Megalodon: The New Evidence, and an additional fictionalized program entitled Shark of Darkness: Wrath of Submarine, resulting in further backlash from media sources and the scientific community. Despite the criticism from scientists, Megalodon: The Monster Shark Lives was a huge ratings success, gaining 4.8 million viewers, the most for any Shark Week episode up to that point. Megalodon teeth are the state fossil of North Carolina.
Biology and health sciences
Prehistoric chondrichthyans
Animals
529418
https://en.wikipedia.org/wiki/D-brane
D-brane
In string theory, D-branes, short for Dirichlet membrane, are a class of extended objects upon which open strings can end with Dirichlet boundary conditions, after which they are named. D-branes are typically classified by their spatial dimension, which is indicated by a number written after the D. A D0-brane is a single point, a D1-brane is a line (sometimes called a "D-string"), a D2-brane is a plane, and a D25-brane fills the highest-dimensional space considered in bosonic string theory. There are also instantonic D(−1)-branes, which are localized in both space and time. Discovery D-branes were discovered by Jin Dai, Leigh, and Polchinski, and independently by Hořava, in 1989. In 1995, Polchinski identified D-branes with black p-brane solutions of supergravity, a discovery that triggered the Second Superstring Revolution and led to both holographic and M-theory dualities. Theoretical background The equations of motion of string theory require that the endpoints of an open string (a string with endpoints) satisfy one of two types of boundary conditions: The Neumann boundary condition, corresponding to free endpoints moving through spacetime at the speed of light, or the Dirichlet boundary conditions, which pin the string endpoint. Each coordinate of the string must satisfy one or the other of these conditions. There can also exist strings with mixed boundary conditions, where the two endpoints satisfy NN, DD, ND and DN boundary conditions. If p spatial dimensions satisfy the Neumann boundary condition, then the string endpoint is confined to move within a p-dimensional hyperplane. This hyperplane provides one description of a Dp-brane. Although rigid in the limit of zero coupling, the spectrum of open strings ending on a D-brane contains modes associated with its fluctuations, implying that D-branes are dynamical objects. When D-branes are nearly coincident, the spectrum of strings stretching between them becomes very rich. One set of modes produce a non-abelian gauge theory on the world-volume. Another set of modes is an dimensional matrix for each transverse dimension of the brane. If these matrices commute, they may be diagonalized, and the eigenvalues define the position of the D-branes in space. More generally, the branes are described by non-commutative geometry, which allows exotic behavior such as the Myers effect, in which a collection of Dp-branes expand into a D(p+2)-brane. Tachyon condensation is a central concept in this field. Ashoke Sen has argued that in Type IIB string theory, tachyon condensation allows (in the absence of Neveu-Schwarz 3-form flux) an arbitrary D-brane configuration to be obtained from a stack of D9 and anti D9-branes. Edward Witten has shown that such configurations will be classified by the K-theory of the spacetime. Tachyon condensation is still very poorly understood. This is due to the lack of an exact string field theory that would describe the off-shell evolution of the tachyon. Braneworld cosmology This has implications for physical cosmology. Because string theory implies that the Universe has more dimensions than we expect—26 for bosonic string theories and 10 for superstring theories—we have to find a reason why the extra dimensions are not apparent. One possibility would be that the visible Universe is in fact a very large D-brane extending over three spatial dimensions. Material objects, made of open strings, are bound to the D-brane, and cannot move "at right angles to reality" to explore the Universe outside the brane. 
This scenario is called a brane cosmology. The force of gravity is not due to open strings; the gravitons which carry gravitational forces are vibrational states of closed strings. Because closed strings do not have to be attached to D-branes, gravitational effects could depend upon the extra dimensions orthogonal to the brane. D-brane scattering When two D-branes approach each other the interaction is captured by the one loop annulus amplitude of strings between the two branes. The scenario of two parallel branes approaching each other at a constant velocity can be mapped to the problem of two stationary branes that are rotated relative to each other by some angle. The annulus amplitude yields singularities that correspond to the on-shell production of open strings stretched between the two branes. This is true irrespective of the charge of the D-branes. At non-relativistic scattering velocities the open strings may be described by a low-energy effective action that contains two complex scalar fields that are coupled via a term . Thus, as the field (separation of the branes) changes, the mass of the field changes. This induces open string production and as a result the two scattering branes will be trapped. Gauge theories The arrangement of D-branes constricts the types of string states which can exist in a system. For example, if we have two parallel D2-branes, we can easily imagine strings stretching from brane 1 to brane 2 or vice versa. (In most theories, strings are oriented objects: each one carries an "arrow" defining a direction along its length.) The open strings permissible in this situation then fall into two categories, or "sectors": those originating on brane 1 and terminating on brane 2, and those originating on brane 2 and terminating on brane 1. Symbolically, we say we have the [1 2] and the [2 1] sectors. In addition, a string may begin and end on the same brane, giving [1 1] and [2 2] sectors. (The numbers inside the brackets are called Chan–Paton indices, but they are really just labels identifying the branes.) A string in either the [1 2] or the [2 1] sector has a minimum length: it cannot be shorter than the separation between the branes. All strings have some tension, against which one must pull to lengthen the object; this pull does work on the string, adding to its energy. Because string theories are by nature relativistic, adding energy to a string is equivalent to adding mass, by Einstein's relation E = mc2. Therefore, the separation between D-branes controls the minimum mass open strings may have. Furthermore, affixing a string's endpoint to a brane influences the way the string can move and vibrate. Because particle states "emerge" from the string theory as the different vibrational states the string can experience, the arrangement of D-branes controls the types of particles present in the theory. The simplest case is the [1 1] sector for a Dp-brane, that is to say the strings which begin and end on any particular D-brane of p dimensions. Examining the consequences of the Nambu–Goto action (and applying the rules of quantum mechanics to quantize the string), one finds that among the spectrum of particles is one resembling the photon, the fundamental quantum of the electromagnetic field. The resemblance is precise: a p-dimensional version of the electromagnetic field, obeying a p-dimensional analogue of Maxwell's equations, exists on every Dp-brane. 
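The Chan–Paton bookkeeping introduced above can be made concrete with a small enumeration: for N branes, oriented open strings fall into N × N sectors [i j], and two strings can join endpoints only when the first ends on the brane where the second begins. The sketch below is purely illustrative.

```python
# Enumerate oriented open-string sectors for N D-branes and check which
# pairs of strings can join endpoints, mirroring the Chan-Paton labelling.

from itertools import product

def chan_paton_sectors(n_branes):
    """Return all oriented sectors [i j] for n_branes D-branes."""
    return list(product(range(1, n_branes + 1), repeat=2))

def can_join(first, second):
    """Two oriented strings can join when the first ends on the brane
    where the second begins, e.g. a [1 2] string with a [2 3] string."""
    return first[1] == second[0]

sectors = chan_paton_sectors(3)
print(len(sectors))               # 9 == 3**2 sectors for three branes
print(can_join((1, 2), (2, 3)))   # True
print(can_join((1, 2), (3, 4)))   # False
```

The N-squared count of sectors is the counting behind the U(N) gauge fields discussed in this section.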
In this sense, then, one can say that string theory "predicts" electromagnetism: D-branes are a necessary part of the theory if we permit open strings to exist, and all D-branes carry an electromagnetic field on their volume. Other particle states originate from strings beginning and ending on the same D-brane. Some correspond to massless particles like the photon; also in this group are a set of massless scalar particles. If a Dp-brane is embedded in a spacetime of d spatial dimensions, the brane carries (in addition to its Maxwell field) a set of d − p massless scalars (particles which do not have polarizations like the photons making up light). Intriguingly, there are just as many massless scalars as there are directions perpendicular to the brane; the geometry of the brane arrangement is closely related to the quantum field theory of the particles existing on it. In fact, these massless scalars are Goldstone excitations of the brane, corresponding to the different ways the symmetry of empty space can be broken. Placing a D-brane in a universe breaks the symmetry among locations, because it defines a particular place, assigning a special meaning to a particular location along each of the d − p directions perpendicular to the brane. The quantum version of Maxwell's electromagnetism is only one kind of gauge theory, a U(1) gauge theory where the gauge group is made of unitary matrices of order 1. D-branes can be used to generate gauge theories of higher order, in the following way: Consider a group of N separate Dp-branes, arranged in parallel for simplicity. The branes are labeled 1,2,...,N for convenience. Open strings in this system exist in one of many sectors: the strings beginning and ending on some brane i give that brane a Maxwell field and some massless scalar fields on its volume. The strings stretching from brane i to another brane j have more intriguing properties. For starters, it is worthwhile to ask which sectors of strings can interact with one another. One straightforward mechanism for a string interaction is for two strings to join endpoints (or, conversely, for one string to "split down the middle" and make two "daughter" strings). Since endpoints are restricted to lie on D-branes, it is evident that a [1 2] string may interact with a [2 3] string, but not with a [3 4] or a [4 17] one. The masses of these strings will be influenced by the separation between the branes, as discussed above, so for simplicity's sake, we can imagine the branes squeezed closer and closer together until they lie atop one another. If we regard two overlapping branes as distinct objects, then we still have all the sectors we had before, but without the effects due to the brane separations. The zero-mass states in the open-string particle spectrum for a system of N coincident D-branes yields a set of interacting quantum fields which is exactly a U(N) gauge theory. (The string theory does contain other interactions, but they are only detectable at very high energies.) Gauge theories were not invented starting with bosonic or fermionic strings; they originated from a different area of physics, and have become quite useful in their own right. If nothing else, the relation between D-brane geometry and gauge theory offers a useful pedagogical tool for explaining gauge interactions, even if string theory fails to be the "theory of everything". Black holes Another important use of D-branes has been in the study of black holes. 
Since the 1970s, scientists have debated the problem of black holes having entropy. Consider, as a thought experiment, dropping an amount of hot gas into a black hole. Since the gas cannot escape from the hole's gravitational pull, its entropy would seem to have vanished from the universe. In order to maintain the second law of thermodynamics, one must postulate that the black hole gained whatever entropy the infalling gas originally had. Attempting to apply quantum mechanics to the study of black holes, Stephen Hawking discovered that a hole should emit energy with the characteristic spectrum of thermal radiation. The characteristic temperature of this Hawking radiation is given by where is the Newtonian constant of gravitation, is the black hole's mass and is the Boltzmann constant. Using this expression for the Hawking temperature, and assuming that a zero-mass black hole has zero entropy, one can use thermodynamic arguments to derive the "Bekenstein entropy": The Bekenstein entropy is proportional to the black hole mass squared; because the Schwarzschild radius is proportional to the mass, the Bekenstein entropy is proportional to the black hole's surface area. In fact, where is the Planck length. The concept of black hole entropy poses some interesting conundra. In an ordinary situation, a system has entropy when a large number of different "microstates" can satisfy the same macroscopic condition. For example, given a box full of gas, many different arrangements of the gas atoms can have the same total energy. However, a black hole was believed to be a featureless object (in John Wheeler's catchphrase, "Black holes have no hair"). What, then, are the "degrees of freedom" which can give rise to black hole entropy? String theorists have constructed models in which a black hole is a very long (and hence very massive) string. This model gives rough agreement with the expected entropy of a Schwarzschild black hole, but an exact proof has yet to be found one way or the other. The chief difficulty is that it is relatively easy to count the degrees of freedom quantum strings possess if they do not interact with one another. This is analogous to the ideal gas studied in introductory thermodynamics: the easiest situation to model is when the gas atoms do not have interactions among themselves. Developing the kinetic theory of gases in the case where the gas atoms or molecules experience inter-particle forces (like the van der Waals force) is more difficult. However, a world without interactions is an uninteresting place: most significantly for the black hole problem, gravity is an interaction, and so if the "string coupling" is turned off, no black hole could ever arise. Therefore, calculating black hole entropy requires working in a regime where string interactions exist. Extending the simpler case of non-interacting strings to the regime where a black hole could exist requires supersymmetry. In certain cases, the entropy calculation done for zero string coupling remains valid when the strings interact. The challenge for a string theorist is to devise a situation in which a black hole can exist which does not "break" supersymmetry. In recent years, this has been done by building black holes out of D-branes. Calculating the entropies of these hypothetical holes gives results which agree with the expected Bekenstein entropy. Unfortunately, the cases studied so far all involve higher-dimensional spaces – D5-branes in nine-dimensional space, for example. 
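The quantities referred to above take the standard forms T_H = ħc³/(8πGMk_B) for the Hawking temperature and S = k_B A/(4 l_P²) for the Bekenstein entropy, with A the horizon area and l_P the Planck length. The following sketch evaluates them numerically for a solar-mass Schwarzschild black hole, using rounded physical constants.

```python
# Worked numerical example of the standard Hawking-temperature and
# Bekenstein-Hawking entropy expressions:
#   T_H = hbar * c**3 / (8 * pi * G * M * k_B)
#   S   = k_B * A / (4 * l_P**2), with A = 4*pi*r_s**2 and r_s = 2*G*M/c**2.

import math

G = 6.674e-11      # m^3 kg^-1 s^-2, Newtonian constant of gravitation
c = 2.998e8        # m/s, speed of light
hbar = 1.055e-34   # J s, reduced Planck constant
k_B = 1.381e-23    # J/K, Boltzmann constant
M_sun = 1.989e30   # kg, solar mass

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def bekenstein_hawking_entropy(mass_kg):
    r_s = 2 * G * mass_kg / c**2         # Schwarzschild radius
    area = 4 * math.pi * r_s**2          # horizon area
    planck_length_sq = hbar * G / c**3   # l_P squared
    return k_B * area / (4 * planck_length_sq)

print(hawking_temperature(M_sun))          # ~6e-8 K for a solar-mass hole
print(bekenstein_hawking_entropy(M_sun))   # ~1.5e54 J/K, scaling as M**2
```

Because the area grows as the square of the mass, the entropy does too, which is the proportionality noted above.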
Such higher-dimensional constructions do not directly apply to the familiar case of the Schwarzschild black holes observed in our own universe. History Dirichlet boundary conditions and D-branes had a long "pre-history" before their full significance was recognized. A series of 1975–76 papers by Bardeen, Bars, Hanson and Peccei dealt with an early concrete proposal of interacting particles at the ends of strings (quarks interacting with QCD flux tubes), with dynamical boundary conditions for string endpoints where the Dirichlet conditions were dynamical rather than static. Mixed Dirichlet/Neumann boundary conditions were first considered by Warren Siegel in 1976 as a means of lowering the critical dimension of open string theory from 26 or 10 to 4 (Siegel also cites unpublished work by Halpern, and a 1974 paper by Chodos and Thorn, but a reading of the latter paper shows that it is actually concerned with linear dilaton backgrounds, not Dirichlet boundary conditions). This paper, though prescient, was little-noted in its time (a 1985 parody by Siegel, "The Super-g String", contains an almost dead-on description of braneworlds). Dirichlet conditions for all coordinates including Euclidean time (defining what are now known as D-instantons) were introduced by Michael Green in 1977 as a means of introducing point-like structure into string theory, in an attempt to construct a string theory of the strong interaction. String compactifications studied by Harvey and Minahan, Ishibashi and Onogi, and Pradisi and Sagnotti in 1987–1989 also employed Dirichlet boundary conditions. In 1989, Dai, Leigh, and Polchinski, and independently Hořava, discovered that T-duality interchanges the usual Neumann boundary conditions with Dirichlet boundary conditions. This result implies that such boundary conditions must necessarily appear in regions of the moduli space of any open string theory. The Dai et al. paper also notes that the locus of the Dirichlet boundary conditions is dynamical, and coins the term Dirichlet-brane (D-brane) for the resulting object (this paper also coins the term orientifold for another object that arises under string T-duality). A 1989 paper by Leigh showed that D-brane dynamics are governed by the Dirac–Born–Infeld action. D-instantons were extensively studied by Green in the early 1990s, and were shown by Polchinski in 1994 to produce the nonperturbative string effects anticipated by Shenker. In 1995, Polchinski showed that D-branes are the sources of the electric and magnetic Ramond–Ramond fields that are required by string duality, leading to rapid progress in the nonperturbative understanding of string theory.
Physical sciences
Particle physics: General
Physics
309317
https://en.wikipedia.org/wiki/Caper
Caper
Capparis spinosa, the caper bush, also called Flinders rose, is a perennial plant that bears rounded, fleshy leaves and large white to pinkish-white flowers. The taxonomic status of the species is controversial and unsettled. Species within the genus Capparis are highly variable, and interspecific hybrids have been common throughout the evolutionary history of the genus. As a result, some authors have considered C. spinosa to be composed of multiple distinct species, others that the taxon is a single species with multiple varieties or subspecies, or that the taxon C. spinosa is a hybrid between C. orientalis and C. sicula. Capparis spinosa is native to almost all the circum-Mediterranean countries, and is included in the flora of most of them, but whether it is indigenous to this region is uncertain. The family Capparaceae could have originated in the tropics and later spread to the Mediterranean basin. The plant is best known for the edible flower buds (capers), used as a seasoning or garnish, and the fruit (caper berries), both of which are usually consumed salted or pickled. Other species of Capparis are also picked along with C. spinosa for their buds or fruits. Other parts of Capparis plants are used in the manufacture of medicines and cosmetics. Description The shrubby plant is many-branched, with alternate leaves, thick and shiny, round to ovate. The flowers are complete, sweetly fragrant, and showy, with four sepals and four white to pinkish-white petals, many long violet-coloured stamens, and a single stigma usually rising well above the stamens. Accepted infraspecifics Eleven subspecies and variants are accepted, according to Plants of the World Online: Capparis spinosa var. aegyptia (Lam.) Boiss. Capparis spinosa var. atlantica (Inocencio, D.Rivera, Obón & Alcaraz) Fici Capparis spinosa var. canescens Coss. Capparis spinosa subsp. cordifolia (Lam.) Fici Capparis spinosa var. herbacea (Willd.) Fici Capparis spinosa var. mucronifolia (Boiss.) Hedge & Lamond ex R.R.Stewart Capparis spinosa var. myrtifolia (Inocencio, D.Rivera, Obón & Alcaraz) Fici Capparis spinosa var. ovata (Desf.) Sm. Capparis spinosa subsp. parviflora (Boiss.) Ahmadi, H.Saeidi & Mirtadz. Capparis spinosa subsp. rupestris (Sm.) Nyman Capparis spinosa subsp. spinosa Capparis nummularia was formerly considered a subspecies of Capparis spinosa. Distribution and habitat Capparis spinosa ranges around the Mediterranean Basin, Arabian Peninsula, and portions of Western and Central Asia. In southern Europe, it is found in southern Portugal, southern and eastern Spain (including the Balearic Islands), Mediterranean France including Corsica, Italy including Sicily and Sardinia, Croatia's Dalmatian islands, Albania, Greece and the Greek Islands, western and southern Turkey, on Cyprus, and on the Crimean Peninsula in Ukraine. In Spain, it ranges from sea level up to in elevation. In northern Africa, it is found throughout the north and the Atlas Mountains of Morocco, where it occurs from sea level up to in elevation. It is also found in northern Algeria (Kabylie, coastal Algeria, Bouzaréa, and Oran) and the Hoggar Mountains of the Algerian Sahara, in Tunisia north of the Sahara, and Cyrenaica in Libya. In western Asia, it is found along the eastern Mediterranean in Lebanon, Israel, Syria, and western Jordan, and in the southern Sinai Peninsula of Egypt. It is also found south of the Caucasus in Armenia, Azerbaijan, Georgia, and northeastern Turkey. 
On the Arabian Peninsula it occurs in Oman, Yemen including Socotra, and Asir province of Saudi Arabia. In central Asia, it inhabits the mountains of central Afghanistan, the lower Karakoram range in northern Pakistan and Ladakh, and Tajikistan, Kyrgyzstan, and eastern Uzbekistan. Environmental requirements The caper bush requires a semiarid or arid climate. The caper bush has developed a series of mechanisms that reduce the impact of high radiation levels, high daily temperature, and insufficient soil water during its growing period. In response to sudden increases in humidity, the bush forms wart-like pockmarks across the leaf surface. It quickly adjusts to the new conditions and produces unaffected leaves. Agriculture Capers can be grown easily from fresh seeds gathered from ripe fruit and planted into a well-drained seed-raising mix. Seedlings appear in two to four weeks. Old, stored seeds enter a state of dormancy and require cold stratification to germinate. The viable embryos germinate within three to four days after partial removal of the lignified seed coats. The seed coats and the mucilage surrounding the seeds may be ecological adaptations to avoid water loss and conserve seed viability during the dry season. Orchard establishment Mean annual temperatures in areas under cultivation are over . A rainy spring and a hot, dry summer are considered advantageous. This drought-tolerant perennial plant is used for landscaping and reducing erosion along highways, steep rocky slopes, dunes or fragile semiarid ecosystems. Harvest Caper buds are usually picked in the morning. Because the youngest, smallest buds fetch the highest prices, daily picking is typical. Capers may be harvested from wild plants, in which case it is necessary to know that the plant is not one of the few poisonous Capparis species that look similar. The plant normally has curved thorns that may scratch the people who harvest the buds, although a few spineless varieties have been developed. Uses Nutrition Canned, pickled capers are 84% water, 5% carbohydrates, 2% protein, and 1% fat. Preserved capers are particularly high in sodium due to the amount of salt added to the brine. In a typical serving of 28 grams (one ounce), capers supply 6 kcal and 35% of the Daily Value (DV) for sodium, with no other nutrients in significant content. In a 100-gram amount, the sodium content is 2960 mg or 197% DV, with vitamin K (23% DV), iron (13% DV), and riboflavin (12% DV) also having appreciable levels. Culinary The salted and pickled caper bud (simply called a "caper") is used as an ingredient, seasoning, or garnish. Capers are a common ingredient in Mediterranean cuisine, especially Cypriot, Italian, Aeolian Greek, and Maltese food. The immature fruit of the caper shrub are prepared similarly and marketed as "caper berries". Fully mature fruit are not preferred, as they contain many hard seeds. The buds, when ready to pick, are a dark olive green and range in size from under to more than . They are picked, then pickled in salt or a salt and vinegar solution, and drained. Intense flavour, sometimes described as being similar to black pepper or mustard, is developed as glucocapparin, a glycoside organosulfur molecule, is released from each caper bud. This enzymatic reaction leads to the formation of rutin, often seen as crystallized white spots on the surfaces of individual caper buds. Capers are a distinctive ingredient in Italian cuisine, especially in Sicilian, Aeolian and southern Italian cooking. 
They are commonly used in salads, pasta salads, meat dishes, and pasta sauces. Examples of uses in Italian cuisine are piccata dishes, vitello tonnato and spaghetti alla puttanesca. Capers are sometimes an ingredient in tartar sauce. They are often served with cold smoked salmon or cured salmon dishes, especially lox and cream cheese. Capers and caper berries are sometimes substituted for olives to garnish a martini. Capers are categorized and sold by their size, defined as follows, with the smallest sizes being the most desirable: non-pareil (up to 7 mm), surfines (7–8 mm), capucines (8–9 mm), capotes (9–11 mm), fines (11–13 mm), and grusas (14+ mm). If the caper bud is not picked, it flowers and produces a caper berry. The fruit can be pickled and then served as a Greek mezze. Caper leaves, which are hard to find outside of Greece or Cyprus, are used particularly in salads and fish dishes. They are pickled or boiled and preserved in jars with brine—like caper buds. Dried caper leaves are also used as a substitute for rennet in manufacturing high-quality cheese. Polyphenols Canned capers contain polyphenols, including the flavonoids quercetin (173 mg per 100 g) and kaempferol (131 mg per 100 g), as well as anthocyanins. Other uses Capers are sometimes used in cosmetics. History Archaeobotanical evidence of capers has been found in the Mediterranean region and Mesopotamia as early as the upper Paleolithic period. The caper was used in ancient Greece as a carminative. It is represented in archaeological levels in the form of carbonised seeds and rarely as flower buds and fruits from archaic and classical antiquity contexts. Athenaeus in Deipnosophistae pays a lot of attention to the caper, as do Pliny (NH XIX, XLVIII.163) and Theophrastus. Etymologically, the caper and its relatives in several European languages can be traced back to Classical Latin capparis, "caper", in turn, borrowed from the Greek κάππαρις, kápparis, whose origin (as with that of the plant) is unknown but is probably Asian. Another theory links kápparis to the name of the island of Cyprus (Κύπρος, Kýpros), where capers grow abundantly. In Biblical times, the caper berry was supposed to have aphrodisiac properties; the Hebrew word aviyyonah (אֲבִיּוֹנָה) for caperberry is closely linked to the Hebrew root אבה (avah), meaning "desire". The berries (abiyyonot) were eaten, as appears from their liability to tithes and the restrictions of the 'Orlah. They are carefully distinguished in the Mishnah and the Talmud from the caper leaves, alin, shoots, temarot, and the caper buds, capperisin (note the similarity "caper"isin to "caper"); all of which were eaten as seen from the blessing requirement, and declared to be the fruit of the ẓelaf or caper plant. The "capperisin" mentioned in the Talmud are actually referring to a shell that protected the "abiyyonot" as it grew. Talmud Bavli discusses the eating of caper sepals versus caper berries, both in Israel and in Syria. Capers are mentioned as a spice in the Roman cookbook Apicius.
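The commercial size grades listed above translate directly into a simple lookup by bud diameter. The sketch below is illustrative only; the ungraded 13–14 mm gap in the quoted ranges is folded into the largest grade here.

```python
# Illustrative grading helper for commercial caper sizes by bud diameter (mm),
# following the ranges quoted above: the smallest grades are the most sought-after.

def caper_grade(diameter_mm):
    if diameter_mm <= 7:
        return "non-pareil"
    elif diameter_mm <= 8:
        return "surfines"
    elif diameter_mm <= 9:
        return "capucines"
    elif diameter_mm <= 11:
        return "capotes"
    elif diameter_mm <= 13:
        return "fines"
    else:
        return "grusas"

print(caper_grade(6.5))   # non-pareil, the smallest and most desirable grade
print(caper_grade(12.0))  # fines
```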
Biology and health sciences
Herbs and spices
Plants
309401
https://en.wikipedia.org/wiki/Macadamia
Macadamia
Macadamia is a genus of four species of trees in the flowering plant family Proteaceae. They are indigenous to Australia, native to northeastern New South Wales and central and southeastern Queensland specifically. Two species of the genus are commercially important for their fruit, the macadamia nut (or simply macadamia). Global production in 2015 was . Other names include Queensland nut, bush nut, maroochi nut, bauple nut and, in the US, they are also known as Hawaii nut. It was an important source of bushfood for the Aboriginal peoples. The nut was first commercially produced on a wide scale in Hawaii, where Australian seeds were introduced in the 1880s, and for more than a century they were the world's largest producer. South Africa has been the world's largest producer of the macadamia since the 2010s. The macadamia is the only widely grown food plant that is native to Australia. Description Macadamia is an evergreen genus that grows tall. The leaves are arranged in whorls of three to six, lanceolate to obovate or elliptic in shape, long and broad, with an entire or spiny-serrated margin. The flowers are produced in a long, slender, and simple raceme long, the individual flowers long, white to pink or purple, with four tepals. The fruit is a hard, woody, globose follicle with a pointed apex containing one or two seeds. The nutshell ("coat") is particularly tough and requires around 2000 N to crack. The shell material is five times harder than hazelnut shells and has mechanical properties similar to aluminum. It has a Vickers hardness of 35. Taxonomy Species Nuts from M. jansenii and M. ternifolia contain cyanogenic glycosides. The other two species are cultivated for the commercial production of macadamia nuts for human consumption. Previously, more species with disjunct distributions were named as members of this genus Macadamia. Genetics and morphological studies published in 2008 show they have separated from the genus Macadamia, correlating less closely than thought from earlier morphological studies. The species previously named in the genus Macadamia may still be referred to overall by the descriptive, non-scientific name of macadamia. Formerly included in the genus Lasjia , formerly Macadamia until 2008 Lasjia claudiensis ; synonym, base name: Macadamia claudiensis Lasjia erecta ; synonym, base name: Macadamia erecta A tree endemic to the island of Sulawesi, Indonesia. First described by science in 1995. Lasjia grandis ; synonym, base name: Macadamia grandis Lasjia hildebrandii ; synonym, base name: Macadamia hildebrandii Another species endemic to Sulawesi. 
Lasjia whelanii ; synonyms: base name: Helicia whelanii , Macadamia whelanii Catalepidia , formerly Macadamia until 1995 Catalepidia heyana ; synonyms: base name: Helicia heyana , Macadamia heyana Virotia , formerly Macadamia until the first species renaming began in 1975 and comprehensive in 2008 Virotia angustifolia ; synonym, base name: Macadamia angustifolia Virotia francii ; synonym, base name: Roupala francii Virotia leptophylla (1975 type species); synonym, base name: Kermadecia leptophylla Virotia neurophylla ; synonyms: base name: Kermadecia neurophylla , Macadamia neurophylla Virotia rousselii ; synonym, base name: Roupala rousselii Virotia vieillardi ; synonym, base name: Roupala vieillardii Etymology The German-Australian botanist Ferdinand von Mueller gave the genus the name Macadamia in 1857 in honour of the Scottish-Australian chemist, medical teacher, and politician John Macadam, who was the honorary Secretary of the Philosophical Institute of Victoria beginning in 1857. Cultivation The macadamia tree is usually propagated by grafting and does not begin to produce commercial quantities of seeds until it is 7–10 years old, but once established, it may continue bearing for over 100 years. Macadamias prefer fertile, well-drained soils, a rainfall of , and temperatures not falling below (although once established, they can withstand light frosts), with an optimum temperature of . The roots are shallow, and trees can be blown down in storms; like most Proteaceae, they are also susceptible to Phytophthora root disease. As of 2019, the macadamia nut is the most expensive nut in the world, which is attributed to the slow harvesting process. Cultivars Beaumont A Macadamia integrifolia / M. tetraphylla hybrid commercial variety is widely planted in Australia and New Zealand; Dr. J. H. Beaumont discovered it. It is high in oil but is not sweet. New leaves are reddish, and flowers are bright pink, borne on long racemes. It is one of the quickest varieties to come into bearing once planted in the garden, usually carrying a useful crop by the fourth year and improving from then on. It crops prodigiously when well pollinated. The impressive, grape-like clusters are sometimes so heavy they break the branchlets to which they are attached. Commercial orchards have reached per tree by eight years old. On the downside, the macadamias do not drop from the tree when ripe, and the leaves are a bit prickly when one reaches into the tree's interior during harvest. Its shell is easier to open than that of most commercial varieties. Maroochy A pure M. tetraphylla variety from Australia, this strain is cultivated for its productive crop yield, flavour, and suitability for pollinating 'Beaumont.' Nelmac II A South African M. integrifolia / M. tetraphylla hybrid cultivar, it has a sweet seed, which means it has to be cooked carefully so that the sugars do not caramelise. The sweet seed is usually not fully processed, as it generally does not taste as good, but many people enjoy eating it uncooked. It has an open micropyle (hole in the shell), which may let in fungal spores. The crack-out percentage (ratio of nut meat to the whole nut by weight) is high. Ten-year-old trees average per tree. It is a popular variety because of its pollination of 'Beaumont,' and the yields are almost comparable. Renown A M. integrifolia / M. tetraphylla hybrid, this is a rather spreading tree. On the plus side, it is high yielding commercially; from a 9-year-old tree has been recorded, and the nuts drop to the ground. 
However, they are thick-shelled, with not much flavour. Production In 2024, South Africa was the leading producer of macadamia nuts, with 77,000 tonnes, up from 54,000 tonnes out of global production of 211,000 tonnes in 2018. Macadamia is commercially produced in many countries of Southeast Asia, South America, Australia, and North America having Mediterranean, temperate or tropical climates. History The first commercial orchard of macadamia trees was planted in the early 1880s by Rous Mill, southeast of Lismore, New South Wales, consisting of M. tetraphylla. Besides the development of a small boutique industry in Australia during the late 19th and early 20th centuries, macadamia was extensively planted as a commercial crop in Hawaii from the 1920s onward. Macadamia seeds were first imported into Hawaii in 1882 by William H. Purvis, who planted seeds that year at Kapulena. The Hawaiian-produced macadamia established the well-known seed internationally, and in 2017, Hawaii produced over 22,000 tonnes. In 2019, researchers collected samples from hundreds of trees in Queensland and compared their genetic profiles to samples from Hawaiian orchards. They determined that essentially all the Hawaiian trees must have descended from a small population of Australian trees from Gympie, possibly just a single tree. This lack of genetic diversity in the commercial crop puts it at risk of succumbing to pathogens (as has happened in the past to banana cultivars). Growers may seek to diversify the cultivated population by hybridizing with wild specimens. Shelling Macadamias are the world's hardest edible nut to crack. Since ordinary nutcrackers apply insufficient force, various types of specialist macadamia nut crackers are available, many of which apply force to the micropyle (white dot) to fracture the shell. For commercial scale deshelling, rotating steel rollers are used. In South Africa, the average crack-out rate, meaning ratio of usable nut to discarded shell, is 27.6% nut to 72.4% waste. Toxicity Nuts from M. jansenii and M. ternifolia contain cyanogenic glycosides. Allergen Macadamia allergy is a type of food allergy to macadamia nuts which is relatively rare, affecting less than 5% of people with tree nut allergy in the United States. Macadamia allergy can cause mild to severe allergic reactions, such as oral allergy syndrome, urticaria, angioedema, vomiting, abdominal pain, asthma, and anaphylaxis. Macadamia allergy can also cross-react with other tree nuts or foods that have similar allergenic proteins, such as coconut, walnut, hazelnut, and cashew. The diagnosis and management of macadamia allergy involves avoiding macadamia nuts and their derivatives, reading food labels carefully, carrying an epinephrine auto-injector in case of severe reactions, and consulting a doctor for further testing and advice. Toxicity in dogs and cats Macadamias are toxic to dogs. Ingestion may result in macadamia toxicity marked by weakness and hind limb paralysis with the inability to stand, occurring within 12 hours of ingestion. It is not known what makes macadamia nuts toxic in dogs. Depending on the quantity ingested and the size of the dog, symptoms may also include muscle tremors, joint pain, and severe abdominal pain. In high doses of toxin, opiate medication may be required for symptom relief until the toxic effects diminish, with full recovery usually within 24 to 48 hours. Macadamias are also toxic to cats, causing tremor, paralysis, stiffness in joints and high fever. 
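The crack-out figure quoted above implies a simple yield calculation: at a 27.6% crack-out rate, an in-shell harvest yields a little over a quarter of its mass as usable kernel. The harvest mass in the example is hypothetical.

```python
# Small yield calculation using the quoted South African crack-out rate.

def kernel_yield(in_shell_kg, crack_out_rate=0.276):
    """Usable kernel mass (kg) from an in-shell harvest at the given crack-out rate."""
    return in_shell_kg * crack_out_rate

harvest = 1000.0  # kg of in-shell nuts, hypothetical
print(f"{kernel_yield(harvest):.0f} kg kernel, {harvest - kernel_yield(harvest):.0f} kg shell")
```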
Uses Nutrition Raw macadamia nuts are 1% water, 14% carbohydrates, 76% fat, and 8% protein. A 100-gram reference amount of macadamia nuts provides 740 kilocalories and are a rich source (20% or more of the Daily Value (DV)) of numerous essential nutrients, including thiamine (104% DV), vitamin B6 (21% DV), other B vitamins, manganese (195% DV), iron (28% DV), magnesium (37% DV) and phosphorus (27% DV). Compared with other common edible nuts, such as almonds and cashews, macadamias are high in total fat and relatively low in protein. They have a high amount of monounsaturated fats (59% of total content) and contain, as 17% of total fat, the monounsaturated fat, omega-7 palmitoleic acid. Other uses The trees are also grown as ornamental plants in subtropical regions for their glossy foliage and attractive flowers. The flowers produce a well-regarded honey. The wood is used decoratively for small items. Macadamia species are used as food plants by the larvae of some Lepidoptera species, including Batrachedra arenosella. Macadamia seeds are often fed to hyacinth macaws in captivity. These large parrots are one of the few animals, aside from humans, capable of cracking the shell and removing the seed. Modern history 1828 Allan Cunningham was the first European to encounter the macadamia plant in Australia. 1857–1858 German-Australian botanist Ferdinand von Mueller gave the genus the scientific name Macadamia. He named it after his friend John Macadam, a noted scientist and secretary of the Philosophical Institute of Australia. 1858 'Bauple nuts' were discovered in Bauple, Queensland; they are now known as macadamia nuts. Walter Hill, superintendent of the Brisbane Botanic Gardens (Australia), observed a boy eating the kernel without ill effect, becoming the first nonindigenous person recorded to eat macadamia nuts. 1860s King Jacky, aboriginal elder of the Logan River clan, south of Brisbane, Queensland, was the first known macadamia entrepreneur in his tribe and he regularly collected and traded the macadamias with settlers. 1866 Tom Petrie planted macadamias at Yebri Creek (near Petrie) from nuts obtained from Aboriginals at Buderim. 1882 William H. Purvis introduced macadamia nuts to Hawaii as a windbreak for sugar cane. 1888 The first commercial orchard of macadamias was planted at Rous Mill, 12 km from Lismore, New South Wales, by Charles Staff. 1889 Joseph Maiden, an Australian botanist, wrote, "It is well worth extensive cultivation, for the nuts are always eagerly bought." 1910 The Hawaiian Agricultural Experiment Station encouraged the planting of macadamias on Hawaii's Kona District as a crop to supplement coffee production in the region. 1916 Tom Petrie begins trial macadamia plantations in Maryborough, Queensland, combining macadamias with pecans to shelter the trees. 1922 Ernest van Tassel formed the Hawaiian Macadamia Nut Co. in Hawaii. 1925 Tassel leased on Round Top in Honolulu and began Nutridge, Hawaii's first macadamia seed farm. 1931 Tassel established a macadamia-processing factory on Puhukaina Street in Kakaako, Hawaii, selling the nuts as Van's Macadamia Nuts. 1937 Winston Jones and J. H. Beaumont of the University of Hawaii's Agricultural Experiment Station reported the first successful grafting of macadamias, paving the way for mass production. 1946 A large plantation was established in Hawaii. 1953 Castle & Cooke added a new brand of macadamia nuts called "Royal Hawaiian," which was credited with popularizing the nuts in the U.S. 
1991: A fourth macadamia species, Macadamia jansenii, was described, having first been brought to the attention of plant scientists in 1983 by Ray Jansen, a sugarcane farmer and amateur botanist from South Kolan in Central Queensland.
1997: Australia surpassed the United States as the major producer of macadamias.
2012–2015: South Africa surpassed Australia as the largest producer of macadamias.
2014: The manner in which macadamia nuts were served on Korean Air Flight 86 from John F. Kennedy International Airport in New York City led to a "nut rage" incident, which gave the nuts high visibility in South Korea and was followed by a sharp increase in consumption there.
Biology and health sciences
Others
309428
https://en.wikipedia.org/wiki/RR%20Lyrae%20variable
RR Lyrae variable
RR Lyrae variables are periodic variable stars, commonly found in globular clusters. They are used as standard candles to measure galactic and extragalactic distances, assisting with the cosmic distance ladder. This class is named after the prototype and brightest example, RR Lyrae. They are pulsating horizontal-branch stars of spectral class A or F, with masses of around half the Sun's. They are thought to have shed mass during the red-giant-branch phase, and were once stars of around 0.8 solar masses. In contemporary astronomy, a period-luminosity relation makes them good standard candles for relatively nearby targets, especially within the Milky Way and Local Group. They are also frequent subjects in studies of globular clusters and of the chemistry of older stars.

Discovery and recognition
In surveys of globular clusters, these "cluster-type" variables were being rapidly identified in the mid-1890s, especially by E. C. Pickering. The first star definitely of the RR Lyrae type found outside a cluster was probably U Leporis, discovered by J. Kapteyn in 1890. The prototype star RR Lyrae was discovered prior to 1899 by Williamina Fleming, and reported by Pickering in 1900 as "indistinguishable from cluster-type variables". From 1915 to the 1930s, the RR Lyraes became increasingly accepted as a class of star distinct from the classical Cepheids, due to their shorter periods, differing locations within the galaxy, and chemical differences. RR Lyrae variables are metal-poor, Population II stars.
RR Lyraes have proven difficult to observe in external galaxies because of their intrinsic faintness. (In fact, Walter Baade's failure to find them in the Andromeda Galaxy led him to suspect that the galaxy was much farther away than predicted, to reconsider the calibration of Cepheid variables, and to propose the concept of stellar populations.) Using the Canada-France-Hawaii Telescope in the 1980s, Pritchet and van den Bergh found RR Lyraes in Andromeda's galactic halo and, more recently, in its globular clusters.

Classification
RR Lyrae stars are conventionally divided into three main types, following a classification by S. I. Bailey based on the shape of the stars' brightness curves:
RRab variables are the most common, making up 91% of all observed RR Lyrae, and display the steep rises in brightness typical of RR Lyrae.
RRc variables are less common, making up 9% of observed RR Lyrae, and have shorter periods and more sinusoidal variation.
RRd variables are rare, making up between less than 1% and 30% of RR Lyrae in a system, and are double-mode pulsators, unlike RRab and RRc.

Distribution
RR Lyrae stars were formerly called "cluster variables" because of their strong (but not exclusive) association with globular clusters; conversely, over 80% of all variables known in globular clusters are RR Lyraes. RR Lyrae stars are found at all galactic latitudes, as opposed to classical Cepheids, which are strongly associated with the galactic plane. Because of their old age, RR Lyraes are commonly used to trace certain populations in the Milky Way, including the halo and thick disk. Several times as many RR Lyraes are known as all Cepheids combined; in the 1980s, about 1,900 were known in globular clusters, and some estimates put the total in the Milky Way at about 85,000. Though binary star systems are common for typical stars, RR Lyraes are very rarely observed in binaries.

Properties
RR Lyrae stars pulse in a manner similar to Cepheid variables, but the nature and histories of these stars are thought to be rather different.
As in all variables on the Cepheid instability strip, their pulsations are driven by the κ-mechanism, in which the opacity of ionised helium varies with temperature. RR Lyraes are old, relatively low-mass, Population II stars, in common with the W Virginis and BL Herculis variables, the type II Cepheids; classical Cepheid variables, by contrast, are higher-mass Population I stars. RR Lyrae variables are much more common than Cepheids, but also much less luminous. The average absolute magnitude of an RR Lyrae star is about +0.75, only 40 to 50 times more luminous than the Sun. Their periods are shorter, typically less than one day, sometimes ranging down to seven hours. Some RRab stars, including RR Lyrae itself, exhibit the Blazhko effect, a conspicuous phase and amplitude modulation.

Period-luminosity relationships
Unlike Cepheid variables, RR Lyrae variables do not follow a strict period-luminosity relationship at visual wavelengths, although they do in the infrared K band. They are normally analysed using a period-colour relationship, for example using a Wesenheit function. In this way they can be used as standard candles for distance measurements, although there are difficulties with the effects of metallicity, faintness, and blending (a brief illustrative distance calculation is sketched after this section). Blending can affect RR Lyrae variables sampled near the cores of globular clusters, which are so dense that, in low-resolution observations, multiple unresolved stars may appear as a single target. The brightness measured for that seemingly single star (e.g., an RR Lyrae variable) is then erroneously high, because the unresolved neighbours contribute to it, and the computed distance is correspondingly in error. Consequently, some researchers have argued that the blending effect can introduce a systematic uncertainty into the cosmic distance ladder, and may bias the estimated age of the Universe and the Hubble constant.

Recent developments
The Hubble Space Telescope has identified several RR Lyrae candidates in globular clusters of the Andromeda Galaxy and has measured the distance to the prototype star RR Lyrae. The Kepler space telescope provided accurate photometric coverage of a single field at regular intervals over an extended period. Thirty-seven known RR Lyrae variables lie within the Kepler field, including RR Lyrae itself, and new phenomena such as period doubling have been detected. The Gaia mission mapped 140,784 RR Lyrae stars, of which 50,220 were not previously known to be variable, and for which 54,272 interstellar absorption estimates are available.
The Next Generation Virgo Cluster Survey (NGVS) was used by Feng et al. (2024) to identify faint (~21 mag) candidate stars at galactocentric distances of ~20–300 kpc. The study employed empirical pulsation-fitting techniques, initially developed for the Sloan Digital Sky Survey (SDSS), to analyse these candidates. Follow-up photometric data from the Dark Energy Survey (DES), Pan-STARRS 1 (PS1), and the Subaru HSC strategic survey were used to validate and refine the derived pulsation parameters. In addition, mock RR Lyrae simulations were used to address biases caused by measurement uncertainties and fitting complexities. The ESI spectrograph on Keck II was also used to analyse spectra of distant Milky Way halo RR Lyrae candidates, in order to identify background quasar contaminants in the previously mentioned surveys.
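To make the standard-candle use described above concrete, the following is a minimal Python sketch of how a single distance estimate follows from the distance modulus, taking the approximate mean absolute magnitude of +0.75 quoted earlier as a fixed calibration. The apparent magnitude, extinction value, epoch, and function names are illustrative assumptions only, not values from any survey; a real analysis would instead use an infrared or Wesenheit period-luminosity relation with metallicity terms, as noted above. A simple phase-folding helper is included because determining the period and folding the light curve is the usual first step before a mean magnitude is measured.

```python
# Illustrative sketch only: constants and inputs below are assumed, not from the article's sources.

# Approximate mean V-band absolute magnitude of an RR Lyrae star (quoted above).
ABSOLUTE_MAG = 0.75

def fold_phase(times_days, period_days, epoch_days=0.0):
    """Fold observation times (days) on a known pulsation period (days),
    returning phases in [0, 1); RR Lyrae periods are typically under one day."""
    return [((t - epoch_days) / period_days) % 1.0 for t in times_days]

def distance_pc(apparent_mag, extinction_mag=0.0, absolute_mag=ABSOLUTE_MAG):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d) - 5,
    after subtracting interstellar extinction (all quantities in magnitudes)."""
    modulus = (apparent_mag - extinction_mag) - absolute_mag
    return 10.0 ** ((modulus + 5.0) / 5.0)

# Hypothetical example: mean apparent magnitude 15.0 with 0.3 mag of extinction.
d = distance_pc(15.0, extinction_mag=0.3)
print(f"Estimated distance: {d:.0f} pc (about {d / 1000.0:.1f} kpc)")
```

With these assumed numbers the sketch returns roughly 6 kpc, illustrating why even modest errors in the measured mean brightness (for example from blending) propagate directly into the inferred distance.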
Physical sciences
Stellar astronomy
Astronomy