Dataset fields: id, url, title, text, topic (4 classes), section, sublist (9 classes)
1063456
https://en.wikipedia.org/wiki/X-ray%20spectroscopy
X-ray spectroscopy
X-ray spectroscopy is a general term for several spectroscopic techniques for characterization of materials by using X-ray radiation.

Characteristic X-ray spectroscopy
When an electron from the inner shell of an atom is excited by the energy of a photon, it moves to a higher energy level. When it returns to the lower energy level, the energy it previously gained by excitation is emitted as a photon of one of the wavelengths uniquely characteristic of the element. Analysis of the X-ray emission spectrum produces qualitative results about the elemental composition of the specimen. Comparison of the specimen's spectrum with the spectra of samples of known composition produces quantitative results (after some mathematical corrections for absorption, fluorescence and atomic number). Atoms can be excited by a high-energy beam of charged particles such as electrons (in an electron microscope, for example), protons (see PIXE) or a beam of X-rays (see X-ray fluorescence, XRF, or, more recently, X-ray transmission, XRT). These methods enable elements from the entire periodic table to be analysed, with the exception of H, He and Li. In electron microscopy an electron beam excites X-rays; there are two main techniques for analysis of spectra of characteristic X-ray radiation: energy-dispersive X-ray spectroscopy (EDS) and wavelength-dispersive X-ray spectroscopy (WDS). In X-ray transmission (XRT), the equivalent atomic composition (Zeff) is captured based on photoelectric and Compton effects.

Energy-dispersive X-ray spectroscopy
In an energy-dispersive X-ray spectrometer, a semiconductor detector measures the energy of incoming photons. To maintain detector integrity and resolution, it should be cooled with liquid nitrogen or by Peltier cooling. EDS is widely employed in electron microscopes (where imaging rather than spectroscopy is a main task) and in cheaper and/or portable XRF units.

Wavelength-dispersive X-ray spectroscopy
In a wavelength-dispersive X-ray spectrometer, a single crystal diffracts the photons according to Bragg's law, and they are then collected by a detector. By moving the diffraction crystal and detector relative to each other, a wide region of the spectrum can be observed. To observe a large spectral range, three or four different single crystals may be needed. In contrast to EDS, WDS is a method of sequential spectrum acquisition. While WDS is slower than EDS and more sensitive to the positioning of the sample in the spectrometer, it has superior spectral resolution and sensitivity. WDS is widely used in microprobes (where X-ray microanalysis is the main task) and in XRF; it is also widely used in the field of X-ray diffraction to calculate various data, such as the interplanar spacing and the wavelength of the incident X-rays, using Bragg's law.

X-ray emission spectroscopy
The father-and-son scientific team of William Lawrence Bragg and William Henry Bragg, who were 1915 Nobel Prize winners, were the original pioneers in developing X-ray emission spectroscopy. An example of a spectrometer developed by William Henry Bragg, which was used by both father and son to investigate the structure of crystals, can be seen at the Science Museum, London. Jointly they measured the X-ray wavelengths of many elements to high precision, using high-energy electrons as the excitation source: a cathode-ray tube or an X-ray tube directed the electrons onto targets of numerous elements, and the emitted X-rays were analysed with a crystal. They also painstakingly produced numerous diamond-ruled glass diffraction gratings for their spectrometers.
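Both the Braggs' instruments and modern WDS systems rest on Bragg's law, nλ = 2d sin θ. A minimal sketch of the two conversions involved (Python; the Cu Kα wavelength and LiF analyzer spacing are standard textbook values, used here purely for illustration):

```python
import math

def bragg_angle(wavelength_A, d_spacing_A, order=1):
    """Diffraction angle theta (degrees) from n*lambda = 2*d*sin(theta)."""
    s = order * wavelength_A / (2.0 * d_spacing_A)
    if not 0.0 < s <= 1.0:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

def bragg_wavelength(theta_deg, d_spacing_A, order=1):
    """Wavelength (angstroms) diffracted at angle theta by planes with spacing d."""
    return 2.0 * d_spacing_A * math.sin(math.radians(theta_deg)) / order

# Illustrative values: Cu K-alpha (~1.5406 A) on a LiF(200) analyzer (d ~ 2.014 A)
theta = bragg_angle(1.5406, 2.014)
print(f"Cu K-alpha on LiF: theta = {theta:.2f} deg (detector at 2-theta = {2 * theta:.2f} deg)")
print(f"back-computed wavelength: {bragg_wavelength(theta, 2.014):.4f} A")
```

The same relation, solved for d instead of λ, is what the text refers to when WDS data are used to calculate interplanar spacings.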
The law of diffraction of a crystal is called Bragg's law in their honor. Intense and wavelength-tunable X-rays are now typically generated with synchrotrons. In a material, the X-rays may suffer an energy loss compared to the incoming beam. This energy loss of the re-emerging beam reflects an internal excitation of the atomic system, an X-ray analogue to the well-known Raman spectroscopy that is widely used in the optical region. In the X-ray region there is sufficient energy to probe changes in the electronic state (transitions between orbitals; this is in contrast with the optical region, where the energy emitted or absorbed is often due to changes in the state of the rotational or vibrational degrees of freedom of the system's atoms and groups of atoms). For instance, in the ultra-soft X-ray region (below about 1 keV), crystal-field excitations give rise to the energy loss. The photon-in-photon-out process may be thought of as a scattering event. When the X-ray energy corresponds to the binding energy of a core-level electron, this scattering process is resonantly enhanced by many orders of magnitude. This type of X-ray emission spectroscopy is often referred to as resonant inelastic X-ray scattering (RIXS). Due to the wide separation of orbital energies of the core levels, it is possible to select a certain atom of interest. The small spatial extent of core-level orbitals forces the RIXS process to reflect the electronic structure in close vicinity of the chosen atom. Thus, RIXS experiments give valuable information about the local electronic structure of complex systems, and theoretical calculations are relatively simple to perform.

Instrumentation
There exist several efficient designs for analyzing an X-ray emission spectrum in the ultra-soft X-ray region. The figure of merit for such instruments is the spectral throughput, i.e. the product of detected intensity and spectral resolving power. Usually, it is possible to change these parameters within a certain range while keeping their product constant.

Grating spectrometers
Usually X-ray diffraction in spectrometers is achieved on crystals, but in grating spectrometers the X-rays emerging from a sample must pass a source-defining slit; optical elements (mirrors and/or gratings) then disperse them by diffraction according to their wavelength and, finally, a detector is placed at their focal points.

Spherical grating mounts
Henry Augustus Rowland (1848–1901) devised an instrument that allowed the use of a single optical element that combines diffraction and focusing: a spherical grating. Reflectivity of X-rays is low, regardless of the material used, and therefore grazing incidence upon the grating is necessary. X-ray beams impinging on a smooth surface at a glancing angle of incidence of a few degrees undergo total external reflection, which is taken advantage of to enhance the instrumental efficiency substantially. Denote by R the radius of a spherical grating. Imagine a circle with half the radius R tangent to the center of the grating surface. This small circle is called the Rowland circle. If the entrance slit is anywhere on this circle, then a beam passing the slit and striking the grating will be split into a specularly reflected beam and beams of all diffraction orders that come into focus at certain points on the same circle.

Plane grating mounts
Similar to optical spectrometers, a plane grating spectrometer first needs optics that turn the divergent rays emitted by the X-ray source into a parallel beam.
This may be achieved by using a parabolic mirror. The parallel rays emerging from this mirror strike a plane grating (with constant groove distance) at the same angle and are diffracted according to their wavelength. A second parabolic mirror then collects the diffracted rays at a certain angle and creates an image on a detector. A spectrum within a certain wavelength range can be recorded simultaneously by using a two-dimensional position-sensitive detector such as a microchannel photomultiplier plate or an X-ray-sensitive CCD chip (film plates are also possible to use).

Interferometers
Instead of using the concept of multiple-beam interference that gratings produce, two rays may simply be made to interfere. By recording the intensity of two such rays combined co-linearly at some fixed point while changing their relative phase, one obtains an intensity spectrum as a function of path-length difference. One can show that this is equivalent to a Fourier-transformed spectrum as a function of frequency. The highest recordable frequency of such a spectrum depends on the minimum step size chosen in the scan, and the frequency resolution (i.e. how well a certain wave can be defined in terms of its frequency) depends on the maximum path-length difference achieved. The latter feature allows a much more compact design for achieving high resolution than for a grating spectrometer, because X-ray wavelengths are small compared to attainable path-length differences. (A numerical sketch of this Fourier relationship is given below.)

Early history of X-ray spectroscopy in the U.S.
Philips Gloeilampen Fabrieken, headquartered in Eindhoven in the Netherlands, got its start as a manufacturer of light bulbs, but quickly evolved into what is now one of the leading manufacturers of electrical apparatus, electronics, and related products, including X-ray equipment. It also has had one of the world's largest R&D labs. In 1940, the Netherlands was overrun by Hitler’s Germany. The company was able to transfer a substantial sum of money to a company that it set up as an R&D laboratory in an estate in Irvington on the Hudson in NY. As an extension of their work on light bulbs, the Dutch company had developed a line of X-ray tubes for medical applications that were powered by transformers. These X-ray tubes could also be used in scientific X-ray instrumentation, but there was very little commercial demand for the latter. As a result, management decided to try to develop this market and set up development groups in their research labs in both Holland and the United States. They hired Dr. Ira Duffendack, a professor at the University of Michigan and a world expert on infrared research, to head the lab and to hire a staff. In 1951 he hired Dr. David Miller as Assistant Director of Research. Dr. Miller had done research on X-ray instrumentation at Washington University in St. Louis. Dr. Duffendack also hired Dr. Bill Parrish, a well-known researcher in X-ray diffraction, to head up the section of the lab on X-ray instrument development. X-ray diffraction units were widely used in academic research departments to do crystal analysis. An essential component of a diffraction unit was a very accurate angle-measuring device known as a goniometer. Such units were not commercially available, so each investigator had to try to make their own. Dr. Parrish decided this would be a good device to use to generate an instrument market, so his group designed and learned how to manufacture a goniometer.
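Returning to the interferometer principle described above: the recorded intensity as a function of path-length difference is the Fourier transform of the spectrum, with the scan step setting the highest recoverable frequency and the maximum path difference setting the resolution. A minimal numerical sketch (Python with NumPy; the two spectral lines and all scan parameters are invented for illustration):

```python
import numpy as np

k1, k2 = 40.0, 55.0            # wavenumbers of two made-up spectral lines (arbitrary units)
step = 0.001                   # scan step in path-length difference
n_steps = 4000                 # number of sampled path differences
x = np.arange(n_steps) * step  # path-length differences covered by the scan

# Interferogram: each spectral component contributes (1 + cos(2*pi*k*x)) / 2
interferogram = 0.5 * (1 + np.cos(2 * np.pi * k1 * x)) + 0.5 * (1 + np.cos(2 * np.pi * k2 * x))

# Fourier transform of the interferogram recovers the spectrum
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(n_steps, d=step)

# Highest recoverable wavenumber = 1 / (2 * step); resolution = 1 / (n_steps * step)
for idx in sorted(np.argsort(spectrum)[-2:]):
    print(f"recovered line at wavenumber ~ {wavenumbers[idx]:.2f}")
```

Doubling the scan length (maximum path difference) halves the width of each recovered line, which is exactly the compactness advantage the text describes.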
This market developed quickly and, with the readily available tubes and power supplies, a complete diffraction unit was made available and was successfully marketed. The U.S. management did not want the laboratory to be converted to a manufacturing unit, so it decided to set up a commercial unit to further develop the X-ray instrumentation market. In 1953 Norelco Electronics was established in Mount Vernon, NY, dedicated to the sale and support of X-ray instrumentation. It included a sales staff, a manufacturing group, an engineering department and an applications lab. Dr. Miller was transferred from the lab to head up the engineering department. The sales staff sponsored three schools a year, one in Mount Vernon, one in Denver, and one in San Francisco. The week-long school curricula reviewed the basics of X-ray instrumentation and the specific applications of Norelco products. The faculty were members of the engineering department and academic consultants. The schools were well attended by academic and industrial R&D scientists. The engineering department also served as a new product development group. It added an X-ray spectrograph to the product line very quickly and contributed other related products for the next 8 years. The applications lab was an essential sales tool. When the spectrograph was introduced as a quick and accurate analytical chemistry device, it was met with widespread skepticism. All research facilities had a chemistry department, and analytical analysis was done by “wet chemistry” methods. The idea of doing this analysis by physics instrumentation was considered suspect. To overcome this bias, the salesman would ask a prospective customer for a task the customer was doing by “wet methods”. The task would be given to the applications lab, which would then demonstrate how accurately and quickly it could be done using the X-ray units. This proved to be a very strong sales tool, particularly when the results were published in the Norelco Reporter, a technical journal issued monthly by the company with wide distribution to commercial and academic institutions.

An X-ray spectrograph consists of a high-voltage power supply (50 kV or 100 kV), a broad-band X-ray tube, usually with a tungsten anode and a beryllium window, a specimen holder, an analyzing crystal, a goniometer, and an X-ray detector device. These are arranged as shown in Fig. 1. The continuous X-ray spectrum emitted from the tube irradiates the specimen and excites the characteristic spectral X-ray lines in the specimen. Each of the 92 elements emits a characteristic spectrum. Unlike the optical spectrum, the X-ray spectrum is quite simple. The strongest line, usually the Kα line but sometimes the Lα line, suffices to identify the element. The existence of a particular line betrays the existence of an element, and the intensity is proportional to the amount of the particular element in the specimen. The characteristic lines are reflected from a crystal, the analyzer, at an angle that is given by the Bragg condition. The crystal samples all the diffraction angles θ by rotation, while the detector rotates over the corresponding angle 2θ. With a sensitive detector, the X-ray photons are counted individually. By stepping the detector along the angle and leaving it in position for a known time, the number of counts at each angular position gives the line intensity. These counts may be plotted as a curve by an appropriate display unit.
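As a concrete illustration of reading such a scan, a hedged sketch in Python: given the analyzer crystal's 2d spacing and tabulated Kα wavelengths, the Bragg condition nλ = 2d sin θ predicts where first- and higher-order peaks fall on the 2θ axis. The three wavelengths are approximate textbook values, and the quartz analyzer (2d ≈ 6.69 Å) is an illustrative assumption, chosen because it places Mo Kα near the 12-degree peak in the scan described next:

```python
import math

TWO_D = 6.69  # 2d of a quartz analyzer crystal in angstroms (illustrative choice)

# Small illustrative table of K-alpha wavelengths in angstroms (approximate values)
K_ALPHA = {"Mo": 0.711, "Cu": 1.542, "Fe": 1.937}

def two_theta_positions(wavelength_A, max_order=3):
    """2-theta peak positions (degrees) for successive diffraction orders, n*lambda = 2d*sin(theta)."""
    positions = []
    for n in range(1, max_order + 1):
        s = n * wavelength_A / TWO_D
        if s > 1.0:
            break  # this order is not reachable on this crystal
        positions.append(2.0 * math.degrees(math.asin(s)))
    return positions

for element, wl in K_ALPHA.items():
    angles = ", ".join(f"{a:.1f}" for a in two_theta_positions(wl))
    print(f"{element} K-alpha ({wl} A): peaks at 2-theta = {angles} deg")
```

Matching measured peak positions against such predicted angles, and comparing counting rates against standards, is the identification-plus-quantification step the text describes.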
The characteristic X-rays come out at specific angles, and since the angular position for every X-ray spectral line is known and recorded, it is easy to find the sample's composition. A chart for a scan of a molybdenum specimen is shown in Fig. 2. The tall peak on the left side is the characteristic Kα line, at a 2θ of 12 degrees. Second- and third-order lines also appear. Since the Kα line is often the only line of interest in many industrial applications, the final device in the Norelco X-ray spectrographic instrument line was the Autrometer. This device could be programmed to automatically read at any desired 2θ angle for any desired time interval.

Soon after the Autrometer was introduced, Philips decided to stop marketing X-ray instruments developed in both the U.S. and Europe and settled on offering only the Eindhoven line of instruments. In 1961, during the development of the Autrometer, Norelco was given a sub-contract from the Jet Propulsion Lab. The Lab was working on the instrument package for the Surveyor spacecraft. The composition of the Moon’s surface was of major interest, and the use of an X-ray detection instrument was viewed as a possible solution. Working with a power limit of 30 watts was very challenging; a device was delivered, but it wasn’t used. Later NASA developments did lead to an X-ray spectrographic unit that did make the desired moon soil analysis. The Norelco efforts faded, but the use of X-ray spectroscopy in units known as XRF instruments continued to grow. With a boost from NASA, units were finally reduced to handheld size and are seeing widespread use. Units are available from Bruker, Thermo Scientific, Elvatech Ltd. and SPECTRA.

Other types of X-ray spectroscopy
X-ray absorption spectroscopy
X-ray magnetic circular dichroism
Physical sciences
Spectroscopy
Chemistry
1064008
https://en.wikipedia.org/wiki/Alvarezsauridae
Alvarezsauridae
Alvarezsauridae is a family of small, long-legged dinosaurs. Although originally thought to represent the earliest known flightless birds, they are now thought to be an early-diverging branch of maniraptoran theropods. Alvarezsaurids were highly specialized. They had tiny but stout forelimbs, with compact, bird-like hands. Their skeletons suggest that they had massive breast and arm muscles, possibly adapted for digging or tearing. They had long, tube-shaped snouts filled with tiny teeth. They have been interpreted as myrmecophagous, adapted to prey on colonial insects such as termites, with the short arms acting as effective digging instruments to break into nests. Alvarezsaurus, the type genus of the family, was named for the historian Gregorio Álvarez.

History of study
Bonaparte (1991) described the first alvarezsaurid, Alvarezsaurus calvoi, from an incomplete skeleton found in Patagonia, Argentina. Bonaparte also named a family, Alvarezsauridae, to contain it. He argued that Alvarezsaurus might be most closely related to the ornithomimosaurs. In 1993, Perle et al. described the next alvarezsaur to be discovered, naming it Mononychus olecranus (meaning "one claw"). A month later they changed the genus name to Mononykus, because the earlier spelling was already the genus name of an extant beetle. Perle et al. mistakenly described Mononykus as a member of Avialae, one more advanced than Archaeopteryx. They argued that the family Alvarezsauridae was actually a group of Mesozoic flightless birds, on the basis of several features that were supposedly unique to birds. In 1996, Novas described another member of the group, Patagonykus puertai. Karhu and Rautian (1996) described a Mongolian member of the family, Parvicursor remotus. Chiappe et al. (1998) described another Mongolian member, Shuvuuia deserti, and found it to be a bird, as in Perle et al.'s analysis. These mistaken assignments of alvarezsaurids to birds were caused primarily by features that are strikingly, or even uniquely, avian. The sternum, for example, is elongated and deeply keeled for an enlarged pectoralis muscle, as it is in neognathous birds and volant ratites. One bone in the skull of Shuvuuia appeared to be an ectethmoid fused to a prefrontal. The ectethmoid is an ossification known only in Neornithes. Other birdlike characters included the palatine, foramen magnum, cervical and caudal vertebrae, and many others. Several researchers disagreed with Perle et al. (1993) and Chiappe et al. (1998): Feduccia (1994), Ostrom (1994), Wellnhofer (1994), Kurochkin (1995), Zhou (1995), and Sereno (1997) considered it unlikely that alvarezsaurids were members of Avialae. Martin (1997) performed a cladistic analysis, but Sereno criticized it strongly, finding it flawed by incorrect codings, use of only select data, and results that did not support his own conclusions. Sereno (1999) performed a new analysis, revising the anatomical interpretations and clarifying the characters. He found that alvarezsaurids were more parsimoniously related to the Ornithomimosauria. As the more primitive members of the Alvarezsauridae became better characterized, the monophyly of the clade was strongly supported, but the more primitive members lacked the most birdlike traits. Some of these traits had also been misinterpreted.
The remaining similarities between birds and alvarezsaurs, like the keeled sterna, are another case of homoplasy: the derived alvarezsaurids developed birdlike characters through convergent evolution, rather than inheriting them from a common ancestor with birds.

Description
Alvarezsaurids ranged from to in length, although some possible members may have been larger, including the European Heptasteornis, which may have reached long. Fossils attributed to alvarezsaurids have also been found in North and South America and Asia, and range in age from about 86 to 66 million years ago.

Feathers
At least one alvarezsaurid specimen, of the species Shuvuuia deserti, preserves down-like, feathery integumental structures covering the fossil. Schweitzer et al. (1999) subjected these filaments to microscopic, morphological, mass-spectrometric, and immunohistochemical studies and found that they consisted of beta-keratin, which is the primary protein in feathers.

Lifestyle
The lifestyle of alvarezsaurids has been debated since the nature of these dinosaurs was established. It has been suggested by numerous palaeontologists that they used their claws to break into ant and termite colonies, though the arm anatomy of an alvarezsaurid would require the animal to lie on its chest against a termite nest. It is also possible that the alvarezsaurids filled some ecological niche that has not yet been considered. Studies of the tails of various alvarezsaur genera also suggest they had a remarkable ability to change their rotational inertia, which, combined with their forelimbs, suggests their ecological niches were similar to those of aardvarks, pangolins, and anteaters. Additionally, alvarezsaurids, with their long legs, appear to have been built for speed; what implications this has for their possible lifestyle is unknown. The discovery of Qiupanykus in association with oviraptorid eggs indicates that the advanced alvarezsaurids may also have been specialists in nest raiding, using their robust thumb claws to crack open eggshells.

Classification
Turner et al. (2007) placed the alvarezsaurs as the most basal group in the Maniraptora, one step more derived than Ornitholestes and two more derived than the Ornithomimosauria. The alvarezsaurs are more primitive than the Oviraptorosauria. Novas' 1996 description of Patagonykus demonstrated that it was a link between the more primitive (basal) Alvarezsaurus and the more advanced (derived) Mononykus, and reinforced their monophyly. Parvicursor was discovered shortly after and placed in its own family, Parvicursoridae, and then Shuvuuia in 1998. Everything has since been lumped into Alvarezsauridae, with Mononykinae surviving as a subfamily. There may be a relationship between the alvarezsaurids and the Ornithomimosauria as sister clades, within either Thomas Holtz's Arctometatarsalia or Paul Sereno's Ornithomimiformes. The discovery of Haplocheirus, which exhibits transitional features between the more derived alvarezsaurs and other maniraptorans, particularly in the skull structure and the development of the hand, has provided further support for that relationship. The taxonomy of the alvarezsaurs has been somewhat confused, due to different authors using different names for groups with the same definition. The family Alvarezsauridae was first coined by Jose Bonaparte in 1991, but given no specific phylogenetic definition.
Novas later defined the group as the most recent common ancestor of Alvarezsaurus and Mononykus plus all its descendants, though others, such as Paul Sereno, used a more inclusive definition, such as all dinosaurs closer to Shuvuuia than to modern birds. In 2009, Livezey and Zusi used the name Alvarezsauroidea for the total group of all alvarezsaurs, restricting the name Alvarezsauridae to the clade defined by Alvarezsaurus + Mononykus. This was followed by Choiniere and colleagues in 2010, who described the first non-alvarezsaurid alvarezsauroid, Haplocheirus. Some authors have used the name Mononykinae for the sub-group of alvarezsaurs including the advanced Mongolian species. However, Choiniere and colleagues argued that Parvicursorinae has priority, since its coordinate name under the ICZN Code, Parvicursoridae, was named earlier. Another subfamily, Patagonykinae, has been named to include the South American Patagonykus and Bonapartenykus, but a few recent studies have placed them just outside Alvarezsauridae, and some do not even recover them in a single clade, which would make Patagonykinae paraphyletic. The cladograms accompanying the original article follow a 2012 phylogenetic analysis by Agnolin and colleagues and an analysis by Xu et al., 2011; they are not reproduced here.
Biology and health sciences
Theropods
Animals
1064018
https://en.wikipedia.org/wiki/Confuciusornis
Confuciusornis
Confuciusornis is a genus of basal crow-sized avialan from the Early Cretaceous Yixian and Jiufotang Formations of China, dating from 125 to 120 million years ago. Like modern birds, Confuciusornis had a toothless beak, but closer and later relatives of modern birds, such as Hesperornis and Ichthyornis, were toothed, indicating that the loss of teeth occurred convergently in Confuciusornis and living birds. It was thought to be the oldest known bird to have a beak, though this title now belongs to an earlier relative, Eoconfuciusornis. It was named after the Chinese moral philosopher Confucius (551–479 BC). Confuciusornis is one of the most abundant vertebrates found in the Yixian Formation, and several hundred complete specimens have been found.

History of discovery
In November 1993, the Chinese paleontologists Hou Lianhai and Hu Yaoming of the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) at Beijing visited the fossil collector Zhang He at his home in Jinzhou, where he showed them a fossil bird specimen that he had bought at a local flea market. In December, Hou learned about a second specimen, which had been discovered by a farmer named Yang Yushan. Both specimens were found in the same locality in Shangyuan, Beipiao. In 1995, these two specimens, as well as a third one, were formally described as a new genus and species of bird, Confuciusornis sanctus, by Hou and colleagues. The generic name combines the philosopher Confucius with Greek ὄρνις (ornis), "bird". The specific name means "holy one" in Latin and is a translation of Chinese 圣贤 (shèngxián), "sage", again in reference to Confucius. The first discovered specimen was designated the holotype and catalogued under the specimen number IVPP V10918; it comprises a partial skeleton with skull and parts of the forelimb. Of the other two skeletons, one (paratype IVPP V10895) comprises a complete pelvis and hind limb, and the other (paratype IVPP V10919–10925) a fragmentary hind limb together with six feather impressions attached to both sides of the tibia (shin bone). It was soon noted that the two paratype specimens only comprise bones that are unknown from the holotype, and that this lack of overlap makes their referral to the species speculative. Only the discovery of a great number of well-preserved specimens shortly afterwards confirmed that the specimens indeed represent a single species. Together with the early mammal Zhangheotherium, which was discovered at around the same time, Confuciusornis was considered the most remarkable fossil discovery of the Jehol Biota, which in the following decades would reveal the most important record of Mesozoic birds worldwide. In the late 1990s, Confuciusornis was thought to be both the oldest beaked bird and the earliest bird after Archaeopteryx. It was also considered to be only slightly younger than Archaeopteryx – the Yixian Formation, the rock unit where most Confuciusornis specimens have been found, was thought at the time to be of Late Jurassic (Tithonian) age. Although two bird genera, Sinornis and Cathayornis, had already been described from the Jehol biota in 1992, these were based only on fragmentary remains and stem from the younger Jiufotang Formation, which was considered to be of Early Cretaceous age. Both formations have since been dated to the Lower Cretaceous (Barremian to Aptian stages, 131–120 million years ago).
In 1995, local farmers began digging for fossils near the village of Sihetun, Beipiao, in what would become one of the most productive localities of the Jehol biota. Large-scale professional excavations at this single locality have been carried out by the IVPP from 1997 onwards; recovered fossils include several hundred specimens of Confuciusornis. Many additional sites producing fossils of the Jehol biota have been recognized since, distributed over a large region including Liaoning, Hebei, and Inner Mongolia. Due to the great abundance, preservation, and commercial value of the fossils, excavations by local farmers produced an unusually high number of fossils. Although a portion of these fossils have been added to the collections of Chinese research institutions, more have probably been smuggled out of the country. In 1999, it was estimated that the National Geological Museum of China in Beijing housed nearly 100 specimens of Confuciusornis, and in 2010, the Shandong Tianyu Museum of Nature was reported to possess 536 specimens of the bird. The majority of specimens, however, are held privately and thus are not available for research. At one time, forty individuals were discovered on a surface of about 100 m². This has been explained as the result of entire flocks of birds being simultaneously killed by ash, heat or poisonous gas following the volcanic eruptions that caused the tuff in which the fossils were found to be deposited as lake sediments.

Additional species and synonyms
Since the description of Confuciusornis sanctus, five additional species have been formally named and described. As with many other fossil genera, species are difficult to define, as differences between species often cannot be readily distinguished from variation that occurs within a species. In the case of Confuciusornis, only C. sanctus is universally accepted. Confuciusornis chuonzhous was named by Hou in 1997 based on specimen IVPP V10919, originally a paratype of C. sanctus. The specific name refers to Chuanzhou, an ancient name for Beipiao. C. chuonzhous is now generally considered synonymous with C. sanctus. Confuciusornis suniae, named by Hou in the same 1997 publication, was based on specimen IVPP V11308. The specific name honours madam Sun, the wife of Shikuan Liang, who donated the fossil to the IVPP. C. suniae is now usually considered synonymous with C. sanctus. Confuciusornis dui was named by Hou and colleagues in 1999. The specific name again honours the donating collector, Du Wengya. The holotype specimen (IVPP V11553) is a nearly complete skeleton of an adult that includes a pair of long tail feathers and the impression of the horny beak. A second specimen, the paratype IVPP 11521, is fragmentary and includes some vertebrae and ribs, tail, sternum and pelvis, and femora. According to Hou and colleagues, C. dui was smaller and more gracile than most other Confuciusornis specimens, with the holotype being ca. 15% smaller than the holotype of C. sanctus and ca. 30% smaller than larger individuals of that species. The jaw tips were more pointed than in C. sanctus, and the mandible lacked the underside keel that is distinct in the latter species. Further differences from the type species can be found in the postcranium: the claw on the first digit was not enlarged as in C. sanctus; the sternum was more elongate and differed in anatomical details; and the lower segment of the hind limb (the tarsometatarsus) was shorter than the pygostyle of the tail.
A statistical analysis by Marugán-Lobón and colleagues in 2011 revealed no significant differences from specimens referred to the smallest size class of C. sanctus, suggesting that the supposed differences are individual variations of a single species. However, these authors could not re-locate the C. dui holotype, which is possibly lost, and therefore had to rely on a cast of that specimen for their measurements. A re-study of the C. dui specimens would be required in order to evaluate the validity of the species. Confuciusornis feducciai was named in 2009 by Zhang Fucheng and colleagues, the specific name honouring the ornithologist Alan Feduccia. The holotype, D2454, was discovered at the Sihetun locality and is kept at the Dalian Natural History Museum. According to Zhang and colleagues, C. feducciai differed from other Confuciusornis species in its larger size, skeletal proportions, and a number of morphological features. The forelimb was 15% longer than the hind limb, while they were of equal length in C. sanctus. The upper end of the humerus lacked the large opening (foramen) that is characteristic of other Confuciusornis specimens. The first phalanx of the first digit was more slender. Other differences occur in the furcula, which was V-shaped; the sternum, which was broader than long; and the ischium, which was long compared to the pubis. Marugán-Lobón and colleagues, in 2011, argued that this diagnosis is problematic. The large opening in the humerus, although apparently absent in the left humerus, was clearly present in the right humerus of the holotype. Furthermore, their statistical analysis found the specimen to fall well within the continuum of variation of C. sanctus. These authors therefore proposed that C. feducciai is identical to (a junior synonym of) C. sanctus. Confuciusornis jianchangensis was named in 2010 by Li Li and colleagues, based on specimen PMOL-AB00114 found at Toudaoyingzi. In contrast to most other species, which stem from the Yixian Formation, C. jianchangensis is found in the Jiufotang Formation. In 2002 Hou named the genus Jinzhouornis, but Chiappe et al. (2018) and Wang et al. (2018) showed that this genus is a junior synonym of Confuciusornis, based on morphometry and examination of known confuciusornithiform specimens.

Description
Size
Confuciusornis was about the size of a modern crow, with a total length of and a wingspan of up to . Its body weight has been estimated to have been as much as , or as little as . C. feducciai was about a third longer than average specimens of C. sanctus.

Distinguishing traits
Confuciusornis shows a mix of basal and derived traits. It was more "advanced" or derived than Archaeopteryx in possessing a short tail with a pygostyle (a bone formed from a series of short, fused tail vertebrae) and a bony sternum (breastbone), but more basal or "primitive" than modern birds in retaining large claws on the forelimbs, having a primitive skull with a closed eye-socket, and having a relatively small breastbone. At first the number of basal characteristics was exaggerated: Hou assumed in 1995 that a long tail was present and mistook grooves in the jaw bones for small degenerated teeth.

Skull
The skull morphology of Confuciusornis has been difficult to determine, due to the crushed and deformed nature of the fossils. The skull was nearly triangular in side view, and the toothless beak was robust and pointed. The front of the jaws had deep neurovascular foramina and grooves, associated with the keratinous rhamphotheca (horn-covered beak).
The skull was rather robust, with deep jaws, especially the mandible. The tomial crest of the upper jaw (a bony support for the jaw's cutting edge) was straight for its entire length. The premaxillae (front bones of the upper jaw) were fused together for most of the front half of the snout, but were separated at the tip by a V-shaped notch. The frontal processes that projected hindwards from the premaxillae were thin and extended above the orbits (eye openings), as in modern birds but unlike Archaeopteryx and other primitive birds without pygostyles, where these processes end in front of the orbits. The maxilla (the second large bone of the upper jaw) and premaxilla articulated by an oblique suture, and the maxilla had an extensive palatal shelf. The nasal bone was smaller than in most birds, and had a slender process that was directed down towards the maxilla. The orbit was large, round, and contained sclerotic plates (the bony support inside the eye). A crescent-shaped element that formed the front wall of the orbit may be an ethmoidolacrimal complex similar to that of pigeons, but the identity of these bones is unclear due to bad preservation, and the fact that this region is very variable in modern birds. The external nares (bony nostrils) were nearly triangular and positioned far from the tip of the snout. The borders of the nostrils were formed by the premaxillae above, the maxilla below, and the nasal wall at the back. Few specimens preserve the sutures of the braincase, but one specimen shows that the frontoparietal suture crossed the skull just behind the postorbital process and the hindmost wall of the orbit. This was similar to Archaeopteryx and Enaliornis, whereas in modern birds it curves back and crosses the skull roof much farther behind, making the frontal bone of Confuciusornis small compared to those of modern birds. A prominent supraorbital flange formed the upper border of the orbit, and continued as the postorbital process, which had prominent crests that projected outwards to the sides, forming an expansion of the orbit's rim. The squamosal bone was fully incorporated into the braincase wall, making its exact borders impossible to determine, which is also true for adult modern birds. Various interpretations have been proposed of the morphology and identity of the bones in the temporal region behind the orbits, but the question may not be resolvable with the available fossils. Confuciusornis was considered the first known bird with an ancestral diapsid skull (with two temporal fenestrae on each side of the skull) in the late 1990s, but in 2018, Elzanowski and colleagues concluded that the configuration seen in the temporal region of confuciusornithids was autapomorphic (a unique trait that evolved secondarily rather than having been retained from a primitive condition) for their group. The quadrate bone and the back end of the jugal bar were bound in a complex scaffolding that connected the squamosal bone with the lower end of the postorbital process. This scaffolding consisted of two bony bridges, the temporal bar and the orbitozygomatic junction, which gave the appearance of the temporal opening being divided similarly to diapsid skulls, though this structure is comparable to bridges over the temporal fossa in modern birds. The mandible (lower jaw) is one of the best-preserved parts of the skull. It was robust, especially at the front third of its length. The tomial crest was straight for its entire length, and a notch indented the sharp tip of the mandible.
The mandible was spear-shaped in side view, due to its lower margin slanting downwards and back from its tip for the front third of its length (the jaw was also deepest at a point one third from the tip). The symphyseal part (where the two halves of the lower jaw connected) of the dentary was very robust. The lower margin formed an angle at the level of the front margin of the nasal foramen, which indicates how far back the rhamphotheca of the beak extended. The dentary had three processes that extended backwards into other bones placed further back in the mandible. The articular bone at the back of the mandible was completely fused with the surangular and prearticular bones. The mandible extended hindwards beyond the cotyla (which connected with the condyle of the upper jaw), and this part was therefore similar to a retroarticular process as seen in other taxa. The surangular enclosed two mandibular fenestrae. The hindmost part of the surangular had a small foramen placed in the same position as similar openings in the mandibles of non-bird theropods and modern birds. The splenial bone was three-pronged (as in some modern birds, but unlike the simple splenial of Archaeopteryx), and its lower margin followed the lower margin of the mandible. There was a large rostral mandibular fenestra and a small, rounded caudal fenestra behind it. Though only five specimens preserve parts of the beak's keratinous covering, these show that there would have been differences between species not seen in the skeleton. The holotype of C. dui preserves the outline of an upwards-curving beak which sharply tapers towards its tip, while a C. sanctus specimen (IVPP V12352) has an upper margin that is almost straight, and a tip that appears to be slightly hooked downwards. Two further specimens (STM13-133 and STM13-162) belonging to an indeterminate species were described in 2020; the former suggests that, unlike in modern birds, the beak on both jaws was made up of two separate elements that met at the midline, with feathers growing between them on the upper jaw. Also unlike modern birds, these specimens suggest that the upper beak extended backwards onto the maxilla, as indicated by the presence of foramina.

Postcranial skeleton
The various specimens seem to have a variable number of neck vertebrae, some showing eight, others nine. The first vertebra, the atlas, bore a faint keel on the underside. The next, the axis, had an expanded spinal process on top, and its side was excavated by an elongated groove. The remaining neck vertebrae all had rather low spinal processes. There is no clear evidence of pneumatisation, in the form of internal air spaces, in the vertebral bodies of the neck. The front articulation facets of the neck vertebrae were saddle-shaped. Their undersides were pinched. There were at least twelve back vertebrae. They were amphiplatyan, flat at both ends, and had rather small intervertebral foramina, the spaces between the vertebral body and the neural arch. Their spinal processes were tall and narrow in side view. Their side processes projected horizontally and were deeply excavated at the rear underside. The sides of the back vertebrae also had deep oval excavations. Seven sacral vertebrae were fused into a synsacrum. The front sacral vertebra had a round and concave front articulation facet. The vertebral bodies of the front half of the synsacrum were excavated at their sides, comparable to the back vertebrae. Robust side processes connected the synsacrum to the ilia of the pelvis.
Although earlier descriptions had counted four or five "free", not fused, tail vertebrae, Chiappe et al. in 1999 reported seven of them. These had round and somewhat concave front articulation facets. Their spinal processes were high and transversely compressed. The side processes were robust and stuck out horizontally to the side. Their articulation processes were rather long. The last of these vertebrae had a rectangular profile. Its neural arch had short processes pointing obliquely upwards and sideways. The tail ended in a pygostyle, a complete fusion of the last vertebrae; their number is uncertain. The pygostyle was about 40% longer than the first part of the tail. At its underside the pygostyle bore a well-developed keel, running from front to rear. Its top was incised by a long groove between prominent ridges. Confuciusornis had an exceptionally large humerus (upper arm bone). Near its shoulder end this was equipped with a prominent deltopectoral crest. Characteristically, in Confuciusornis this crista deltopectoralis was pierced by an oval hole, which may have reduced the bone's weight or enlarged the attachment area of the flight muscles. The furcula or wishbone, like that of Archaeopteryx, was a simple curved bar lacking a pointed process at the back, a hypocleidium. The sternum was relatively broad and had a low keel which was raised at the back end. This bony keel may or may not have anchored a larger, cartilaginous keel for enlarged pectoral muscles. The scapulae (shoulder blades) were fused to the strut-like coracoid bones and may have formed a solid base for the attachment of wing muscles. The orientation of the shoulder joint was sideways, instead of angled upward as in modern birds; this means that Confuciusornis was unable to lift its wings high above its back. According to a study by Phil Senter in 2006, the joint was even pointed largely downwards, meaning that the humerus could not be lifted above the horizontal. This would make Confuciusornis incapable of the upstroke required for flapping flight; the same would have been true for Archaeopteryx. The wrist of Confuciusornis shows fusion, forming a carpometacarpus. The second and third metacarpals were also partially fused, but the first was unfused, and the fingers could move freely relative to each other. The second metacarpal, which supported the flight feathers, was very heavily built; its finger carried a small claw. The claw of the first finger, by contrast, was very large and curved. The stub-like third metacarpal, which supported the calami of the feathers, was probably enclosed in the flesh of the hand. The formula of the finger phalanges was 2-3-4-0-0. The pelvis was connected to a sacrum formed by seven sacral vertebrae. The pubis pointed strongly backwards. The left and right ischia were not fused. The femur was straight; the tibia was only slightly longer. The metatarsals of the foot were relatively short and fused to each other and to the lower ankle bones, forming a tarsometatarsus. A rudimentary fifth metatarsal was present. The first metatarsal was attached to the lower shaft of the second and supported a first toe or hallux, pointing to the back. The formula of the toe phalanges was 2-3-4-5-0. The proportions of the toes suggest that they were used for both walking and perching, while the large claws of the thumb and third finger were probably used for climbing.

Feathers and soft tissue
The wing feathers of Confuciusornis were long and modern in appearance.
The primary wing feathers of a 0.5-kilogram individual reached 20.7 centimetres in length. The five longest primary feathers (remiges primarii) were more than times the length of the hand and relatively longer than those of any living bird, while the secondary feathers of the lower arm were rather short by comparison. The outermost primary was much shorter than the second outermost primary, creating a relatively round, broad wing. Its wing shape does not specifically match any particular shape found among living birds. The primary feathers were asymmetrical to varying degrees, and especially so in the outermost primaries. It is unclear whether the upper arm carried tertiaries. Covert feathers are preserved covering the upper part of the wing feathers in some specimens, and some specimens have preserved the contour feathers of the body. Unlike some more advanced birds, Confuciusornis lacked an alula, or "bastard wing". In modern birds this is formed by feathers anchored to the first digit of the hand, but this digit appears to have been free of feathers and independent of the body of the wing in Confuciusornis. According to Dieter Stefan Peters, to compensate for the lack of an alula, the third finger might have formed a separate winglet below the main wing, functioning like the flap of an aircraft. Despite the relatively advanced and long wing feathers, the forearm bones lacked any indication of quill knobs (papillae ulnares), or bony attachment points for the feather ligaments. Many specimens preserve a pair of long, narrow tail feathers, which grew longer than the entire length of the rest of the body. Unlike the feathers of most modern birds, these feathers were not differentiated into a central quill and barbs for most of their length. Rather, most of the feather formed a ribbon-like sheet, about six millimetres wide. Only at the last quarter of the feather, towards the rounded tip, does the feather become differentiated into a central shaft with interlocking barbs. Many individuals of Confuciusornis lacked even these two tail feathers, possibly due to sexual dimorphism. The rest of the tail around the pygostyle was covered in short, non-aerodynamic feather tufts similar to the contour feathers of the body, rather than the familiar feather fan of modern bird tails. Laser fluorescence of two Confuciusornis specimens revealed additional details of their soft-tissue anatomy. The propatagium of Confuciusornis was large, likely relatively thick, and extended from the shoulder to the wrist, as in modern birds; the extent of the postpatagium is also similar to that of modern birds. Reticulate scales covered the underside of the foot, and the phalanges and metatarsals supported large, fleshy pads, although the interphalangeal pads were either small or entirely absent.

Plumage pattern
In early 2010, a group of scientists led by Zhang Fucheng examined fossils with preserved melanosomes (pigment-containing organelles). By studying such fossils with an electron microscope, they found melanosomes preserved in a fossil Confuciusornis specimen, IVPP V13171. They reported that the melanosomes were of two types: eumelanosomes and pheomelanosomes. This indicated that Confuciusornis had hues of grey, red/brown and black, possibly something like the modern zebra finch. It was also the first time an early bird fossil had been shown to contain preserved pheomelanosomes. However, a second research team failed to find these reported traces of pheomelanosomes.
Their 2011 study also found a link between the presence of certain metals, like copper, and preserved melanin. Using a combination of fossil impressions of melanosomes and the presence of metals in the feathers, the second team of scientists reconstructed Confuciusornis with darkly colored body feathers and upper wing feathers, but found no trace of either melanosomes or metals in the majority of the wing feathers. They suggested that the wings of Confuciusornis would have been white or, possibly, colored with carotenoid pigments. The long tail feathers of male specimens would also have been dark in color along their entire length. A 2018 study of the specimen CUGB P1401 indicated the presence of heavy spotting on the wings, throat, and crest of Confuciusornis.

Classification
Hou assigned Confuciusornis to the Confuciusornithidae in 1995. At first he assumed it was a member of the Enantiornithes and the sister taxon of Gobipteryx. Later he understood that Confuciusornis was not an enantiornithine, but concluded it was the sister taxon of the Enantiornithes within a larger Sauriurae. This was heavily criticised by Chiappe, who regarded Sauriurae as paraphyletic, as there were insufficient shared traits indicating that the Confuciusornithidae and the Enantiornithes were closely related. In 2001, Ji Qiang suggested an alternative position as the sister taxon of the Ornithothoraces. In 2002 Ji's hypothesis was confirmed by a cladistic analysis by Chiappe, who defined a new group, the Pygostylia, of which Confuciusornis is by definition the most basal member. Several traits of Confuciusornis show its position in bird evolution; it has a more "primitive" skull than Archaeopteryx, but it is the first known bird to have lost the long tail of Archaeopteryx and developed fused tail vertebrae, a pygostyle. One controversial study concluded that Confuciusornis may be more closely related to Microraptor and other dromaeosaurids than to Archaeopteryx, but this study was criticized on methodological grounds. The present standard interpretation of the phylogenetic position of Confuciusornis is shown in a cladogram in the original article, not reproduced here. A close relative, the confuciusornithid Changchengornis hengdaoziensis, also lived in the Yixian Formation. Changchengornis also possessed the paired long tail feathers, as did several more advanced enantiornithine birds. True, mobile tail fans only appeared in ornithuromorph birds, and possibly in the enantiornithine Shanweiniao.

Paleobiology
The large, fleshy phalangeal foot pads, small interphalangeal foot pads, presence of only reticulate scales on the underside of the foot (which increases flexibility), and curved foot claws of Confuciusornis are all traits shared with modern tree-dwelling, perching birds, suggesting that Confuciusornis may have had a similar lifestyle. Comparisons between the scleral rings supporting the eyes of Confuciusornis and those of modern birds and other reptiles indicate that it may have been diurnal, like most modern birds.

Flight
Confuciusornis has traditionally been assumed to have been a competent flier based on its extremely long wings with strongly asymmetrical feathers. Other adaptations for improved flight capabilities include a fused wrist, a short tail, an ossified sternum with a central keel, a strut-like coracoid, a large deltopectoral crest, a strong ulna (forearm bone) and an enlarged second metacarpal.
The sternal keel and deltopectoral crest (which provides a more powerful upstroke) are adaptations for flapping flight in modern birds, indicating that Confuciusornis may have been capable of the same. However, it may have had a different flight stroke due to being incapable of rotating its arm behind the body, and its relatively smaller sternal keel indicates that it likely was not capable of flight for extended periods of time. Several claims have been made against the flight capabilities of Confuciusornis. The first of these concerned problems in attaining a steep flight path due to a limited wing amplitude. In Senter's interpretation of the position of the shoulder joint, a normal upstroke would be impossible, precluding flapping flight entirely. Less radical is the assessment that, due to the lack of a keeled sternum and a high acrocoracoid, the musculus pectoralis minor could not serve as a M. supracoracoideus lifting the humerus via a tendon running through a triosseal canal. This, coupled with a limited upstroke caused by a lateral position of the shoulder joint, would have made it difficult to gain altitude. Some authors therefore proposed that Confuciusornis used its large thumb claws to climb tree trunks. Martin assumed that it could raise its torso almost vertically like a squirrel. Daniel Hembree, however, while acknowledging that tree climbing was likely, pointed out that the rump was apparently not lifted more than 25° relative to the femur in vertical position, as shown by the location of the antitrochanter in the hip joint. Dieter S. Peters considered it very unlikely that Confuciusornis climbed trunks, as turning the thumb claw inwards would stretch the very long wing forwards, right in the path of obstructing branches. Peters saw Confuciusornis as capable of flapping flight but specialised in soaring flight. The strength of the feathers is also a point of controversy. In 2010, Robert Nudds and Gareth Dyke published a study arguing that in both Confuciusornis and Archaeopteryx, the rachises (central shafts) of the primary feathers were too thin and weak to have remained rigid during the power stroke required for true flight. They argued that Confuciusornis would at most have employed gliding flight, which is also consistent with the unusual adaptations seen in its upper arm bones, and more likely used its wings for mere parachuting, limiting fall speed if it dropped from a tree. Gregory S. Paul, however, disagreed with their study. He argued that Nudds and Dyke had overestimated the weights of these early birds, and that more accurate weight estimates allowed powered flight even with relatively narrow rachises. Nudds and Dyke assumed a weight of for Confuciusornis, as heavy as the modern teal. Paul argued that a more reasonable body weight estimate is about , less than that of a pigeon. Paul also noted that Confuciusornis is commonly found in large assemblages in lake-bottom sediments with little to no evidence of extensive postmortem transport, and that it would be highly unusual for gliding animals to be found in such large numbers in deep water. Rather, this evidence suggests that Confuciusornis traveled in large flocks over the lake surfaces, a habitat consistent with a flying animal. A number of researchers have questioned the correctness of the rachis measurements, stating that the specimens they had studied showed a shaft thickness of , compared to as reported by Nudds and Dyke.
Nudds and Dyke replied that, apart from the weight aspect, such a greater shaft thickness alone would make flapping flight possible; however, they allowed for the possibility that two species with differing rachis diameters are present in the Chinese fossil material. In 2016, Falk et al. argued in favor of flight capabilities for Confuciusornis, using evidence from laser fluorescence of two soft-tissue-preserving specimens. They found that, contrary to Nudds and Dyke's assertions, the rachises of Confuciusornis were relatively robust, with a maximum width of over . The wing shape is consistent with either birds that live in dense forests or gliding birds; the former is consistent with its environment being densely forested, and requiring more maneuverability and stability than speed. The substantial propatagium would have produced a generous amount of lift, while the likewise large postpatagium would have provided a large attachment area for the calami of the feathers, which would have kept them aligned as a straight airfoil. Collectively, this strongly indicates that Confuciusornis was capable of powered flight, if only for short periods of time.

Tail feathers
Many specimens of Confuciusornis preserve a single pair of long, streamer-like tail feathers, similar to those present in some modern birds-of-paradise. Specimens lacking these feathers include ones that otherwise have exquisitely preserved feathers on the rest of the body, indicating that their absence is not simply due to poor preservation. Larry Martin and colleagues stated in 1998 that long tail feathers are present in about 5 to 10% of the specimens known at the time. A 2011 analysis by Jesús Marugán-Lobón and colleagues found that out of 130 specimens, 18% had long tail feathers and 28% did not, while in the remaining 54% preservation was insufficient to determine their presence or absence. The biological meaning of this pattern has been a matter of controversy. Martin and colleagues suggested that the pattern might reflect sexual dimorphism, with the streamer-like feathers only present in one sex (likely the males), which used them in courtship displays. This interpretation was followed by the majority of subsequent studies. Chiappe and colleagues, in 1999, argued that sexual dimorphism is not the only possible but the most reasonable explanation, noting that in modern birds the length of ornamental feathers often varies between the sexes. Controversy arose from the observation that the known specimens of Confuciusornis can be divided into a small-sized and a large-sized group, but that this bimodal distribution is unrelated to the possession of long tail feathers. Chiappe and colleagues argued in 2008 that this size distribution can be explained by a dinosaur-like mode of growth (see the section Growth), and maintained that sexual dimorphism is the most likely explanation for the presence and absence of long tail feathers. Winfried and Dieter Peters, however, responded in 2009 that both sexes likely had long tail feathers, as is the case in most modern birds that show similar feathers. One of the sexes, however, would have been larger than the other (sexual size dimorphism). These researchers further suggested that the distribution of size and long tail feathers in Confuciusornis was similar to that of the modern pheasant-tailed jacana (Hydrophasianus chirurgus), a water-bird in which the female is the larger sex and adult individuals of both sexes have long tails, but only during the breeding season.
Confuciusornis differs from the jacanas in that long tail feathers are present in specimens of all sizes, even in some of the smallest known specimens. This suggests that the long tail feathers might not have had a function in reproduction at all. Several alternative hypotheses explaining the frequent absence of long tail feathers have been proposed. In their 1999 study, Chiappe and colleagues discussed the possibility that individuals might lack tail feathers because they died during molting. Although direct evidence for molting in early birds is missing, the lack of feather abrasion in Confuciusornis specimens suggests that the plumage was periodically renewed. As in modern birds, molting individuals may have been present alongside non-molting individuals, and males and females may have molted at different times during the year, possibly explaining the co-occurrence of specimens with and without long tail feathers. Peters and Peters, on the other hand, suggested that Confuciusornis may have shed the feathers as a defense mechanism, a strategy used by several extant species. Such shedding would have been triggered by stress induced by the very volcanic explosions that buried the animals, resulting in a large number of specimens lacking these feathers. In a 2011 paper, Jesús Marugán-Lobón and colleagues stated that even the presence of two separate species, one with and one without long tail feathers, needs to be considered. This possibility is, however, unsubstantiated at present, as other anatomical differences between these possible species are not apparent.
Reproduction
In 2007, Gary Kaiser mentioned a Confuciusornis skeleton preserving an egg near its right foot – the first possible egg referable to the genus. The skeleton is of the short-tailed form and thus might represent a female. The egg might have fallen out of the body after the death of the presumed female, although it cannot be excluded that this association of an adult with an egg was merely coincidental. The egg is roundish in shape and measures 17 mm in diameter, slightly smaller than the head of the animal; according to Kaiser, it would have fit precisely through the pelvic canal of the bird. In dinosaurs and Mesozoic birds, the width of the pelvic canal was restricted due to the connection of the lower ends of the pubic bones, resulting in a V-shaped bony aperture through which eggs had to fit. In modern birds, this connection of the pubic bones is lost, presumably allowing for larger eggs. In a 2010 paper, Gareth Dyke and Kaiser showed that the breadth of the Confuciusornis egg was indeed smaller than what would be expected for a modern bird of similar size. In a 2016 book, Luis Chiappe and Meng Qingjin stated that the aperture of a large specimen (DNHM-D 2454) indicates a maximum egg diameter of . In modern birds, proportionally large eggs are commonly found in species whose hatchlings fully depend on their parents (altriciality), while smaller eggs are often found in species whose hatchlings are more developed and independent (precociality). As the estimated egg of the specimen would have been around 30% smaller than expected for a modern altricial bird, it is likely that Confuciusornis was precocial. A 2018 study by Charles Deeming and Gerald Mayr measured the size of the pelvic canal of various Mesozoic birds including Confuciusornis to estimate egg size, concluding that eggs would have been small in proportion to body mass in Mesozoic birds in general.
These researchers further posited that avian-style contact incubation (sitting on the eggs for breeding) was not possible for non-avian dinosaurs and Mesozoic birds, including Confuciusornis, as these animals would have been too heavy in relation to the size of their eggs. Kaiser, in 2007, argued that Confuciusornis likely did not brood in an open nest but might have used crevices in trees for protection, and that the small size of the only known egg indicates large clutch sizes. In contrast, a 2016 review by David Varricchio and Frankie Jackson argued that nesting above the ground evolved only at a much later stage, within Neornithes, and that Mesozoic birds would have buried their eggs in the ground, either fully or partially, as seen in non-avian dinosaurs.
Growth
Growth can be reconstructed based on the inner bone structure. The first such study on Confuciusornis, presented by Fucheng Zhang and colleagues in 1998, used scanning electron microscopy to analyze a femur in cross section. Because the bone was well vascularized (contained many blood vessels) and showed only a single line of arrested growth (growth ring), these authors determined that growth must have been fast and continuous as in modern birds, and that Confuciusornis must have been endothermic. Zhang and colleagues corroborated this claim in a subsequent paper, stating that the bone structure was unlike that of a modern ectothermic alligator but similar to that of the feathered non-avian dinosaur Beipiaosaurus. However, these authors assumed that endothermy in Confuciusornis had evolved independently from that seen in modern birds. This concurred with earlier work by Anusuya Chinsamy and colleagues, who described distinct lines of arrested growth and low vascularity in other Mesozoic birds that are more derived than Confuciusornis. Both features indicate slow growth, which, according to Chinsamy and colleagues, suggests low metabolic rates. Full endothermy, therefore, would have evolved late on the evolutionary line leading to modern birds. This view was contested by subsequent studies, which pointed out that slow-growing bone is not necessarily an indicator of low metabolic rates, and in the case of Mesozoic birds was rather a result of the decrease in body size that characterized the early evolution of birds. A more comprehensive study based on thin sectioning of bones was published by Armand de Ricqlès and colleagues in 2003. Based on 80 thin sections taken from an adult Confuciusornis specimen, this study confirmed the high growth rates proposed by Zhang and colleagues. The fast-growing fibrolamellar bone tissue was similar to that seen in non-avian theropods, and the sampled individual probably reached adult size in much less than 20 weeks. Small body size was not primarily achieved by slowing growth but by shortening the period of rapid growth. The growth rate estimated for Confuciusornis is still lower than the extremely fast growth characteristic of modern birds, which reach adult size in 6–8 weeks, suggesting that growth was secondarily accelerated later in avian evolution. In 2008 Chiappe and colleagues conducted a statistical analysis based on 106 specimens to explore the relationship between body size and the possession of long tail feathers. The population showed a clear bimodal size distribution with two distinct weight classes. However, there was no correlation between size and the possession of the long tail feathers.
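A bimodal size distribution of the kind Chiappe and colleagues reported is conventionally assessed by fitting mixture models to the size data and comparing the fits. A minimal sketch follows, using synthetic placeholder data rather than the 106 real specimens:

```python
# Fit 1- and 2-component Gaussian mixtures to (log) body-size data and
# compare BIC scores; a clearly lower BIC for k=2 supports bimodality.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical log10 body masses for two overlapping size classes.
sizes = np.concatenate([rng.normal(2.0, 0.08, 60),
                        rng.normal(2.3, 0.08, 46)]).reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(sizes)
    print(f"{k} component(s): BIC = {gm.bic(sizes):.1f}")
```

Whether two such size classes represent sexes, species, or growth stages is exactly what the debate below concerns.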
From these findings it was concluded that either the sexes did not differ in size or both sexes had the long feathers. The first case was deemed most likely, which left the size distribution to be explained. It was hypothesized that the smaller animals were very young individuals, that the large animals were adults, and that the rarity of individuals of intermediate size was caused by Confuciusornis experiencing a growth spurt just prior to reaching adulthood, the shortness of which would have prevented many from becoming fossilized during this phase. This initially slow growth followed by a growth spurt would have resulted in an S-shaped growth curve, similar to that inferred for non-avian dinosaurs. Such an extended, dinosaurian mode of growth conflicts with the earlier histological findings of de Ricqlès that suggested a much shorter, avian-style growth. Alternatively, the observed size distribution might also be explained by the presence of more than one species, although there are no anatomical features that could be correlated with these potential species. It could also be explained by assuming an attritional death assemblage, in which mortality rates (and thus the number of preserved fossils) are highest in young and in very old individuals. The idea of a dinosaur-like mode of growth was criticized by Winfried and Dieter Peters in 2008, who argued that the body size of the smaller size class was too large to have represented the youngest growth stage. Analyzing an extended data set, these researchers identified a third size class that supposedly represented this youngest growth stage. As it would be highly unlikely that Confuciusornis showed two distinct growth spurts, a feature unseen in known amniotes, they concluded that the two larger size classes represented the two sexes rather than growth stages (sexual size dimorphism). The long tail feathers would have occurred in both sexes, one of which was the larger. This interpretation is consistent with an avian-style mode of growth, as suggested by the earlier histological studies. It is also consistent with comparisons to modern birds, in which the presence of long tail feathers is typically unrelated to sex. The absence of long tail feathers in many specimens was suggested to be the result of stress-induced shedding prior to death. Chiappe and colleagues defended their findings in a 2010 comment, arguing that the assumed short, avian-like growth period is unlikely. The calculation presented by de Ricqlès in 2003 of a growth phase of less than 20 weeks was based on the assumption that bone diameters grew by 10 μm per day, which is a subjective assumption. Rather, histology reveals the presence of different tissue types in the bone that grew at different rates, as well as pauses in growth as indicated by the lines of arrested growth. Thus, growth periods must have been longer than in modern birds and likely took several years, as is true for the modern kiwi. The observed size distribution can, therefore, be feasibly explained by assuming a dinosaurian style of growth. In an invited reply in 2010, Peters and Peters stated that Chiappe and colleagues did not comment on their main argument: the gap in body size between the smaller size class and inferred hatchlings, which spans an order of magnitude and would be most consistent with sexual size dimorphism.
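The sensitivity of de Ricqlès's estimate to the assumed apposition rate is easy to demonstrate with back-of-envelope arithmetic. In the sketch below only the 10 μm per day rate comes from the text; the cortical radii are hypothetical placeholders, and halving or doubling any input scales the answer proportionally, which is why the assumption was criticized as subjective.

```python
# Time to adult size under a constant radial bone-apposition rate.
APPOSITION_RATE_UM_PER_DAY = 10.0   # the assumption criticized by Chiappe and colleagues
adult_radius_um = 1500.0            # hypothetical adult cortical radius (1.5 mm)
hatchling_radius_um = 200.0         # hypothetical hatchling cortical radius

days = (adult_radius_um - hatchling_radius_um) / APPOSITION_RATE_UM_PER_DAY
print(f"~{days:.0f} days to adult size, i.e. ~{days / 7:.0f} weeks")  # ~130 days, ~19 weeks
```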
Marugán-Lobón and colleagues studied the relationships between the presence and absence of long tail feathers and the lengths of various long bones of the arms and limbs, using a further enlarged sample of 130 specimens. While confirming that the tail feathers are unrelated to body size, their presence corresponds to different proportions of the forelimb relative to the hind limb. The authors concluded that the meaning of the observed distributions of both the tail feathers and the body size remains contentious. Chiappe and colleagues, in their 2008 study, concluded that limb bone growth was almost isometric, meaning that skeletal proportions did not change during growth. This was contested by Peters and Peters in 2009, who observed that wing bones tended to be proportionally longer in very small individuals, as seen in modern chickens, and thus grew allometrically. Chiappe and colleagues, in their 2010 comment, responded that proportional variation is present across the whole size range, and that the presence of allometry was not conclusively demonstrated by the analyses presented by Peters and Peters.
Possible medullary bone
A 2013 histological study by Anusuya Chinsamy and colleagues found medullary bone within the long bones of a short-tailed specimen (DNHM-D1874), while three long-tailed specimens lacked medullary bone. In modern birds, medullary bone forms only temporarily in females, where it functions as a calcium reservoir for eggshell production. Therefore, these authors suggested short-tailed specimens to be females, and long-tailed specimens to be males. The female specimen had already passed its rapid growth phase, although it was still significantly smaller than the maximum size reached by Confuciusornis specimens. At least two lines of arrested growth (growth lines that form annually) could be identified, demonstrating extended growth over several years; the studied female would have been in its third year. Long tail feathers were confirmed to occur in small individuals, the smallest of which was only around 23% the mass of the largest specimens. Assuming that the occurrence of tail feathers indicates sexual maturity, the authors concluded that the latter must have been reached well before the animals attained their final size, unlike in modern birds but similar to non-avian dinosaurs. In a 2018 study, Jingmai O'Connor and colleagues questioned the identification of medullary bone, arguing that the purported medullary bone was only found in the forelimb, while in modern birds it is mostly present in the hind limb. Furthermore, the tissue in question is merely preserved as small fragments, rendering its interpretation difficult. However, the authors were able to identify medullary bone in the hind limb of an enantiornithine, a member of a more derived group of Mesozoic birds. As is the case with the Confuciusornis specimen, this supposed female had not reached its final size, supporting the dinosaur-like mode of growth in basal birds that was inferred by the earlier studies.
Diet
In 1999, Chinese paleontologist Lianhai Hou and colleagues suggested that Confuciusornis was likely herbivorous, though no stomach contents were yet known, pointing out that the beak curved upwards and was not raptorial. Paleontologists Dieter S. Peters and Ji Qiang hypothesized in 1999 that, although no remains of toe webs have been preserved, it caught its prey while swimming, using its rather soft bill to search below the waterline.
Several extant bird species have been presented as modern analogues of Confuciusornis, providing insight into its possible lifestyle. Peters thought that it could be best compared with the white-tailed tropicbird (Phaethon lepturus), a fish-eater that likewise has a long tail and narrow wings—and even often nests in the neighbourhood of volcanoes. Polish paleontologist Andrzej Elżanowski, in 2002, found it unlikely that a long-winged and short-legged bird like Confuciusornis would forage in tree crowns, and instead proposed that it foraged on the wing, seizing prey from the water or ground surface. Indications for this included the combination of long wings that appear adapted for soaring; leg proportions (long femur, short foot) similar to those of frigatebirds and kingfishers; an occipital foramen that opened at the back; a toothless beak similar to but shorter than that of kookaburras; and the absence of specializations for swimming. He conceded that Confuciusornis may have been able to swim, as it possibly foraged over water. In 2003 Chinese paleontologists Zhonghe Zhou and Fucheng Zhang stated that though nothing was known about its diet, its robust and toothless jaws suggested it could have fed on seeds, and noted that Jeholornis preserved direct evidence of such a diet. In 2006, Johan Dalsätt and colleagues described a C. sanctus specimen (IVPP V13313) from the Jiufotang Beds which preserves seven to nine vertebrae and several ribs of a small fish, probably Jinanichthys. These fish bones form a tight cluster about across, and the cluster is in contact with the seventh and eighth cervical vertebrae of the bird. The condition of the fish indicates it was about to be regurgitated as a pellet, or that it was stored in the crop. No other fish remains are present in the slab. Though it is unknown how common fish were in the diet of Confuciusornis, the finding did not support a herbivorous diet, and the researchers pointed out that no specimens have been found with gastroliths (stomach stones), which are swallowed by birds to help digest plant fibers. Instead, they suggested it would have been omnivorous, similar to, for example, crows. Andrei Zinoviev assumed it caught fish on the wing. The skull was relatively immobile, incapable of the kinesis of modern birds that can raise the snout relative to the back of the skull. This immobility was caused by the presence of a triradiate postorbital separating the eye socket from the lower temporal opening, as in more basal theropod dinosaurs, and by the premaxillae of the snout reaching all the way to the frontals, forcing the nasals to the sides of the snout.
Paleoenvironment and paleoecology
Confuciusornis was discovered in the Yixian and Jiufotang Formations and is a member of the Jehol Biota. Tuff makes up a considerable amount of the rock composition in both due to frequent volcanic eruptions, which were slightly more frequent in the Yixian Formation. Shale and mudstone are also major components of the formations. The tuff has allowed detailed dating of the formations using the 40Ar/39Ar method. This results in an age of approximately 125 to 120 million years ago for the Yixian Formation and approximately 120.3 million years ago for the Jiufotang Formation. The fossils were buried as a result of flooding and volcanic debris. This mode of preservation resulted in fossils that are very flat, almost two-dimensional. The volcanic strata have allowed the preservation of various soft tissues, such as detailed feather impressions.
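For context, 40Ar/39Ar dates like those quoted above rest on the standard age equation t = (1/λ) ln(1 + J·R), where R is the measured radiogenic 40Ar*/39Ar ratio and J is a neutron-irradiation parameter calibrated against a standard of known age. The sketch below only illustrates the arithmetic; the J and R values are invented placeholders chosen to land near the Yixian age, not published measurements.

```python
import math

LAMBDA_40K = 5.543e-10        # total decay constant of 40K, per year

def ar_ar_age(ratio_40_39: float, j: float) -> float:
    """Age in years from the radiogenic 40Ar*/39Ar ratio and irradiation parameter J."""
    return math.log(1.0 + j * ratio_40_39) / LAMBDA_40K

age = ar_ar_age(ratio_40_39=7.17, j=0.01)     # hypothetical inputs
print(f"~{age / 1e6:.1f} Ma")                 # ~125 Ma
```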
Using oxygen isotopes in reptile bones found in the formation, a 2010 study determined that many formations from East Asia, including the Yixian, were deposited under a cool temperate climate. The mean air temperature of the Yixian Formation was estimated at 10 °C ± 4 °C. Fossils of Xenoxylon, a type of wood known from temperate areas of the time, have been found throughout the region. Additionally, heat-dependent reptiles, such as crocodilians, are absent. The majority of the Jehol flora has been discovered in the lower Yixian Formation. This flora includes most groups of Mesozoic plants, including mosses, clubmosses, horsetails, ferns, seed ferns, Czekanowskiales, ginkgo trees, cycadeoids, Gnetales, conifers, and a small number of flowering plants. Fauna present in the Jehol Biota include ostracods, gastropods, bivalves, insects, fish, salamanders, mammals, lizards, choristoderes, pterosaurs, and dinosaurs (including birds). These fossils are exceptionally well preserved, with dinosaur fossils frequently preserving filaments and feather impressions and sometimes even pigmentation, such as in Microraptor (an aerial predator of the Jiufotang Formation), Psittacosaurus (a small ceratopsian with a wide distribution throughout both formations), and Sinosauropteryx (a compsognathid and one of the first dinosaurs recovered from the Yixian). Other feathered dinosaurs of the Jehol Biota include the large compsognathid Sinocalliopteryx gigas, a specimen of which was discovered with Confuciusornis bones in its abdominal contents, the small herbivorous oviraptorosaur Caudipteryx, and the large tyrannosauroid Yutyrannus, all from the Yixian Formation. Jehol birds are represented by more than 20 genera, including basal avialans (such as Confuciusornis, Jeholornis, and Sapeornis), more derived enantiornithines (such as Eoenantiornis, Longirostravis, Sinornis, Boluochia, and Longipteryx), and even further derived ornithurines (such as Liaoningornis, Yixianornis, and Yanornis).
Biology and health sciences
Prehistoric birds
Animals
1064031
https://en.wikipedia.org/wiki/Protoceratops
Protoceratops
Protoceratops is a genus of small protoceratopsid dinosaurs that lived in Asia during the Late Cretaceous, around 75 to 71 million years ago. The genus Protoceratops includes two species: P. andrewsi and the larger P. hellenikorhinus. The former was described in 1923 on the basis of fossils from the Mongolian Djadokhta Formation, and the latter in 2001 on the basis of fossils from the Chinese Bayan Mandahu Formation. Protoceratops was initially believed to be an ancestor of ankylosaurians and of larger ceratopsians, such as Triceratops and relatives, until the discoveries of other protoceratopsids. Populations of P. andrewsi may have evolved into Bagaceratops through anagenesis. Protoceratops were small ceratopsians, up to long and around in body mass. While adults were largely quadrupedal, juveniles had the capacity to walk around bipedally if necessary. They were characterized by a proportionally large skull, a short and stiff neck, and a neck frill. The frill was likely used for display or intraspecific combat, as well as protection of the neck and anchoring of jaw muscles. A horn-like structure was present over the nose, which varied from a single structure in P. andrewsi to a double, paired structure in P. hellenikorhinus. The "horn" and frill were highly variable in shape and size across individuals of the same species, but there is no evidence of sexual dimorphism. They had a prominent parrot-like beak at the tip of the jaws. P. andrewsi had a pair of cylindrical, blunt teeth near the tip of the upper jaw. The forelimbs had five fingers, of which only the first three bore wide and flat unguals. The feet were wide and had four toes with flattened, shovel-like unguals, which would have been useful for digging through the sand. The hindlimbs were longer than the forelimbs. The tail was long and had an enigmatic sail-like structure, which may have been used for display, swimming, or metabolic reasons. Protoceratops, like many other ceratopsians, were herbivores equipped with prominent jaws and teeth suited for chopping foliage and other plant material. They are thought to have lived in highly sociable groups of mixed ages. They appear to have cared for their young. They laid soft-shelled eggs, a rare occurrence in dinosaurs. During maturation, the skull and neck frill underwent rapid growth. Protoceratops were hunted by Velociraptor, and one particularly famous specimen (the Fighting Dinosaurs) preserves a pair of them locked in combat. Protoceratops used to be characterized as nocturnal because of the large sclerotic ring around the eye, but it is now thought to have been cathemeral (active intermittently throughout the day and night).
History of discovery
In 1900 Henry Fairfield Osborn suggested that Central Asia may have been the center of origin of most animal species, including humans, which caught the attention of explorer and zoologist Roy Chapman Andrews. This idea later gave rise to the First (1916 to 1917), Second (1919) and Third (1921 to 1930) Central Asiatic Expeditions to China and Mongolia, organized by the American Museum of Natural History under the direction of Osborn and the field leadership of Andrews. The team of the third expedition arrived in Beijing in 1921 for the final preparations and started working in the field in 1922. During late 1922 the expedition explored the famous Flaming Cliffs of the Shabarakh Usu region of the Djadokhta Formation, Gobi Desert, now known as the Bayn Dzak region. On 2 September, the photographer James B.
Shackelford discovered a partial juvenile skull—which would become the holotype specimen (AMNH 6251) of Protoceratops—in reddish sandstones. It was subsequently analyzed by the paleontologist Walter W. Granger, who identified it as reptilian. On 21 September, the expedition returned to Beijing, and even though it had been set up to look for remains of human ancestors, the team collected numerous dinosaur fossils and thus provided insights into the rich fossil record of Asia. Back in Beijing, the skull Shackelford had found was sent to the American Museum of Natural History for further study, after which Osborn reached out to Andrews and team via cable, notifying them of the importance of the specimen. In 1923 the expedition again prospected the Flaming Cliffs, this time discovering even more specimens of Protoceratops and also the first remains of Oviraptor, Saurornithoides and Velociraptor. Most notably, the team discovered the first fossilized dinosaur eggs near the holotype of Oviraptor, and given how abundant Protoceratops was, the nest was attributed to this taxon. This would later result in the interpretation of Oviraptor as an egg thief. In the same year, Granger and William K. Gregory formally described the new genus and species Protoceratops andrewsi based on the holotype skull. The specific name, andrewsi, is in honor of Andrews for his prominent leadership during the expeditions. They identified Protoceratops as an ornithischian dinosaur closely related to ceratopsians, representing a possible common ancestor of ankylosaurs and ceratopsians. Since Protoceratops was more primitive than any other ceratopsian known at that time, Granger and Gregory coined the new family Protoceratopsidae, mostly characterized by the lack of horns. The co-authors also agreed with Osborn that Asia, if more thoroughly explored, could fill many major evolutionary gaps in the fossil record. Although not stated in the original description, the generic name, Protoceratops, is intended to mean "first horned face", as it was believed that Protoceratops represented an early ancestor of ceratopsids. Other researchers immediately noted the importance of the Protoceratops finds, and the genus was hailed as the "long-sought ancestor of Triceratops". Most fossils were in an excellent state of preservation, with even sclerotic rings (delicate ocular bones) preserved in some specimens, quickly making Protoceratops one of the best-known dinosaurs from Asia. After spending much of 1924 making plans for the next fieldwork seasons, in 1925 Andrews and team explored the Flaming Cliffs yet again. During this year more eggs and nests were collected, alongside well-preserved and complete specimens of Protoceratops. By this time, Protoceratops had become one of the most abundant dinosaurs of the region, with more than 100 specimens known, including skulls and skeletons of multiple individuals at different growth stages. Though more remains of Protoceratops were collected in later years of the expeditions, they were most abundant in the 1922 to 1925 seasons. Gregory and Charles C. Mook published another description of Protoceratops in 1925, discussing its anatomy and relationships. Thanks to the large collection of skulls found during the expeditions, they concluded that Protoceratops represented a ceratopsian more primitive than ceratopsids and not an ankylosaur-ceratopsian ancestor. In 1940, Barnum Brown and Erich Maren Schlaikjer described the anatomy of P.
andrewsi in extensive detail using newly prepared specimens from the Asiatic expeditions. In 1963, the Mongolian paleontologist Demberelyin Dashzeveg reported the discovery of a new fossiliferous locality of the Djadokhta Formation: Tugriken Shireh. Like the neighbouring Bayn Dzak, this new locality contained an abundance of Protoceratops fossils. During the 1960s to 1970s, Polish-Mongolian and Russian-Mongolian paleontological expeditions collected new, partial to complete specimens of Protoceratops at this locality, making this dinosaur a common occurrence at Tugriken Shireh. Since its discovery, the Tugriken Shireh locality has yielded some of the most significant specimens of Protoceratops, such as the Fighting Dinosaurs, in situ individuals (a preservation condition also described as "standing" individuals or specimens), authentic nests, and small herd-like groups. Specimens from this locality are usually found in articulation, suggesting possible mass mortality events. Stephan N. F. Spiekman and colleagues reported a partial P. andrewsi skull (RGM 818207) in the collections of the Naturalis Biodiversity Center, Netherlands, in 2015. Since Protoceratops fossils are only found in the Gobi Desert of Mongolia and this specimen was likely discovered during the Central Asiatic Expeditions, the team concluded that this skull was probably acquired by Delft University between 1940 and 1972 as part of a collection transfer.
Species and synonyms
Protoceratopsid remains were recovered in the 1970s from the Khulsan locality of the Barun Goyot Formation, Mongolia, during the work of several Polish-Mongolian paleontological expeditions. In 1975, Polish paleontologists Teresa Maryańska and Halszka Osmólska described a second species of Protoceratops, which they named P. kozlowskii. This new species was based on the Khulsan material, mostly consisting of juvenile skull specimens. The specific name, kozlowskii, is in tribute to the Polish paleontologist Roman Kozłowski. They also named the new genus and species of protoceratopsid Bagaceratops rozhdestvenskyi, known from specimens of the nearby Hermiin Tsav locality. In 1990 the Russian paleontologist Sergei Mikhailovich Kurzanov referred additional material from Hermiin Tsav to P. kozlowskii. However, noting enough differences between P. andrewsi and P. kozlowskii, he erected the new genus and combination Breviceratops kozlowskii. Though Breviceratops has been regarded as a synonym and juvenile stage of Bagaceratops, Łukasz Czepiński in 2019 concluded that the former has enough anatomical differences to be considered a separate taxon. In 2001 Oliver Lambert with colleagues named a new and distinct species of Protoceratops, P. hellenikorhinus. The first known remains of P. hellenikorhinus were collected from the Bayan Mandahu locality of the Bayan Mandahu Formation, Inner Mongolia, in 1995 and 1996 during Sino-Belgian paleontological expeditions. The holotype (IMM 95BM1/1) and paratype (IMM 96BM1/4) specimens consist of large skulls lacking body remains. The holotype skull was found facing upwards, a pose that has also been reported in Protoceratops specimens from Tugriken Shireh. The specific name, hellenikorhinus, is derived from Greek hellenikos (meaning Greek) and rhis (meaning nose) in reference to its broad and angular snout, which is reminiscent of the straight profiles of Greek sculptures. In 2017 abundant protoceratopsid material was reported from Alxa near Bayan Mandahu, and it may be referable to P.
hellenikorhinus. Viktor Tereshchenko and Vladimir R. Alifanov in 2003 named a new protoceratopsid dinosaur from the Bayn Dzak locality, Bainoceratops efremovi. This genus was based on a few dorsal (back) vertebrae that were stated to differ from those of Protoceratops. In 2006 North American paleontologists Peter Makovicky and Mark A. Norell suggested that Bainoceratops may be synonymous with Protoceratops, as most of the traits used to separate the former from the latter have been reported in other ceratopsians, including Protoceratops itself, and are more likely to fall within the wide intraspecific variation range of the co-occurring P. andrewsi. The authors Brenda J. Chinnery and John R. Horner, in their 2007 description of Cerasinops, stated that Bainoceratops, along with other dubious genera, was determined to be either a variant or an immature specimen of other genera. Based on this reasoning, they excluded Bainoceratops from their phylogenetic analysis.
Eggs and nests
As part of the Third Central Asiatic Expedition of 1923, Andrews and team discovered the holotype specimen of Oviraptor in association with some of the first known fossilized dinosaur eggs (nest AMNH 6508) in the Djadokhta Formation. Each egg was elongated and hard-shelled, and due to the proximity and high abundance of Protoceratops in the formation, these eggs were believed at the time to belong to this dinosaur. This resulted in the interpretation of the contemporary Oviraptor as an egg predator, an interpretation also reflected in its generic name. In 1975, the Chinese paleontologist Zhao Zikui named the new oogenera Elongatoolithus and Macroolithus, including them in a new oofamily: the Elongatoolithidae. As the name implies, they represent elongated dinosaur eggs, including some of those referred to Protoceratops. In 1994 the Russian paleontologist Konstantin E. Mikhailov named the new oogenus Protoceratopsidovum from the Barun Goyot and Djadokhta formations, with the type oospecies P. sincerum and the additional P. fluxuosum and P. minimum. This ootaxon was firmly stated as belonging to protoceratopsid dinosaurs, since they were the predominant dinosaurs where the eggs were found and some skeletons of Protoceratops were found in close proximity to Protoceratopsidovum eggs. More specifically, Mikhailov stated that P. sincerum and P. minimum were laid by Protoceratops, and P. fluxuosum by Breviceratops. However, also in 1994, Norell and colleagues reported and briefly described a fossilized theropod embryo inside an egg (MPC-D 100/971) from the Djadokhta Formation. They identified this embryo as an oviraptorid dinosaur, and the eggshell, upon close examination, turned out to be that of elongatoolithid eggs; the oofamily Elongatoolithidae was thereby concluded to represent the eggs of oviraptorids. This find proved that the nest AMNH 6508 belonged to Oviraptor and that, rather than an egg thief, the holotype was actually a mature individual that perished while brooding the eggs. Moreover, phylogenetic analyses published in 2008 by Darla K. Zelenitsky and François Therrien have shown that Protoceratopsidovum represents the eggs of a maniraptoran more derived than oviraptorids, not Protoceratops. The description of the eggshell of Protoceratopsidovum has further confirmed that these eggs in fact belong to a maniraptoran, possibly deinonychosaurian, taxon. Nevertheless, in 2011 an authentic nest of Protoceratops was reported and described by David E. Fastovsky and colleagues.
The nest (MPC-D 100/530), containing 15 articulated juveniles, was collected from the Tugriken Shireh locality of the Djadokhta Formation during the work of Mongolian-Japanese paleontological expeditions. Gregory M. Erickson and team in 2017 reported an embryo-bearing egg clutch (MPC-D 100/1021) of Protoceratops from the similarly fossiliferous Ukhaa Tolgod locality, discovered during paleontological expeditions of the American Museum of Natural History and the Mongolian Academy of Sciences. This clutch comprises at least 12 eggs and embryos, of which only six embryos preserve nearly complete skeletons. Norell with colleagues in 2020 examined fossilized remains around the eggs of this clutch, which indicate a soft-shelled composition.
Fighting Dinosaurs
The Fighting Dinosaurs specimen preserves a Protoceratops (MPC-D 100/512) and a Velociraptor (MPC-D 100/25) fossilized in combat and provides an important window into direct evidence of predator-prey behavior in non-avian dinosaurs. In the 1960s and early 1970s, many Polish-Mongolian paleontological expeditions were conducted to the Gobi Desert with the objective of collecting fossils. In 1971, the expedition explored several localities of the Djadokhta and Nemegt formations. During fieldwork on 3 August, several fossils of Protoceratops and Velociraptor were found at the Tugriken Shireh locality (Djadokhta Formation), including a block containing one of each. The individuals in this block were identified as a P. andrewsi and a V. mongoliensis. Although the conditions surrounding their burial were not fully understood, it was clear that they died simultaneously in a struggle. The specimen, nicknamed the "Fighting Dinosaurs", has been examined and studied by numerous researchers and paleontologists, and there are various opinions on how the animals were buried and preserved together. Though a drowning scenario has been proposed by Rinchen Barsbold, such a hypothesis is considered unlikely given the arid paleoenvironments of the Djadokhta Formation. It is generally thought that they were buried alive by a sandstorm or a collapsed dune.
Skin impressions and footprints
During the Third Central Asiatic Expedition in 1923, a nearly complete Protoceratops skeleton (specimen AMNH 6418) was collected at the Flaming Cliffs. Unlike other specimens, it was discovered in a rolled-up position with its skull preserving a thin, hard, and wrinkled layer of matrix (surrounding sediments). This specimen was later described in 1940 by Brown and Schlaikjer, who discussed the nature of the matrix portion. They stated that this layer had a very skin-like texture and covered mostly the left side of the skull from the snout to the neck frill. Brown and Schlaikjer discarded the idea of possible skin impressions, as this skin-like layer was likely a product of the decay and burial of the individual, making the sediments become highly attached to the skull. The potential importance of these remains went unrecognized, and by 2020 the specimen had been completely prepared, losing all apparent traces of this skin-like layer. Some elements, such as the rostrum, were damaged in the process. In 2022 Phil R. Bell and colleagues briefly described these potential soft tissues based on the photographs provided by Brown and Schlaikjer, as well as other ceratopsian soft tissues.
However, although the initial perception was that the entire skin-like layer had been removed, photographs shared by Czepiński during the same year revealed that the right side of the skull remains intact, retaining much of this layer, which awaits further analysis. Also in the context of the Polish-Mongolian paleontological expeditions, in 1965 an articulated subadult Protoceratops skeleton (specimen ZPAL Mg D-II/3) was collected from the Bayn Dzak locality of the Djadokhta Formation. In the 2000s, during the preparation of the specimen, a fossilized cast of a four-toed digitigrade footprint was found below the pelvic girdle. This footprint was described in 2012 by Grzegorz Niedźwiedzki and colleagues, who considered it to represent one of the first reported finds of a dinosaur footprint in association with an articulated skeleton, and also the first one reported for Protoceratops. The limb elements of the skeleton ZPAL Mg D-II/3 were described in 2019 by paleontologists Justyna Słowiak, Victor S. Tereshchenko and Łucja Fostowicz-Frelik. Tereshchenko in 2021 fully described the axial skeleton of this specimen.
Description
Protoceratops was a relatively small-sized ceratopsian, with both P. andrewsi and P. hellenikorhinus estimated up to in length and around in body mass. Although similar in overall body size, the latter had a relatively greater skull length. Both species can be differentiated by the following characteristics: P. andrewsi – two teeth were present in the premaxilla; the snout was low and long; the nasal horn was a single, pointed structure; the bottom edge of the dentary was slightly curved. P. hellenikorhinus – premaxillary teeth were absent; the snout was tall and broad; the nasal horn was divided into two pointed ridges; the bottom edge of the dentary was straight.
Skull
The skull of Protoceratops was relatively large compared to its body and robustly built. The skull of the type species, P. andrewsi, had an average total length of nearly . On the other hand, P. hellenikorhinus had a total skull length of about . The rear of the skull gave form to a pronounced neck frill (also known as the "parietal frill"), mostly composed of the parietal and squamosal bones. The exact size and shape of the frill varied by individual; some had short, compact frills, while others had frills nearly half the length of the skull. The squamosal touched the jugal (cheekbone) and was very enlarged and high, having a curved end that formed the borders of the frill. The parietals were the posteriormost bones of the skull and the major elements of the frill. In top view they had a triangular shape and were joined to the frontals (bones of the skull roof). Both parietals were coossified (fused), creating a long ridge on the center of the frill. The jugal was deep and sharply developed, and along with the epijugal it formed a horn-like extension that pointed downward on the lateral sides of the skull. The epijugal (tip region of the jugal) was separated from the jugal by a prominent suture; this suture was more noticeable in adults. The surfaces around the epijugal were coarse, indicating that it was covered by a horny sheath. Unlike the much more derived ceratopsids, the frontal and postorbital bones of Protoceratops were flat and lacked horn cores or supraorbital horns. The palpebral (a small spur-like bone) joined the prefrontal over the front of the orbit (eye socket). In P. hellenikorhinus the palpebral protruded upwards from the prefrontal, just above the orbit and slightly meeting the frontal, creating a small horn-like structure.
The lacrimal was a near-rectangular bone located in front of the orbit, contributing to the shape of the latter. The sclerotic ring (a structure that supports the eyeball), found inside the orbit, was circular in shape and formed by consecutive bony plates. The snout was formed by the nasal, maxilla, premaxilla and rostral bones. The nasal was generally rounded, but some individuals had a sharp nasal boss (a feature that has been called a "nasal horn"). In P. hellenikorhinus this boss was divided into two sharp and long ridges. The maxilla was very deep and had up to 15 alveoli (tooth sockets) on its underside, or tooth-bearing surface. The premaxilla had two alveoli on its lower edge—a character that was present at least in P. andrewsi. The rostral bone was devoid of teeth, high, and triangular in shape. It had a sharp end and a rough texture, which reflects that a rhamphotheca (horny beak) was present. As a whole, the skull had four pairs of fenestrae (skull openings). The foremost opening, the naris (nostril opening), was oval-shaped and considerably smaller than the nostril openings seen in ceratopsids. Protoceratops had large orbits, which measured around in diameter and had irregular shapes depending on the individual. The forward-facing and closely located orbits, combined with a narrow snout, gave Protoceratops well-developed binocular vision. Behind the eye was a slightly smaller fenestra known as the infratemporal fenestra, formed by the curves of the jugal and squamosal. The last openings of the skull were two parietal fenestrae (holes in the frill). The lower jaw of Protoceratops was a large element composed of the predentary, dentary, coronoid, surangular, angular, and articular bones. The predentary (frontmost bone) was very pointed and elongated, having a V-shaped symphyseal (bone union) region at the front. The dentary (tooth-bearing bone) was robust, deep, slightly recurved, and fused to the angular and surangular. A large and thick ridge ran along the lateral surface of the dentary, connecting the coronoid process—a bony projection that extends upwards from the upper surface of the lower jaw behind the tooth row—with the surangular. It bore up to 12–14 alveoli on its top margin. Both the predentary and dentary had a series of foramina (small pits), the latter mostly on its anterior end. The coronoid (highest point of the lower jaw) was blunt-shaped and touched by the coronoid process of the dentary, being obscured by the jugal. The surangular was near-triangular in shape, and in old individuals it was coossified with the coronoid process. The angular was located below the two latter bones and behind the dentary. It was a large and somewhat rounded bone that complemented the curvature of the dentary. On its inner surface it was attached to the splenial. The articular was a smaller bone and had a concavity on its inner surface for the articulation with the quadrate. Protoceratops had leaf-shaped dentary and maxillary teeth that bore several denticles (serrations) on their respective edges. The crowns (upper exposed part) had two faces or lobes that were divided by a central ridge-like structure (also called the "primary ridge"). The teeth were packed into a single row that created a shearing surface. Both dentary and maxillary teeth presented marked homodonty—a dental condition in which the teeth share a similar shape and size. P. andrewsi bore two small, peg- to spike-like teeth that were located on the underside of each premaxilla. The second premaxillary tooth was larger than the first one.
Unlike the dentary and maxillary teeth, the premaxillary dentition was devoid of denticles, having a relatively smooth surface. All teeth had a single root (the lower part inserted in the alveolus).
Postcranial skeleton
The vertebral column of Protoceratops had nine cervical (neck), 12 dorsal (back), eight sacral (pelvic) and over 40 caudal (tail) vertebrae. The centra (singular centrum; the body of the vertebra) of the first three cervicals were coossified (atlas, axis, and third cervical, respectively), creating a rigid structure. The neck was rather short and had poor flexibility. The atlas was the smallest cervical and consisted mainly of the centrum, because the neural arch (the upper, pointed vertebral region) was a thin, narrow bar of bone that extended upwards and backward to the base of the axis neural spine. The capitular facet (attachment site for the cervical ribs) was formed by a low projection located near the base of the neural arch. The anterior facet of the atlas centrum was highly concave for the articulation with the occipital condyle of the skull. The neural arch and spine of the axis were notably larger than the atlas itself and those of any other cervical. The axial neural spine was broad and developed backward, slightly connecting to that of the third cervical. From the fourth to the ninth, all cervicals were relatively equal in size and proportions. Their neural spines were smaller than those of the first three vertebrae, and the development of the capitular facet diminished from the fourth cervical onwards. The dorsal vertebrae were similar in shape and size. Their neural spines were elongated and sub-rectangular in shape, with a tendency to become more elongated in posterior vertebrae. The centra were large and predominantly amphiplatyan (flat on both facets) and circular when seen from the front. Sometimes in old individuals the last dorsal vertebra was somewhat coossified with the first sacral. The sacral vertebrae were firmly coossified, giving form to the sacrum, which was connected to the inner sides of both ilia. Their neural spines were broad, not coossified, and rather consistent in length. The centra were mainly opisthocoelous (concave on the posterior facet and convex on the anterior one) and their size became smaller towards the end. The caudal vertebrae decreased in size progressively towards the end of the tail and had very elongated neural spines in the mid-series, forming a sail-like structure. This elongation extended from the first to the fourteenth caudal. The centra were heterocoelous (saddle-shaped at both facets). On the anterior caudals they were broad; however, from the twenty-fifth onwards the centra became elongated alongside the neural spines. On the underside of the caudal vertebrae a series of chevrons was attached, giving form to the lower part of the tail. The first chevron was located at the union of the third and fourth caudals. Chevrons three to nine were the largest, and from the tenth onwards they became smaller. All vertebrae of Protoceratops had ribs attached on the lateral sides, except for the caudal series. The first five cervical ribs were some of the shortest ribs, and among them the first two were longer than the rest. The third to the sixth dorsal (thoracic) ribs were the longest ribs in the skeleton of Protoceratops; the following ribs became smaller in size as they progressed toward the end of the vertebral column. The two last dorsal ribs were the smallest, and the last of them was in contact with the internal surfaces of the ilium. Most of the sacral ribs were fused into the sacrum and had a rather curved shape.
The pectoral girdle of Protoceratops was formed by the scapulocoracoid (a fusion of the coracoid and scapula) and the clavicle. The scapulae (shoulder blades) were relatively large and rounded on their inner sides. At their upper region, the scapulae were wide; at their lower region, they met the coracoids. The coracoids were relatively elliptical, and sometimes coossified (fused) to the scapulae. The clavicle of Protoceratops was a U- to slightly V-shaped element that joined the upper border of the scapulocoracoid. In general form, the forelimbs of Protoceratops were shorter than the hindlimbs and composed of the humerus, radius, and ulna. The humerus (upper arm bone) was large and slender, and at its lower part it met both the radius and ulna. The ulna had a slightly recurved shape and was longer than the radius. A concavity was present on its upper part, serving as the connection with the humerus and forming the elbow. The radius was a rather short bone with a straight shape. The manus (hand) of Protoceratops had five digits (fingers). The first three fingers had unguals (claw bones) and were the largest digits. The last two were devoid of unguals and were small, mostly vestigial (retained, but without an important function). Both hand and foot unguals were flat, blunt and hoof-like. The pelvic girdle was formed by the ilium, pubis, and ischium. The ilium was a large element, having a narrow preacetabular process (anterior end) and a wide postacetabular process (posterior end). The pubis was the smallest element of the pelvic girdle and had an irregular shape, although its lower end developed into a pointed bony projection directed downward. The ischium was the longest bone of the pelvic girdle. It had an elongated shaft with a somewhat wide lower end. The hindlimbs of Protoceratops were rather long, with the tibia (lower leg bone) slightly longer than the femur (thigh bone). The femur was robust and had a rather rounded and pronounced greater trochanter, which was slightly recurved towards the inner side. The tibia was long and slender with a wide lower end. On its upper region a concavity was developed for the joint with the smaller fibula. The pes (foot) was composed of four metatarsals and four toes, which bore shovel-like pedal unguals. The first metatarsal and toe were the smallest, while the other elements were of similar shape and length.
Classification
Protoceratops was placed in 1923 within the newly named family Protoceratopsidae as its representative species by Granger and Gregory. This family was characterized by its overall primitive morphology in comparison to the more derived Ceratopsidae, such as the lack of well-developed horn cores and relatively smaller body size. Protoceratops itself was considered by the authors to be somehow related to ankylosaurians based on skull traits, and more strongly to Triceratops and relatives. Gregory and Charles C. Mook in 1925, upon a deeper analysis of Protoceratops and its overall morphology, concluded that this taxon represents a ceratopsian more primitive than ceratopsids and not an ankylosaur-ceratopsian ancestor. In 1951 Edwin H. Colbert considered Protoceratops to represent a key ancestor for the ceratopsid lineage, suggesting that it ultimately led to the evolution of large-bodied ceratopsians such as Styracosaurus and Triceratops. This lineage was suggested to have started from the primitive ceratopsian Psittacosaurus. He also regarded Protoceratops as one of the first "frilled" ceratopsians to appear in the fossil record.
However, in 1975 Maryańska and Osmólska argued that it is very unlikely that protoceratopsids evolved from psittacosaurids, and also unlikely that they gave rise to the highly derived (advanced) ceratopsids. The first point was supported by the numerous anatomical differences between protoceratopsids and psittacosaurids, most notably the extreme reduction of some hand digits in the latter group—a trait much less pronounced in protoceratopsids. The second point was explained on the basis of the already derived anatomy of protoceratopsids like Bagaceratops or Protoceratops (such as the jaw morphology). Maryańska and Osmólska also emphasized that some early members of the Ceratopsidae reflect a much older evolutionary history. In 1998, paleontologist Paul Sereno formally defined Protoceratopsidae as the branch-based clade including all coronosaurs closer to Protoceratops than to Triceratops. Furthermore, with the re-examinations of Turanoceratops in 2009 and Zuniceratops in 2010—two critical ceratopsian taxa regarding the evolutionary history of ceratopsids—it was concluded that the origin of ceratopsids is unrelated to, and older than, the fossil record of Protoceratops and relatives. In most modern phylogenetic analyses Protoceratops and Bagaceratops are commonly recovered as sister taxa, leaving the interpretations proposing direct relationships with more derived ceratopsians unsupported. In 2019 Czepiński analyzed the vast majority of specimens referred to the ceratopsians Bagaceratops and Breviceratops, and concluded that most were in fact specimens of the former. Although the genera Gobiceratops, Lamaceratops, Magnirostris, and Platyceratops were long considered valid and distinct taxa, and sometimes placed within Protoceratopsidae, Czepiński found the diagnostic (identifying) features used to distinguish these taxa to be largely present in Bagaceratops, making them synonyms of this genus. Under this reasoning, Protoceratopsidae consists of Bagaceratops, Breviceratops, and Protoceratops. Below are the relationships among Protoceratopsidae proposed by Czepiński: In 2019 Bitnara Kim and colleagues described a relatively well-preserved Bagaceratops skeleton from the Barun Goyot Formation, noting numerous similarities with Protoceratops. Even though their respective skull anatomies had substantial differences, their postcranial skeletons were virtually the same. The phylogenetic analysis performed by the team recovered both protoceratopsids as sister taxa, indicating that Bagaceratops and Protoceratops were anatomically and systematically related. Below is the obtained cladogram, showing the position of Protoceratops and Bagaceratops:
Evolution
Longrich and team in 2010 indicated that the highly derived morphology of P. hellenikorhinus—when compared to P. andrewsi—suggests that this species may represent a lineage of Protoceratops with a longer evolutionary history than P. andrewsi, or simply a direct descendant of P. andrewsi. The difference in morphology between the two Protoceratops species also suggests that the nearby Bayan Mandahu Formation is slightly younger than the Djadokhta Formation. In 2020, Czepiński analyzed several long-undescribed protoceratopsid specimens from the Udyn Sayr and Zamyn Khondt localities of the Djadokhta Formation. One specimen (MPC-D 100/551B) was shown to present skull traits that are intermediate between Bagaceratops rozhdestvenskyi (which is native to the adjacent Bayan Mandahu and Barun Goyot) and P. andrewsi.
The specimen hails from the Udyn Sayr locality, where Protoceratops remains are dominant, and given the lack of more conclusive anatomical traits, Czepiński assigned the specimen to Bagaceratops sp. He explained that the presence of this Bagaceratops specimen in such an unusual locality could be explained by: (1) the coexistence and sympatric evolution of both Bagaceratops and Protoceratops at this one locality; (2) the rise of B. rozhdestvenskyi in a different region and eventual migration to Udyn Sayr; (3) hybridization between the two protoceratopsids given the proximity of Bayan Mandahu and Djadokhta; or (4) an anagenetic (progressive) evolutionary transition from P. andrewsi to B. rozhdestvenskyi. Among these scenarios, an anagenetic transition was best supported in Czepiński's view, given that no definitive B. rozhdestvenskyi fossils are found at Udyn Sayr, as would be expected from a hybridization event; that MPC-D 100/551B lacks a well-developed accessory antorbital fenestra (a hole behind the nostril opening), a trait expected to be present if B. rozhdestvenskyi had migrated to the area; and that many specimens of P. andrewsi recovered at Udyn Sayr already show a reduction of the primitive premaxillary teeth, supporting a gradual change in the populations.
Paleobiology
Feeding
In 1955, paleontologist Georg Haas examined the overall skull shape of Protoceratops and attempted to reconstruct its jaw musculature. He suggested that the large neck frill was likely an attachment site for masticatory muscles. Such placement of the muscles may have helped to anchor the lower jaws, useful for feeding. Yannicke Dauphin and colleagues in 1988 described the enamel microstructure of Protoceratops, observing a non-prismatic outer layer. They concluded that enamel shape does not relate to the diet or function of the teeth, as most animals do not necessarily use their teeth to process food. The maxillary teeth of ceratopsians were usually packed into a dental battery that formed vertical shearing blades, which probably chopped leaves. This feeding method was likely more efficient in protoceratopsids, as the enamel surface of Protoceratops was coarsely textured and the tips of the micro-serrations developed at the base of the teeth, probably helping to crumble vegetation. Based on their peg-like shape and reduced microornamentation, Dauphin and colleagues suggested that the premaxillary teeth of Protoceratops had no specific function. In 1991, the paleontologist Gregory S. Paul stated that, contrary to the popular view of ornithischians as obligate herbivores, some groups may have been opportunistic meat-eaters, including members of Ceratopsidae and Protoceratopsidae. He pointed out that their prominent parrot-like beaks and shearing teeth, along with powerful jaw muscles, suggest an omnivorous diet instead, much like that of pigs, hogs, boars and entelodonts. Such a scenario indicates possible competition with the more predatory theropods over carcasses; however, as animal tissue would have been ingested only occasionally and did not form the bulk of their diet, the energy flow in ecosystems remained relatively simple. You Hailu and Peter Dodson in 2004 suggested that the premaxillary teeth of Protoceratops may have been useful for selective cropping and feeding. In 2009, Kyo Tanoue and team suggested that basal ceratopsians, such as protoceratopsids, were most likely low browsers due to their relatively small body size.
This low-browsing method would have allowed them to feed on foliage and fruits within range, and large basal ceratopsians may have consumed tougher seeds or plant material not available to smaller basal ceratopsians. David J. Button and Lindsay E. Zanno in 2019 performed a large phylogenetic analysis based on skull biomechanical characters—provided by 160 Mesozoic dinosaur species—to analyze the multiple emergences of herbivory among non-avian dinosaurs. Their results found that herbivorous dinosaurs mainly followed two distinct modes of feeding: either processing food in the gut—characterized by relatively gracile skulls and low bite forces—or in the mouth, characterized by features associated with extensive processing, such as high bite forces and robust jaw musculature. Ceratopsians (including protoceratopsids), along with Euoplocephalus, Hungarosaurus, and parksosaurid, ornithopod, and heterodontosaurid dinosaurs, were found to be in the latter category, indicating that Protoceratops and relatives had strong bite forces and relied mostly on their jaws to process food.
Ontogeny
Brown and Schlaikjer in 1940, in their extensive description and revision of Protoceratops, remarked that the orbits, frontals, and lacrimals decreased in relative size as the animal aged; the top border of the nostrils became more vertical; the nasal bones progressively became elongated and narrowed; and the neck frill as a whole increased in size with age. The neck frill, specifically, underwent a dramatic change from a small, flat, and almost rounded structure in juveniles to a large, fan-like one in fully mature Protoceratops individuals. In 2001, Lambert and colleagues considered the development of the two nasal "horns" of P. hellenikorhinus to be a trait that was delayed relative to the appearance of sexually discriminant traits. This was based on the fact that one small specimen (IMM 96BM2/1) has a skull slightly larger than that of a presumed sexually mature P. andrewsi (AMNH 6409), and yet it lacks the double nasal horns present in fully mature P. hellenikorhinus. Makovicky and team in 2007 conducted a histological analysis on several specimens of Protoceratops from the American Museum of Natural History collections to provide insights into the life history of Protoceratops. The examined fossil bones indicated that Protoceratops slowed its ontogeny (growth) around 9–10 years of life, and that growth ceased around 11–13 years. They also observed that the maximum or latest stage of development of the neck frill and nasal horn occurred in the oldest Protoceratops individuals, indicating that such traits were ontogenetically variable (meaning that they varied with age). Makovicky and team also stated that since the maximum changes in the neck frill and nasal horn were present in most adult individuals, trying to identify sexual dimorphism (anatomical differences between sexes) in adult Protoceratops may not be a good practice. David Hone and colleagues in 2016, in their analysis of P. andrewsi neck frills, found that the frill of Protoceratops was disproportionately smaller in juveniles, grew at a more rapid rate than the rest of the animal during ontogeny, and reached a considerable size only in large adult individuals. Other changes during ontogeny include the elongation of the premaxillary teeth, which are smaller in juveniles and enlarged in adults, and the enlargement of the middle neural spines of the tail (caudal) vertebrae, which appear to grow much taller approaching adulthood.
In 2017, Mototaka Saneyoshi and team analyzed several Protoceratops specimens from the Djadokhta Formation, noting that from perinate/juvenile to subadult individuals, the parietal and squamosal bones expanded towards the posterior sides of the skull. From subadult to adult individuals, the squamosal bone increased in size more than the parietal bone, and the frill expanded dorsally (upwards). The team concluded that the frill of Protoceratops can be characterized by these ontogenetic changes. In 2018, paleontologists Łucja Fostowicz-Frelik and Justyna Słowiak studied the bone histology of several specimens of P. andrewsi through cross-sections, in order to analyze the growth changes in this dinosaur. The sampled elements consisted of neck frill, femur, tibia, fibula, rib, humerus, and radius bones, and showed that the histology of Protoceratops remained rather uniform throughout ontogeny. It was characterized by simple fibrolamellar bone—bony tissue with an irregular, fibrous texture and filled with blood vessels—with prominent woven-fibered bone and low bone remodeling. Most bones of Protoceratops preserve a large abundance of bone fibers (including Sharpey's fibres), which likely strengthened the bone and enhanced its elasticity. The team also found that the growth rate of the femur increased at the subadult stage, suggesting changes in bone proportions, such as the elongation of the hindlimbs. This growth rate is broadly similar to that of other small herbivorous dinosaurs such as the primitive Psittacosaurus or Scutellosaurus.
Movement
In 1996, Tereshchenko reconstructed the walking model of Protoceratops, considering the most likely scenario to be that of an obligate quadruped, given the proportions of its limbs. The main gait of Protoceratops was probably trot-like, relying mostly on its hindlimbs, and it is unlikely to have used an asymmetric gait. If trapped in a specific situation (such as danger or foraging), Protoceratops could have employed rapid, facultative bipedalism. He also noted that the flat and wide pedal unguals of Protoceratops may have allowed efficient walking through loose terrain, such as the sand that was common in its surroundings. Using speed equations, Tereshchenko also estimated the average maximum walking speed of Protoceratops at about 3 km/h (kilometres per hour).
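Tereshchenko's actual equations are not reproduced in the text, but a rough sense of how such estimates work can be had from Alexander's (1976) dynamic similarity formula, which relates speed to stride length and hip height. The sketch below is illustrative only: the stride length and hip height are assumed example values for a Protoceratops-sized animal, not measurements from Tereshchenko's study.

```python
# Illustrative sketch: estimating a dinosaur's walking speed from trackway
# data using Alexander's (1976) dynamic similarity formula:
#     v = 0.25 * g**0.5 * stride_length**1.67 * hip_height**(-1.17)
# NOTE: this is not Tereshchenko's method; the stride length and hip height
# below are invented example values, not Protoceratops measurements.

G = 9.81  # gravitational acceleration, m/s^2

def alexander_speed(stride_length_m: float, hip_height_m: float) -> float:
    """Estimated speed in m/s from stride length and hip height (metres)."""
    return 0.25 * G**0.5 * stride_length_m**1.67 * hip_height_m**(-1.17)

# Hypothetical values for a small quadruped roughly 0.6 m high at the hip:
speed_ms = alexander_speed(stride_length_m=0.75, hip_height_m=0.6)
print(f"{speed_ms:.2f} m/s = {speed_ms * 3.6:.1f} km/h")  # ~0.88 m/s = ~3.2 km/h
```

With these assumed inputs the formula lands near the 3 km/h figure cited above, which illustrates why trackway-based equations give walking-pace estimates for stocky, short-limbed animals.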
Upon analyzing the forelimbs of several ceratopsians, Phil Senter in 2007 suggested that the hands of Protoceratops could reach the ground when the hindlimbs were upright, and that the overall forelimb morphology and range of motion may reflect that it was at least a facultative (optional) quadruped. The forelimbs of Protoceratops could sprawl laterally, but quadrupedal locomotion was accomplished with the elbows tucked in. In 2010, Alexander Kuznetsov and Tereshchenko analyzed several vertebral series of Protoceratops to estimate overall mobility, and concluded that Protoceratops had greater lateral mobility in the presacral (pre-hip) vertebral series and reduced vertical mobility in the cervical (neck) region. The fossilized footprint associated with the specimen ZPAL Mg D-II/3, described by Niedźwiedzki in 2012, indicates that Protoceratops was digitigrade, meaning that it walked with its toes supporting the body weight. In 2019, however, Słowiak and team described the limb elements of ZPAL Mg D-II/3, which represents a sub-adult individual, and noted a mix of characters typical of bipedal ceratopsians, such as a narrow glenoid and scapular blade and an arched femur. The absence of these traits in mature individuals indicates that young Protoceratops were capable of facultative bipedal locomotion, while adults had an obligate quadrupedal stance. Even though adult Protoceratops were stocky and quadrupedal, their tibia-femur length ratio—the tibia being longer than the femur, a trait present in bipedal ceratopsians—suggests the ability to occasionally stand on the hindlimbs. Słowiak and team also suggested that the flat and wide hand unguals (claw bones) of Protoceratops may have been useful for moving on loose terrain (such as sand) without sinking.
Digging behavior
Longrich in 2010 proposed that Protoceratops may have used its hindlimbs to dig burrows, or taken shelter under bushes and/or scrapes, to escape the hottest temperatures of the day. A digging action with the hindlimbs was likely facilitated by the strong caudofemoralis muscle and the large feet equipped with flat, shovel-like unguals. As this behavior would have been common in Protoceratops, it predisposed individuals to becoming entombed alive during the sudden collapse of their burrows and high-energy sand-bearing events—such as sandstorms—thus explaining the standing in-situ posture of some specimens. Additionally, Longrich suggested that backward burrowing could explain the preservation of some specimens pointing forward with curved tails. In 2019, Victoria M. Arbour and David C. Evans cited the robustness of the ulna of Ferrisaurus as a feature useful for digging, which may have also been true for Protoceratops.
Tail function
Gregory and Mook in 1925 suggested that Protoceratops was partially aquatic because of its large feet—being larger than the hands—and the very long neural spines of the caudal (tail) vertebrae. Brown and Schlaikjer in 1940 indicated that the expansion of the distal (lower) ischial end may reflect a strong ischiocaudalis muscle, which together with the high tail neural spines was used for swimming. Barsbold in his brief 1974 description of the Fighting Dinosaurs specimen accepted this hypothesis and suggested that Protoceratops was amphibious (water-adapted) and had well-developed swimming capacities based on its side-to-side flattened tail with very high neural spines. Jack Bowman Bailey in 1997 disagreed with previous aquatic hypotheses and indicated that the high caudal neural spines were instead more reminiscent of the bulbous tails of some desert lizard species (such as Heloderma or Uromastyx), which store fat and metabolic water in the tail. He considered a swimming adaptation unlikely given the arid setting of the Djadokhta Formation. In 2008, based on the occurrence of some Protoceratops specimens in fluvial (river-deposited) sediments of the Djadokhta Formation and on the saddle-shaped ends of the vertebral centra of protoceratopsid caudal vertebrae, Tereshchenko concluded that the elevated caudal spines are a swimming adaptation. He proposed that protoceratopsids moved through water using their laterally flattened tails as a paddle to aid in swimming. According to Tereshchenko, Bagaceratops was fully aquatic while Protoceratops was only partially aquatic. Longrich in 2010 argued that the high tail and frill of Protoceratops may have helped it shed excess heat during the day—acting as large-surface structures—when the animal was active, allowing it to survive the relatively arid environments of the Djadokhta Formation without highly developed cooling mechanisms.
In 2011, during the description of Koreaceratops, Yuong-Nam Lee and colleagues found the above swimming hypotheses hard to prove given the abundance of Protoceratops in eolian (wind-deposited) sediments laid down in markedly arid environments. They also pointed out that while taxa such as Leptoceratops and Montanoceratops are recovered from fluvial sediments, they are estimated to have been among the poorest swimmers. Lee and colleagues concluded that even though the tail morphology of Koreaceratops—and other basal ceratopsians—does not argue against swimming habits, the cited evidence for it is insufficient. Tereshchenko in 2013 examined the structure of the caudal vertebral spines of Protoceratops, concluding that it had adaptations for both terrestrial and aquatic habits. He found that the high number of caudal vertebrae may have been useful for swimming and for using the tail to counterbalance weight. He also indicated that the anterior caudals were devoid of high neural spines and had increased mobility—a mobility that starts to decrease towards the high neural spines—which suggests that the tail could be largely raised from its base. It is likely that Protoceratops raised its tail as a signal (display), or that females used this posture during egg laying to expand and relax the cloaca. In 2016, Hone and team indicated that the tail of Protoceratops, particularly the mid region with elevated neural spines, could have been used in display to impress potential mates and/or for species recognition. The tail may have worked together with structures like the frill in display behavior. Kim and team in 2019 cited the elongated tail spines as well-suited for swimming. They indicated that both Bagaceratops and Protoceratops may have used their tails in a similar fashion during similar situations, such as swimming, given how similar their postcranial skeletons were. The team also suggested that a swimming adaptation could have been useful for avoiding aquatic predators, such as crocodylomorphs.
Social behavior
Tomasz Jerzykiewicz in 1993 reported several monospecific (containing only one dominant species) death assemblages of Protoceratops from the Bayan Mandahu and Djadokhta formations. A group of five medium-sized and adult Protoceratops was observed at the Bayan Mandahu locality. Individuals within this assemblage were lying on their bellies with their heads facing upwards, aligned parallel side by side, and inclined about 21 degrees from the horizontal plane. Two other groups were found at the Tugriken Shireh locality: one containing six individuals and another of about 12 skeletons. In 2014, David W. E. Hone and colleagues reported and described two blocks containing death assemblages of P. andrewsi from Tugriken Shireh. The first block (MPC-D 100/526) comprises four juvenile individuals in close proximity with their heads pointing upwards, and the second block (MPC-D 100/534) is composed of two sub-adults in a horizontal orientation. Based on previous assemblages and the two blocks, the team determined that Protoceratops was a social dinosaur that formed herds throughout its life, and that such herds would have varied in composition, with some including adults, sub-adults, siblings from a single nest, or local members of a herd joining shortly after hatching. However, as a group could have lost members to predation or other factors, the remaining individuals would aggregate into larger groups to increase their survival.
Hone and colleagues in particular suggested that juveniles would aggregate primarily as a defense against predators and for the increased protection offered by the multiple adults within the group. The team also indicated that, while Protoceratops provides direct evidence for the formation of single-cohort aggregations throughout its lifespan, the possibility that some Protoceratops were solitary cannot be ruled out.
Sexual dimorphism and display
Brown and Schlaikjer in 1940, in their extensive analysis of Protoceratops, noted the potential presence of sexual dimorphism among specimens of P. andrewsi, concluding that this condition could be entirely subjective or represent actual differences between sexes. Individuals with a high nasal horn, massive prefrontals, and a frontoparietal depression were tentatively identified as males. Females were mostly characterized by the lack of well-developed nasal horns. In 1972, Kurzanov made comparisons between P. andrewsi skulls from Bayn Dzak and Tugriken Shireh, noting differences in the nasal horn between populations. Peter Dodson in 1996 used anatomical characters of the skull in P. andrewsi to quantify areas subject to ontogenetic change and sexual dimorphism. In total, 40 skull characters were measured and compared, including regions like the frill and nasal horn. Dodson found most of these characters to be highly variable across specimens, especially the frill, which he interpreted to have had a bigger role in display behavior than in simply serving as an attachment site for masticatory muscles. He considered the muscle-attachment interpretation unlikely given the relative fragility of some frill bones and the large individual variation, which would have affected the development of those muscles. The length of the frill was found by Dodson to show rather irregular growth across specimens, as the juvenile AMNH 6419 was observed with a frill shorter than those of other juveniles. He agreed with Brown and Schlaikjer that a high, well-developed nasal horn represents a male trait and that its absence indicates females. In addition, Dodson suggested that traits like the nasal horn and frill in male Protoceratops may have been important visual displays for attracting females and repelling other males, or even predators. Lastly, he noted that males and females showed no significant disparity in body size, and that sexual maturity in Protoceratops could be recognised at the point when males can be distinguished from females. In 2001, Lambert and team, in their description of P. hellenikorhinus, also noted variation among individuals. For instance, some specimens (e.g., the holotype IMM 95BM1/1) preserve high nasal bones with a pair of horns, a relatively short antorbital length, and vertically oriented nostrils. Such traits were regarded as representing male P. hellenikorhinus. The other group of skulls is characterized by low nasals with undeveloped horns, a relatively longer antorbital length, and more oblique nostrils. These individuals were considered females. The team, however, was not able to perform a deeper analysis of sexual dimorphism in P. hellenikorhinus due to the lack of complete specimens. Also in 2001, Tereshchenko analyzed several specimens of P. andrewsi to evaluate sexual dimorphism. He found 19 anatomical differences in the vertebral column and pelvic region of presumed male and female Protoceratops individuals, which he considered to represent actual sexual characters.
In 2012, Naoto Handa and colleagues described four specimens of P. andrewsi from the Udyn Sayr locality of the Djadokhta Formation. They indicated that sexual dimorphism in this population was marked by a prominent nasal horn in males—a trait also noted by other authors—relatively wider nostrils in females, and a wider neck frill in males. Despite maintaining the skull morphology of most Protoceratops specimens (such as the premaxillary teeth), the neck frill in this population was straighter, with a nearly triangular shape. Handa and team in addition found variation across this Udyn Sayr sample and classified the specimens into three groups. The first group includes individuals with a well-developed bony ridge on the lateral surface of the squamosal bone and a backwards-oriented posterior border of the squamosal. The second group has a fairly rounded posterior border of the squamosal and a long, well-developed bony ridge on the posterior border of the parietal bone. Lastly, the third group is characterized by a curved posterior border of the squamosal and a markedly rugose texture on the top surface of the parietal. These skull traits were regarded as marked intraspecific variation within Protoceratops; they differ from those of populations elsewhere in the Djadokhta Formation, such as the adjacent Tugriken Shireh locality, and are unique to the Udyn Sayr region. A large and well-developed bony ridge on the parietal has been observed on another P. andrewsi specimen, MPC-D 100/551, also from Udyn Sayr. However, Leonardo Maiorino and team in 2015 performed a large geometric morphometric analysis using 29 skulls of P. andrewsi to evaluate actual sexual dimorphism. The results indicated that, other than the nasal horn—which remained the only skull trait with potential sexual dimorphism—all characters previously suggested to differentiate hypothetical males from females, most notably the neck frill, were more linked to ontogenetic changes and intraspecific variation independent of sex. The geometric data showed no consistent morphological differences between specimens that had been regarded as males and females by previous authors, though there was slight support for differences in the rostrum across the sample. Maiorino and team nevertheless noted that AMNH 6438, typically regarded as a male Protoceratops, closely resembles the rostrum morphology of AMNH 6466, typically regarded as a female. They suggested, however, that authentic differences between the sexes could still be present in the postcranial skeleton. Although sexual dimorphism had previously been suggested for P. hellenikorhinus, the team argued that the sample used for this species was not sufficient, and, given that sexual dimorphism was not recovered in P. andrewsi, it is unlikely that it occurred in P. hellenikorhinus. In 2016, Hone and colleagues analyzed 37 skulls of P. andrewsi, finding that the neck frill of Protoceratops (in both length and width) underwent positive allometry during ontogeny, that is, faster growth of this region compared to the rest of the animal. The jugal bones also showed a trend towards an increase in relative size. These results suggest that these structures functioned as socio-sexual dominance signals, or at least were mostly used in display.
The use of the frill as a display structure may be related to other anatomical features of Protoceratops, such as the premaxillary teeth (at least in P. andrewsi), which could have been used in display or intraspecific combat, or the high neural spines of the tail. On the other hand, Hone and team argued that if neck frills were instead used for protective purposes, a large frill may have acted as an aposematic (warning) signal to predators. However, such strategies are most effective when the taxon is rare in the overall environment, in contrast to Protoceratops, which appears to have been an extremely abundant, medium-sized dinosaur. Tereshchenko in 2018 examined the cervical vertebral series of six P. andrewsi specimens. Most of them had differences in the same vertebra, such as the shape and proportions of the vertebral centra and the orientation of the neural arches. According to these differences, four groups were identified, leading to the conclusion that individual variation extended to the vertebral column of Protoceratops. In 2020, nevertheless, Andrew C. Knapp and team conducted morphometric analyses of a large sample of P. andrewsi specimens, primarily concluding that the neck frill of Protoceratops shows no indicators or evidence of being sexually dimorphic. The results showed instead that several regions of the skull of Protoceratops varied independently in their rate of growth, ontogenetic shape, and morphology; a high growth rate of the frill during ontogeny relative to other body regions; and a large variability of the neck frill independent of size. Knapp and team noted that the frill results indicate that this structure had a major role in signaling within the species, consistent with selection of potential mates with quality ornamentation and hence reproductive success, or with dominance signaling. Such use of the frill may suggest that intraspecific social behavior was highly important for Protoceratops. The results also support the general hypothesis that the neck frill of ceratopsians functioned as a socio-sexual signal structure.
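The positive allometry reported by Hone and colleagues and the growth-rate differences reported by Knapp and team are the kind of pattern typically detected by regressing log-transformed trait size against a log-transformed body-size proxy: a slope above 1 means the trait outpaces body growth. The minimal sketch below uses synthetic data generated with a built-in slope of about 1.8; it is not the published Protoceratops dataset or the authors' code.

```python
# Minimal sketch of how positive allometry is detected: fit
# log(trait) ~ log(body-size proxy); slope > 1 = positive allometry.
# The data below are synthetic, not published Protoceratops measurements.
import numpy as np

rng = np.random.default_rng(0)
basal_skull_length = rng.uniform(5.0, 50.0, size=40)   # cm, synthetic sample
frill_length = 0.1 * basal_skull_length**1.8 * rng.lognormal(0.0, 0.1, 40)

# Ordinary least squares on log-transformed data
slope, intercept = np.polyfit(np.log(basal_skull_length),
                              np.log(frill_length), deg=1)
print(f"allometric slope = {slope:.2f}")  # ~1.8, well above 1
```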
Reproduction
In 1989, Walter P. Coombs concluded that crocodilians, ratites, and megapode birds were suitable modern analogs for dinosaur nesting behavior. He largely considered elongatoolithid eggs to belong to Protoceratops because adult skeletons were found in close proximity to nests, interpreting this as evidence for parental care. Furthermore, Coombs considered the large concentration of Protoceratops eggs in small regions an indicator of marked philopatric nesting (nesting in the same area). The nest of Protoceratops would have been excavated with the hindlimbs and built as a mound-like structure with a crater-shaped center and eggs arranged in a semicircular fashion. Richard A. Thulborn in 1992 analyzed the structure of the different types of eggs and nests—the majority of them, in fact, elongatoolithid—referred to Protoceratops. He identified types A and B, both sharing an elongated shape; type A eggs differed from type B eggs in having a pinched end. Based on comparisons with other ornithischian dinosaurs such as Maiasaura and Orodromeus—known from more complete nests—Thulborn concluded that most depictions of Protoceratops nests were based on incompletely preserved clutches, and mostly on type A eggs, which were more likely laid by an ornithopod. He concluded that nests were built as shallow mounds with the eggs laid radially, contrary to popular restorations of crater-like Protoceratops nests. In 2011, the first authentic nest of Protoceratops (MPC-D 100/530), from the Tugriken Shireh locality, was described by David E. Fastovsky and team. As some individuals are closely appressed along the well-defined margin of the nest, it may have had a circular or semi-circular shape, as previously hypothesized. Most of the individuals within the nest were of nearly the same age, size, and growth stage, suggesting that they belonged to a single nest rather than being an aggregation of unrelated individuals. Fastovsky and team also suggested that even though the individuals were young, they were not perinates, based on the absence of eggshell fragments and their large size compared to even smaller juveniles known from this locality. The fact that the individuals likely spent some time in the nest after hatching suggests that Protoceratops parents might have cared for their young at nests during at least the early stages of life. As Protoceratops was a relatively basal (primitive) ceratopsian, the finding may imply that other ceratopsians provided care for their young as well. In 2017, Gregory M. Erickson and colleagues determined the incubation periods of P. andrewsi and Hypacrosaurus by using growth lines in the teeth of embryonic specimens (from the Protoceratops egg clutch MPC-D 100/1021). The results suggest a mean embryonic tooth replacement period of 30.68 days and relatively long, plesiomorphic (ancestrally shared) incubation times for P. andrewsi, with a minimum incubation time of 83.16 days.
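The logic behind such estimates can be illustrated with a toy calculation: if incremental growth lines in embryonic teeth are deposited at roughly one per day, counting them dates the oldest tooth, and adding an assumed delay before tooth formation begins yields a minimum incubation time. The line count and the delay fraction below are invented for illustration and are not Erickson and colleagues' data or their exact procedure.

```python
# Hedged sketch of the arithmetic behind tooth-based incubation estimates.
# Assumptions (invented for illustration): one growth line per day, and
# tooth formation starting 30% of the way through incubation.

LINES_PER_DAY = 1.0  # approximate daily deposition rate of growth lines

def min_incubation_days(line_count: int, pre_tooth_fraction: float) -> float:
    """Tooth-formation days scaled up by the assumed pre-tooth egg time."""
    tooth_formation_days = line_count / LINES_PER_DAY
    return tooth_formation_days / (1.0 - pre_tooth_fraction)

# A hypothetical count of 58 lines with a 30% pre-tooth delay:
print(f"{min_incubation_days(line_count=58, pre_tooth_fraction=0.3):.1f} days")
# -> 82.9 days, i.e., an incubation period on the order of months
```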
Norell and team in 2020 re-analyzed the MPC-D 100/1021 clutch and concluded that Protoceratops laid soft-shelled eggs. Most embryos within the clutch are in a flexed position and the outlines of the eggs are also preserved, suggesting that they were buried in ovo (in the egg). The outlines of the eggs and embryos indicate elongated, ellipsoid-shaped eggs in life. Several of the embryos were associated with a black-to-white halo. Norell and team performed histological and chemical examinations of these halos, finding traces of proteinaceous eggshell, and by comparison with other sauropsids they concluded that the eggs were not biomineralized in life and were thus soft-shelled. Given that soft-shelled eggs are more vulnerable to dehydration and crushing, Protoceratops may have buried its eggs in moist sand or soil, with the growing embryos relying on external heat and parental care.
Paleopathology
In 2018, Tereshchenko examined and described several articulated cervical vertebrae of P. andrewsi and reported the presence of two abnormally fused vertebrae (specimen PIN 3143/9). The fusion of the vertebrae was likely a product of disease or external damage.
Predator–prey interactions
Barsbold in 1974 briefly described the Fighting Dinosaurs specimen and discussed possible scenarios. The Velociraptor has its right leg pinned under the Protoceratops body, with its left sickle claw oriented towards the throat region. The Protoceratops bit the right hand of the predator, implying that the latter was unable to escape. Barsbold suggested that both animals drowned as they fell into a swamp-like body of water, or that the relatively quicksand-like bottom of a lake could have kept them together during the last moments of their fight. Osmólska in 1993 proposed another two hypotheses to explain their preservation. During the death struggle, a large dune may have collapsed, simultaneously burying both Protoceratops and Velociraptor. Another proposal is that the Velociraptor was scavenging an already dead Protoceratops when it was buried and eventually killed under indeterminate circumstances. In 1995, David M. Unwin and colleagues cast doubt on previous explanations, especially the scavenging hypothesis, as there were numerous indications of a concurrent death event. For instance, the Protoceratops has a semi-erect stance and its skull is nearly horizontal, which would not have been possible if the animal was already dead. The Velociraptor has its right hand trapped within the jaws of the Protoceratops and the left one grasping the Protoceratops skull. Moreover, it lies on the ground with its feet directed at the prey's belly and throat areas, indicating that this Velociraptor was not scavenging. Unwin and colleagues examined the sediments surrounding the specimen and suggested that the two were buried alive by a powerful sandstorm. They interpreted the interaction as the Protoceratops being grasped and dispatched with kicks delivered by the low-lying Velociraptor. They also considered it possible that populations of Velociraptor were aware of crouching behaviors of Protoceratops during high-energy sandstorms and exploited them for successful hunts. Kenneth Carpenter in 1998 considered the Fighting Dinosaurs specimen to be conclusive evidence for theropods as active predators rather than scavengers. He suggested another scenario in which the multiple wounds delivered by the Velociraptor to the Protoceratops throat left the latter animal bleeding to death. As a last effort, the Protoceratops bit the right hand of the predator and trapped it beneath its own weight, causing the eventual death and desiccation of the Velociraptor. The missing limbs of the Protoceratops were afterwards taken by scavengers, and lastly both animals were buried by sand. Given that the Velociraptor is relatively complete, Carpenter suggested that it may have been completely or partially buried by sand. In 2010, David Hone and team reported a new interaction between Velociraptor and Protoceratops based on tooth marks. Several fossils had been collected at the Gate locality of the Bayan Mandahu Formation in 2008, including teeth and body remains of protoceratopsid and velociraptorine dinosaurs. The team referred these elements to Protoceratops and Velociraptor mainly based on their abundance across the unit, although they admitted that the reported remains could represent different, yet related, taxa (in this case, Linheraptor instead of Velociraptor). At least eight body fossils of Protoceratops preserve tooth marks, which were interpreted as feeding traces. Much in contrast to the Fighting Dinosaurs specimen, the tooth marks are inferred to have been produced by the dromaeosaurid during late-stage carcass consumption, either during scavenging or following a group kill. The team stated that feeding by Velociraptor upon Protoceratops was probably a relatively common occurrence in these environments, and that this ceratopsian actively formed part of the diet of Velociraptor. In 2016, Barsbold re-examined the Fighting Dinosaurs specimen and found several anomalies in the Protoceratops individual: both coracoids have small bone fragments indicative of a breakage of the pectoral girdle, and the right forelimb and scapulocoracoid are torn off to the left and backward relative to the torso. He concluded that the prominent displacement of the pectoral elements and right forelimb was caused by an external force that tried to tear them out.
Since this event likely occurred after the death of both animals, or at a point where movement was no longer possible, and since the Protoceratops is missing other body elements, Barsbold suggested that scavengers were the most likely cause. Because Protoceratops is considered to have been a herding animal, another hypothesis is that members of a herd tried to pull out the already buried Protoceratops, causing the joint dislocation of the limbs. However, Barsbold pointed out that there are no associated traces in the specimen to support this latter interpretation. Lastly, he restored the course of the fight with the Protoceratops power-slamming the Velociraptor, which used its foot claws to damage the throat and belly regions and its hand claws to grasp the herbivore's head. Before burial, the deathmatch ended on the ground with the Velociraptor lying on its back right under the Protoceratops. After burial, either a Protoceratops herd or scavengers pulled the buried Protoceratops to the left and backward, leaving predator and prey slightly separated.
Daily activity
In 2010, Nick Longrich examined the relatively large orbital ratio and sclerotic ring of Protoceratops, which he suggested as evidence for a nocturnal lifestyle. Based on the size of its sclerotic ring, Protoceratops had an unusually large eyeball among protoceratopsids. In birds, a medium-sized sclerotic ring indicates that the animal is a predator, a large sclerotic ring indicates that it is nocturnal, and the largest ring size indicates an active nocturnal predator. Eye size is an important adaptation in predators and nocturnal animals because a larger eye provides higher sensitivity and resolution. Because of the energy necessary to maintain a larger eyeball, and the structural weakness of a skull with a larger orbit, Longrich argued that this structure may have been an adaptation for a nocturnal lifestyle. The jaw morphology of Protoceratops—more suitable for processing plant material—and its extreme abundance indicate that it was not a predator, so if it were a diurnal animal it would be expected to have a much smaller sclerotic ring. However, in 2011, Lars Schmitz and Ryosuke Motani measured the dimensions of the sclerotic ring and eye socket in fossil specimens of dinosaurs and pterosaurs, as well as in some living species. They noted that whereas photopic (diurnal) animals have smaller sclerotic rings, scotopic (nocturnal) animals tend to have enlarged rings. Mesopic (cathemeral) animals—which are irregularly active throughout the day and night—fall between these two ranges. Schmitz and Motani separated ecological from phylogenetic factors by examining 164 living species, finding that eye measurements are quite accurate for inferring diurnality, cathemerality, or nocturnality in extinct tetrapods. The results indicated that Protoceratops was a cathemeral herbivore and Velociraptor primarily nocturnal, suggesting that the Fighting Dinosaurs deathmatch may have occurred at twilight or under low-light conditions. Lastly, Schmitz and Motani concluded that ecological niche was likely a main driver in the evolution of daily activity patterns. However, a subsequent study in 2021 found that Protoceratops had a greater capability for nocturnal vision than did Velociraptor.
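A simplified sketch of the kind of comparison underlying such studies appears below: the inner diameter of the sclerotic ring (a proxy for the light-gathering aperture) is set against orbit size, and the resulting ratio is binned into photopic, mesopic, and scotopic categories. The thresholds and specimen measurements are invented placeholders, not values from Schmitz and Motani's dataset or their actual (more sophisticated) statistical method.

```python
# Illustrative sketch of ring-to-orbit comparisons for inferring diel
# activity. Thresholds and measurements are invented, not published data.

def diel_category(ring_inner_diameter_mm: float, orbit_length_mm: float) -> str:
    """Bin a ring/orbit ratio into a diel activity category."""
    ratio = ring_inner_diameter_mm / orbit_length_mm
    if ratio > 0.55:        # invented threshold
        return "scotopic (nocturnal)"
    if ratio < 0.45:        # invented threshold
        return "photopic (diurnal)"
    return "mesopic (cathemeral)"

# Hypothetical measurements for two fossil specimens:
print(diel_category(ring_inner_diameter_mm=24.0, orbit_length_mm=48.0))  # mesopic
print(diel_category(ring_inner_diameter_mm=30.0, orbit_length_mm=50.0))  # scotopic
```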
Paleoenvironment
Bayan Mandahu Formation
Based on general similarities between the vertebrate fauna and sediments of Bayan Mandahu and the Djadokhta Formation, the Bayan Mandahu Formation is considered to be Late Cretaceous in age, roughly Campanian. The dominant lithology is reddish-brown, poorly cemented, fine-grained sandstone with some conglomerate and caliche. Other facies include alluvial (stream-deposited) and eolian (wind-deposited) sediments. It is likely that sediments at Bayan Mandahu were deposited by short-lived rivers and lakes on an alluvial plain (flat land consisting of sediments deposited by highland rivers), with a combination of dune-field paleoenvironments, under a semi-arid climate. The formation is known for its vertebrate fossils in life-like poses, most of which are preserved in unstructured sandstone, indicating catastrophic rapid burial. The paleofauna of Bayan Mandahu is very similar in composition to that of the nearby Djadokhta Formation, with both formations sharing several of the same genera but differing in the exact species. In this formation, P. hellenikorhinus is the representative species, and it shared its paleoenvironment with numerous dinosaurs, such as the dromaeosaurids Linheraptor and Velociraptor osmolskae; the oviraptorids Machairasaurus and Wulatelong; and the troodontids Linhevenator, Papiliovenator, and Philovenator. Other dinosaurs include the alvarezsaurid Linhenykus, the ankylosaurid Pinacosaurus mephistocephalus, and the closely related protoceratopsid Bagaceratops. Additional fauna from this unit comprises nanhsiungchelyid turtles and a variety of squamates and mammals.
Djadokhta Formation
Protoceratops is known from most localities of the Djadokhta Formation in Mongolia, which dates back to the Late Cretaceous, about 71 million to 75 million years ago, having been deposited during a rapid sequence of polarity changes in the late part of the Campanian stage. Dominant sediments at Djadokhta include reddish-orange and pale orange to light gray, medium- to fine-grained sands and sandstones, caliche, and sparse fluvial (river-deposited) deposits. Based on these components, the paleoenvironments of the Djadokhta Formation are interpreted as having had a hot, semiarid climate with large dune fields and several short-lived water bodies, similar to the modern Gobi Desert. It is estimated that at the end of the Campanian age and into the Maastrichtian, the climate shifted to the more mesic (humid/wet) conditions seen in the Nemegt Formation. The Djadokhta Formation is separated into a lower Bayn Dzak Member and an upper Turgrugyin Member. Protoceratops is largely known from both members, with P. andrewsi being the dominant and representative species of the overall formation. The Bayn Dzak Member (mostly the Bayn Dzak locality) has yielded the dromaeosaurids Halszkaraptor and Velociraptor mongoliensis; the oviraptorid Oviraptor; the ankylosaurid Pinacosaurus grangeri; and the troodontid Saurornithoides. Ukhaa Tolgod, a highly fossiliferous locality, is also included in the Bayn Dzak Member, and its dinosaur paleofauna is composed of the alvarezsaurids Kol and Shuvuuia; the ankylosaurid Minotaurasaurus; the birds Apsaravis and Gobipteryx; the dromaeosaurid Tsaagan; the oviraptorids Citipati and Khaan; the troodontids Almas and Byronosaurus; and a new, unnamed protoceratopsid closely related to Protoceratops.
In the Turgrugyin Member (mainly the Tugriken Shireh locality), P. andrewsi shared its paleoenvironment with the bird Elsornis; the dromaeosaurids Mahakala and Velociraptor mongoliensis; and the ornithomimid Aepyornithomimus. P. andrewsi is also abundant at Udyn Sayr, where Avimimus and Udanoceratops have been recovered. The relatively low dinosaur paleodiversity, the small body size of most dinosaurs, and the arid setting of the Djadokhta Formation compared to the Nemegt Formation suggest that Protoceratops and its contemporaneous biota lived in a stressed paleoenvironment (one whose physical factors generate adverse impacts on the ecosystem). In addition, the high occurrence of protoceratopsid fossils in arid-deposited formations indicates that these ceratopsians preferred warm environments. Although P. andrewsi was the predominant protoceratopsid in this formation, tentative remains of P. hellenikorhinus have been reported from the Udyn Sayr and Bor Tolgoi localities, suggesting that both species co-existed. Whereas P. andrewsi is found in eolian sediments (Bayn Dzak or Tugriken Shireh), P. hellenikorhinus is found in eolian-fluvial sediments. As the latter type of sediment is also found in the Bayan Mandahu Formation, it is likely that P. hellenikorhinus preferred environments combining humid and arid conditions.
Taphonomy
In 1993, Jerzykiewicz suggested that many articulated Protoceratops specimens died in the process of trying to free themselves from massive sand bodies that trapped them during sandstorm events, and were not transported by environmental factors. He cited the distinctive posture of some Protoceratops—the body and head arched upwards with the forelimbs tucked in at their sides, a condition known in particular cases as "standing"—the absence of sedimentary structures in the sediments preserving the individuals, and the taphonomic history of the Fighting Dinosaurs itself as evidence for this catastrophic preservation. Given that this posture is exhibited by populations from both the Bayan Mandahu and Djadokhta formations, Jerzykiewicz indicated that this behavior was not unique to any locality. He also considered it unlikely that these Protoceratops individuals died after burying themselves in the sand, given that the specimens are found only in structureless sandstones; that an arched posture would make breathing difficult; and that burrowers are known to excavate headfirst and subhorizontally. Fastovsky in 1997 examined the geology at Tugriken Shireh, providing insights into the taphonomy of Protoceratops. He agreed that the preservation of Protoceratops specimens indicates that they underwent a catastrophic event, such as desert storms, and that the carcasses were not relocated by scavengers or environmental factors. Several isolated burrows found in the sediments at this locality have also been reported penetrating the bone surfaces of some buried Protoceratops individuals. Fastovsky pointed out that these two factors combined indicate that this site hosted high biotic activity, mainly composed of arthropod scavengers, which were also involved in the recycling of Protoceratops carcasses. The flexed position of most buried Protoceratops is indicative of desiccation and the shrinking of ligaments and tendons in the legs, necks, and tails after death. In a 1998 conference abstract presented at the Society of Vertebrate Paleontology, James I. Kirkland and team reported multiple arthropod pupal casts and borings (tunnels) on a largely articulated Protoceratops specimen from Tugriken Shireh, found in 1997.
A considerable number of pupae were found in clusters and singly along the bone surfaces, mostly in the joint areas, where the trace makers would have fed on dried ligaments, tendons, and cartilage. The examined pupae are cylindrical structures with rounded ends, and compare best with pupae attributed to solitary wasps. Additionally, the reported borings have a structure that differs from traces made by dermestid beetles. The team indicated that both the pupae and the boring traces reflect a marked ecological relationship between dinosaur carcasses and a relatively large necrophagous insect taxon. Later, in 2010, Kirkland and Kenneth Bader redescribed and discussed the numerous feeding traces on this Protoceratops specimen, which they nicknamed the Fox Site Protoceratops. They found at least three types of feeding traces on this individual: nearly circular borings, which they found instead to correlate best with feeding traces made by dermestid beetles; semicircular notches at the edges of bones; and destruction of articular surfaces, mostly at the joints of the limbs. The co-workers also noted that the Fox Site Protoceratops preserves associated traces in the encasing sediment, indicative of necrophagous activity after the animal was buried. Kirkland and Bader concluded that adults of a large beetle taxon would detect decaying carcasses buried below the sand and dig down to feed and lay their eggs. After emerging from the eggs, larvae would have fed on the carcass prior to pupating. The last larvae to emerge would have fed on the dried tendons and cartilage in the joint areas—thereby explaining the notably poor preservation of these areas in the specimen—and subsequently chewed on the bone itself prior to pupating. After reaching full maturity, adult beetles would have dug back to the surface, most likely leaving borings through bones, and finally begun to search for new carcasses, thus continuing the recycling of Protoceratops carcasses. In 2010, the paleontologists Yukihide Matsumoto and Mototaka Saneyoshi reported multiple borings and bite traces on the joint areas of articulated Protoceratops and Bagaceratops specimens from the Tugriken Shireh locality of the Djadokhta Formation and the Hermiin Tsav locality of the Barun Goyot Formation, respectively. They interpreted the damaged areas in the Protoceratops specimen as the product of active feeding by burrowing arthropods, most likely insects. These specimens were formally described and discussed in 2011 by Saneyoshi and team, along with fossils of Velociraptor and an ankylosaurid. The reported traces were identified as pits, notches, borings, and channels across the skeletons, most notably at the limb joint areas. The team indicated that these were very likely made by scavenging insects; however, relatively large borings in the ribs and scapulae of one Protoceratops specimen (MPC-D 100/534) indicate that insects were not the only scavengers involved in the bone damage, but also mammals. Given the dry, harsh paleoenvironmental conditions of units like the Djadokhta Formation, medium- to large-sized dinosaur carcasses may have been an important source of nutrition for small animals.
Saneyoshi and team emphasized that the high frequency of feeding traces at the limb joints of numerous specimens, together with reports from previous studies, indicates that small animals may have targeted the collagen found in the joint cartilage of dried dinosaur carcasses as a source of nitrogen, which was scarce in the desert-dry conditions in which these dinosaurs were preserved. In 2011, Fastovsky and colleagues concluded that the juveniles within the nest MPC-D 100/530 were rapidly overwhelmed by a strong sand-bearing event and entombed alive. The sediments of the nest suggest deposition through a dune shift or strong sandstorms, and the orientation of the individuals indicates that the sediments were carried by a prevailing west-southwest wind. Most individuals are preserved with their forelimbs splayed and hindlimbs extended, an arrangement suggesting that the young Protoceratops tried to push against the powerful airstream in the initially loose sand. Prior to or during burial, some may have tried to climb on top of others. Because it is generally accepted that most fossil specimens at Tugriken Shireh were preserved by rapidly migrating dunes and sandstorms, Fastovsky and colleagues suggested that the lee-side borders of the nest would have been the area where the air was sand-free; consequently, all the young Protoceratops may have struggled to reach this area, resulting in their final burial and eventual death. Hone and colleagues in 2014 indicated that two assemblages of Protoceratops at Tugriken Shireh (MPC-D 100/526 and 100/534) suggest that the individuals died simultaneously, rather than accumulating over time. For instance, the block of four juveniles preserves the individuals with near-identical postures and spatial positions, and all of them have their heads facing upwards, which indicates that they were alive at the time of burial. During burial, the animals were most likely not completely restricted in their movements, given that the individuals of MPC-D 100/526 are in relatively normal life positions and have not been disturbed. At least two individuals within this block are preserved with their arms at a level above the legs, suggestive of attempts to move upwards and free themselves. The team also noted the presence of borings on the skulls and skeletons of both assemblages, which may have been produced by insect larvae after the animals died. In 2016, Meguru Takeuchi and team reported numerous fossilized feeding traces preserved on skeletons of Protoceratops from the Bayn Dzak, Tugriken Shireh, and Udyn Sayr localities, as well as on those of other dinosaurs. The preserved traces were reported as pits, notches, borings, and tunnels, which they attributed to scavengers. The diameter of the feeding traces preserved on a Protoceratops skull from Bayn Dzak was larger than that of traces reported on other specimens, indicating that the scavengers responsible were markedly different from the other trace makers.
Cultural significance
Possible influence on the griffin legend
The folklorist and historian of science Adrienne Mayor of Stanford University has suggested that the exquisitely preserved fossil skeletons of Protoceratops, Psittacosaurus, and other beaked dinosaurs, found by ancient Scythian nomads who mined gold in the Tian Shan and Altai Mountains of Central Asia, may have played a role in the image of the mythical creature known as the griffin.
Griffins were described as wolf- or lion-sized quadrupeds with large claws and a raptor-bird-like beak; they laid their eggs in nests on the ground. Dodson in 1996 pointed out that Greek writers began describing the griffin around 675 BC, at the time the first Greek writings about Scythian nomads appeared, although contact with Scythian nomads would have occurred earlier, in the Bronze Age, when Greeks imported tin from Afghanistan, transported on the caravan routes across the Gobi and other deserts. Griffins were described as "guarding" the gold deposits in the arid hills and red sandstone formations of the wilderness below the Tian Shan and Altai mountains. The region of Mongolia and China where many Protoceratops and other dinosaur fossils are found is rich in placer gold runoff from the neighboring mountains, lending some credence to the theory that these fossils played a role in griffin descriptions from the seventh century BC to Roman times. Mayor in 2001 and 2011 refined the hypothesis of Protoceratops as an influence on the griffin legend by analyzing written details and artistic imagery. She also suggested that some other Greek accounts of mythological creatures, such as cyclopes and giants, may have been influenced by fossil discoveries by ancient peoples. In 2016, this hypothesis was criticized by the British paleontologist and paleoartist Mark P. Witton, as it ignores pre-Greek "griffin art and accounts". (No written accounts of griffins are known before ca. 675 BC, when the word gryps/griffin is first attested.) Witton goes on to point out that the wings of traditional griffins are positioned above the shoulder blades, not behind the neck as the frills of Protoceratops are; that the bodies of griffins much more closely resemble the bodies of modern big cats than those of Protoceratops; and that the gold deposits of Central Asia occur hundreds of kilometers from the known Protoceratops fossil remains, among many other inconsistencies. It is simpler, he argues, to understand the griffin as a mythical combination of well-known extant animal species than as an ancient misunderstanding of fossilized collections of bones. Witton later co-published with Richard Hing a 2024 paper expanding on his points regarding the tenuous link between griffins and Protoceratops.
Biology and health sciences
Ornithischians
Animals
1064597
https://en.wikipedia.org/wiki/Pygmy%20marmoset
Pygmy marmoset
Pygmy marmosets are two species of small New World monkeys in the genus Cebuella. They are native to rainforests of the western Amazon Basin in South America. These primates are notable for being the smallest monkeys in the world, weighing just over 100 grams. They are generally found in evergreen and river-edge forests and are gum-feeding specialists, or gummivores. About 83% of the pygmy marmoset population lives in stable troops of two to nine individuals, including a dominant male, a breeding female, and up to four successive litters of offspring. The modal size of a standard stable troop is six individuals. Although most groups consist of family members, some may also include one or two additional adult members. Members of the group communicate using a complex system including vocal, chemical, and visual signals. Three main calling signals depend on the distance the call needs to travel. These monkeys may also make visual displays when threatened or to show dominance. Chemical signaling, using secretions from glands on the chest and genital area, allows the female to indicate to the male when she is able to reproduce. The female gives birth to twins twice a year, and parental care is shared among the group. The pygmy marmoset has been viewed as somewhat different from typical marmosets, most of which are classified in the genera Callithrix (where they were placed in a subgenus) and Mico, and is thus accorded its own genus, Cebuella, within the family Callitrichidae. Their biggest threats are habitat loss and the pet trade.
Evolution and taxonomy
Debate has arisen among primatologists concerning the proper genus in which to place the pygmy marmoset. An examination of the interstitial retinol-binding protein nuclear gene (IRBP) in three marmoset species showed that Callithrix as constructed in the 1990s also needed to include C. pygmaea to be monophyletic, and that the times of separation of pygmaea and the argentata and jacchus species groups from one another are less than 5 million years, as might be expected for species of the same genus. However, the subsequent separation of the argentata and jacchus species groups into different genera (the argentata group having been moved to Mico) justifies maintaining a separate genus for the pygmy marmosets, as Callithrix is no longer paraphyletic. The two described species of pygmy marmoset are C. pygmaea, the western pygmy marmoset, and C. niveiventris, the eastern pygmy marmoset. Few morphological differences exist between these species, as they may differ only slightly in color, and they are separated only by geographical barriers, including large rivers in South America. The evolution of this genus diverged in terms of body mass from typical primates, with a high rate of body-mass reduction. This involved large decreases in prenatal and postnatal growth rates, furthering the thought that progenesis played a role in the evolution of this animal.
Physical description
Pygmy marmosets are the smallest true monkeys, with a head-body length of roughly 12 to 15 cm and a tail somewhat longer than the body. The average adult body weight is just over 100 grams, with females being slightly heavier as the only sexual dimorphism. The fur colour is a mixture of brownish-gold, grey, and black on the back and head, and yellow, orange, and tawny on the underparts. The tail has black rings, and the face has flecks of white on the cheeks and a white vertical line between the eyes. The pygmy marmoset has many adaptations for arboreal living, including the ability to rotate its head 180° and sharp, claw-like nails used to cling to branches and trunks of trees.
Its dental morphology is adapted to feeding on gum, with specialised incisors that are used to gouge trees and stimulate sap flow. The cecum is larger than usual to allow for the longer time gum takes to break down. Pygmy marmosets walk on all four limbs and can leap several metres between branches.
Ecology
Geographic range and habitat
Pygmy marmosets can be found in much of the western Amazon Basin, in Brazil, Colombia, Ecuador, Peru, and Bolivia. The western pygmy marmoset, C. pygmaea, occurs in the state of Amazonas, Brazil, eastern Peru, southern Colombia, and north-eastern Ecuador. The eastern pygmy marmoset, C. niveiventris, is also found in Amazonas, as well as in Acre, Brazil, eastern Peru, and northern Bolivia. The distribution of both species is often limited by rivers. They typically live in the understory of mature evergreen forests, often near rivers. Population density is correlated with food-tree availability. They can be found from ground level up to about 20 m into the trees, but generally do not enter the top of the canopy. They are often found in areas with standing water for more than three months of the year.
Diet
These monkeys have a specialized diet of tree gum. They gnaw holes in the bark of appropriate trees and vines with their specialized dentition to elicit the production of gum. When the sap puddles up in the hole, they lap it up with their tongues. They also lie in wait for insects, especially butterflies, which are attracted to the sap holes, and they supplement their diet with nectar and fruit. A group's home range is small, and feeding is usually concentrated on one or two trees at a time; when those become depleted, the group moves to a new home range. Brown-mantled tamarins are generally sympatric with pygmy marmosets and often raid pygmy marmosets' gum holes. Pygmy marmosets have insect-like claws, known as tegulae, adapted to the high degree of claw-clinging behavior associated with plant-exudate exploitation. Claw-clinging is primarily used during feeding, but also during plant-exudate foraging.
Behaviour
A pygmy marmoset group, ranging from two to nine members, contains one or two adult males and one or two adult females, including a single breeding female and her offspring. The interbirth interval ranges from 149 to 746 days. In contrast to other callitrichines, no relationship exists between the number of adult males and the number of infants and offspring. A significant positive relationship does exist, though, between the number of juveniles and the number of adult and subadult group members. Young marmosets typically remain in the group for two consecutive birth cycles. The pygmy marmoset uses special types of communication to give alerts and warnings to its family members, including chemical, vocal, and visual signals. This communication is believed to promote group cohesion and the avoidance of other family groups.
Social systems
Infant pygmy marmosets, along with their parents, twins, and other siblings, form cooperative care groups. Babbling, or vocalizing, by the infant marmoset is a key part of its relationships with its family members and a major part of its development. As the infant develops, the babbling gradually changes to resemble, and eventually become, adult vocalization. Many similarities exist between the development of vocalization in infant pygmy marmosets and speech in infant humans.
Vocalizing gives the infant advantages, such as increased care, and allows the entire family to coordinate their activities without seeing each other. Siblings also participate in infant care. Infant marmosets require the most attention, so having more family members participating in the care decreases the cost for any individual and also teaches parenting skills to the juvenile marmosets. Members of the group, usually female, may even put off their own reproduction through a temporary cessation of ovulation to care for the offspring of others in the group. The ideal number of caregivers for an infant marmoset has been shown to be around five individuals. Caregivers are responsible for finding food for the infants and helping the father watch for predators. Pygmy marmosets are not seasonal breeders and usually give birth to twins once or twice a year. Single births, however, occur in 16% of pregnancies and triplet births in 8%. The pygmy marmoset is usually monogamous, though some variation happens within the species in terms of breeding systems. Polyandry also occurs, as male marmosets are responsible for carrying the infants on their backs. Having a second male to carry the offspring can be beneficial, as marmoset litters are often twins and this decreases the physiological cost to any particular male. The daily range of pygmy marmosets, however, is relatively small, which decreases the rate of polyandry. Male and female pygmy marmosets show differences in foraging and feeding behavior, although male and female dominance and aggressive behavior vary within the species. Males have less time to search out food sources and forage due to the constraints of their infant-caring responsibilities and predator vigilance. Without an infant to carry, female pygmy marmosets have greater freedom to forage, giving them an apparent feeding priority, which may serve to compensate mothers for the energetic costs of carrying and lactating for two offspring at a time. Since feeding priority is also given to females without offspring, however, this argument is weakened. Instead, female feeding priority may have evolved through sexual selection. Females may choose mates that invest more time in infant care and predator vigilance. Such males have less time to look for food, allowing the female feeding priority. Communication Pygmy marmosets are well known for their communication abilities, including an intricate system of calls. The trill is used during feeding, foraging, and when travelling and the group is close together. The J-call is a series of fast notes repeated by the caller and is used at medium distances. Both calls are used as contact calls. The long call is used when the group is spread out over distances greater than 10 m or in response to a neighboring group. The pygmy marmoset uses the trill for short-distance communication, J-calls for intermediate distances, and long calls for long distances; these have respectively decreasing frequencies. They interpret these calls not only by type, but also through subtle sonic variance, identifying the individual calling. Research based on audio playback tests shows that calls recorded from different individuals in captivity varied significantly in all seven auditory parameters analyzed for each type of call. Behavioral responses to trills were greatest when the caller was the dominant male of the group. Responses to J-calls were greatest when the caller was the monkey's mate or a same-sex monkey from outside the group. 
Varying responses to individual callers were only observed when the call was given spontaneously by another animal rather than being played back from a recording, with one exception: male monkeys responded to playbacks of their own calls differently from those of other monkeys, when the call was played back from a familiar location. The pygmy marmoset is thought to react at first to the type of call that is being made and then adjust its behavior slightly to react to the specific individual that is making the call. This allows the marmoset to react appropriately to all calls, but show some variation when the call gives extra information. Environmental factors play a role in communication by affecting the frequency of the signal and how far the signal can travel and still be audible to communicate the desired message. Since pygmy marmosets are often found in the rain forest, plant life and the humid atmosphere add to the normal absorption and scattering of sound. Because low-frequency calls are affected less by these disturbances than high-frequency ones, they are used for communication across longer distances. The pygmy marmoset changes the characteristics of its calls when its social environment is changed. Adult marmosets show modifications in the structure of their calls, which mimic those of their group members. In addition to changes of existing calls, novel calls may be heard from marmosets after pairing. Pygmy marmosets have other ways to communicate information about matters such as a female's ovulatory state. New World monkeys do not show genital swelling during ovulation as female Old World monkeys do. Instead, a lack of female aggression towards males can serve as a signal of ovulation. Scent glands on her chest, anus, and genitals are also rubbed on surfaces, leaving chemical signals about the reproductive state of the female. Pygmy marmosets also perform visual displays such as strutting, back-arching, and piloerection when they feel threatened or to show dominance. Conservation Both species of pygmy marmosets are listed as vulnerable on the IUCN Red List of Threatened Species. They are threatened by habitat loss in some areas of their range, and by the pet trade in others (e.g. Ecuador). Interaction between humans and pygmy marmosets is associated with a number of behavioral changes in the animal, including social play and vocalization, both of which are important to communication between animals in the species. Particularly in areas of heavy tourism, pygmy marmosets have a tendency to be less noisy, less aggressive, and less playful with other individuals. They are also pushed into higher strata of the rainforest than they would normally prefer. Tourism in areas native to the pygmy marmoset is also correlated with increased capture of the animal. Due to their small size and relatively docile nature, captured pygmy marmosets are often found in exotic pet trades. Capture causes even more behavioral variations, including a decrease in both the number and the sound level of vocalizations. Pygmy marmosets can also be found at local zoos, where they live in groups. As pets The value of "finger monkeys", as pygmy marmosets are known in the pet trade, is associated with their being the smallest primates in the world. Listed as vulnerable to extinction, they are rarely found on the market for purchase. Prices range from $1,000 to $4,000. Generally, a pygmy marmoset's lifespan is 15 to 20 years; they are known to have shorter lives in the wild, mainly because they fall out of trees. 
Keeping these creatures as pets also involves the expense of the essentials needed to maintain them. Creating an environment similar to their native habitat is important. As pets, they are often fed fruits, insects, and smaller lizards. A baby pygmy marmoset needs to be fed every two hours for at least two weeks. Understanding their natural diet is also important because it helps maintain their good health by providing the protein, calcium, and other nutrients they need to survive. In the United States, each state has different regulations regarding ownership of pet monkeys. Another factor that needs to be considered is that a regular veterinarian might not be able to provide medical evaluations or care; one would need to seek out a veterinarian with a primate specialization. In South America, either importing or exporting these creatures is illegal. Understanding the laws within those countries is important when considering owning or taking care of a pygmy marmoset. Many people do not agree that pygmy marmosets should be pets. The argument in their favor is usually that they have a longer lifespan when they are in good human care. However, the UK RSPCA says they should "not be considered as pets in the accepted sense of the word. They are wild, undomesticated animals that cannot be house trained or fully tamed". In popular culture Fingerlings, the hit toy of Christmas 2017 produced by WowWee, is based on pygmy marmosets.
Biology and health sciences
New World monkeys
Animals
1064839
https://en.wikipedia.org/wiki/Bose%20gas
Bose gas
An ideal Bose gas is a quantum-mechanical phase of matter, analogous to a classical ideal gas. It is composed of bosons, which have an integer value of spin and abide by Bose–Einstein statistics. The statistical mechanics of bosons were developed by Satyendra Nath Bose for a photon gas and extended to massive particles by Albert Einstein, who realized that an ideal gas of bosons would form a condensate at a low enough temperature, unlike a classical ideal gas. This condensate is known as a Bose–Einstein condensate. Introduction and examples Bosons are quantum mechanical particles that follow Bose–Einstein statistics, or equivalently, that possess integer spin. These particles can be elementary, such as the Higgs boson, the photon, the gluon, the W/Z bosons and the hypothetical graviton, or composite, such as the hydrogen atom, the 16O atom, the deuterium nucleus, and mesons. Additionally, some quasiparticles in more complex systems, such as plasmons (quanta of charge density waves), can also be considered bosons. The first model that treated a gas of several bosons was the photon gas, developed by Bose. This model leads to a better understanding of Planck's law and black-body radiation. The photon gas can easily be expanded to any kind of ensemble of massless non-interacting bosons. The phonon gas, also known as the Debye model, is an example in which the normal modes of vibration of the crystal lattice of a metal can be treated as effective massless bosons. Peter Debye used the phonon gas model to explain the behaviour of the heat capacity of metals at low temperature. An interesting example of a Bose gas is an ensemble of helium-4 atoms. When a system of 4He atoms is cooled down to temperature near absolute zero, many quantum mechanical effects are present. Below 2.17 K, the ensemble starts to behave as a superfluid, a fluid with almost zero viscosity. The Bose gas is the simplest quantitative model that explains this phase transition. Notably, when a gas of bosons is cooled down, it forms a Bose–Einstein condensate, a state where a large number of bosons occupy the lowest energy, the ground state, and quantum effects such as wave interference are macroscopically visible. The theory of Bose–Einstein condensates and Bose gases can also explain some features of superconductivity, where charge carriers couple in pairs (Cooper pairs) and behave like bosons. As a result, superconductors behave as if they had no electrical resistivity at low temperatures. The equivalent model for half-integer-spin particles (like electrons or helium-3 atoms), which follow Fermi–Dirac statistics, is called the Fermi gas (an ensemble of non-interacting fermions). At low enough particle number density and high temperature, both the Fermi gas and the Bose gas behave like a classical ideal gas. Macroscopic limit The thermodynamics of an ideal Bose gas is best calculated using the grand canonical ensemble. The grand potential for a Bose gas is given by: Ω = kB T Σi gi ln(1 − z e^(−β εi)), where each term in the sum corresponds to a particular single-particle energy level εi; gi is the number of states with energy εi; z is the absolute activity (or "fugacity"), which may also be expressed in terms of the chemical potential μ by defining z = e^(β μ), and β defined as β = 1/(kB T), where kB is the Boltzmann constant and T is the temperature. All thermodynamic quantities may be derived from the grand potential and we will consider all thermodynamic quantities to be functions of only the three variables z, β (or T), and V. 
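As a numerical aside, Bose–Einstein statistics can be illustrated by the mean occupation of a single level, 1/(e^(β(ε − μ)) − 1), and its classical limit. The following minimal sketch (in Python; the function names and the sample values are illustrative choices, not from any source) shows how the quantum and classical occupancies converge once β(ε − μ) is large:

import numpy as np

def bose_einstein_occupancy(x):
    # Mean occupation of a level with x = beta * (energy - mu), x > 0.
    return 1.0 / (np.exp(x) - 1.0)

def maxwell_boltzmann_occupancy(x):
    # Classical (dilute / high-energy) limit of the same occupation.
    return np.exp(-x)

# For small x the Bose enhancement is large; for x >> 1 the two agree.
for x in (0.1, 1.0, 5.0):
    print(x, bose_einstein_occupancy(x), maxwell_boltzmann_occupancy(x))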
All partial derivatives are taken with respect to one of these three variables while the other two are held constant. The permissible range of z is from negative infinity to +1, as any value beyond this would give an infinite number of particles to states with an energy level of 0 (it is assumed that the energy levels have been offset so that the lowest energy level is 0). Macroscopic limit, result for uncondensed fraction Following the procedure described in the gas in a box article, we can apply the Thomas–Fermi approximation, which assumes that the average energy is large compared to the energy difference between levels so that the above sum may be replaced by an integral. This replacement gives the macroscopic grand potential function Ωm, which is close to Ω. The degeneracy dg may be expressed for many different situations by the general formula: dg = (1/Γ(α)) (E^(α−1)/Ec^α) dE, where α is a constant, Ec is a critical energy, and Γ is the gamma function. For example, for a massive Bose gas in a box, α = 3/2 and the critical energy is given by: 1/(β Ec)^(3/2) = f V/Λ³, where Λ is the thermal wavelength, and f is a degeneracy factor (f = 1 for simple spinless bosons). For a massive Bose gas in a harmonic trap we will have α = 3 and the critical energy is given by: Ec = ħω/f^(1/3), where V(r) = mω²r²/2 is the harmonic potential. It is seen that, for the gas in a box, Ec is a function of volume only. This integral expression for the grand potential evaluates to: Ωm = − Liα+1(z)/(β (β Ec)^α), where Lis(x) is the polylogarithm function. The problem with this continuum approximation for a Bose gas is that the ground state has been effectively ignored, giving a degeneracy of zero for zero energy. This inaccuracy becomes serious when dealing with the Bose–Einstein condensate and will be dealt with in the next sections. As will be seen, even at low temperatures the above result is still useful for accurately describing the thermodynamics of just the uncondensed portion of the gas. Limit on number of particles in uncondensed phase, critical temperature The total number of particles is found from the grand potential by N = −∂Ωm/∂μ = Liα(z)/(β Ec)^α. This increases monotonically with z (up to the maximum z = +1). The behaviour when approaching z = 1 is however crucially dependent on the value of α (i.e., dependent on whether the gas is 1D, 2D, 3D, whether it is in a flat or harmonic potential well). For α > 1, the number of particles only increases up to a finite maximum value, i.e., Nm is finite at z = 1: Nm = ζ(α)/(β Ec)^α, where ζ(α) is the Riemann zeta function (using Liα(1) = ζ(α)). Thus, for a fixed number of particles Nm, the largest possible value that β can have is a critical value βc. This corresponds to a critical temperature Tc = 1/(kB βc), below which the Thomas–Fermi approximation breaks down (the continuum of states simply can no longer support this many particles, at lower temperatures). The above equation can be solved for the critical temperature: kB Tc = Ec (Nm/ζ(α))^(1/α). For example, for the three-dimensional Bose gas in a box (α = 3/2 and using the above noted value of Ec) we get: kB Tc = (2πħ²/m) (N/(f V ζ(3/2)))^(2/3). For α ≤ 1, there is no upper limit on the number of particles (Nm diverges as z approaches 1), and thus for example for a gas in a one- or two-dimensional box (α = 1/2 and α = 1 respectively) there is no critical temperature. Inclusion of the ground state The above problem raises the question for α > 1: if a Bose gas with a fixed number of particles is lowered down below the critical temperature, what happens? The problem here is that the Thomas–Fermi approximation has set the degeneracy of the ground state to zero, which is wrong. There is no ground state to accept the condensate and so particles simply 'disappear' from the continuum of states. 
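Before turning to the ground state, the critical-temperature formula for the three-dimensional box can be checked numerically. The sketch below (Python with scipy.constants; the rubidium-87 density is an arbitrary illustrative value, and f = 1 is assumed) evaluates kB Tc = (2πħ²/m)(n/ζ(3/2))^(2/3):

import math
from scipy.constants import hbar, k, u  # SI: reduced Planck constant, Boltzmann constant, atomic mass unit

ZETA_3_2 = 2.612375348685488  # Riemann zeta(3/2)

def bec_critical_temperature(n, mass):
    # Ideal Bose gas in a 3D box: Tc = (2*pi*hbar^2 / (m*kB)) * (n / zeta(3/2))^(2/3)
    return (2.0 * math.pi * hbar**2 / (mass * k)) * (n / ZETA_3_2) ** (2.0 / 3.0)

# Illustrative numbers: a dilute gas of Rb-87 atoms at n = 1e20 per cubic metre
# gives Tc of a few hundred nanokelvin, the scale seen in BEC experiments.
print(bec_critical_temperature(1e20, 86.909 * u))  # ~4e-7 K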
It turns out, however, that the macroscopic equation gives an accurate estimate of the number of particles in the excited states, and it is not a bad approximation to simply "tack on" a ground state term to accept the particles that fall out of the continuum: N = N0 + Liα(z)/(β Ec)^α, where N0 is the number of particles in the ground state condensate. Thus in the macroscopic limit, when T < Tc, the value of z is pinned to 1 and N0 takes up the remainder of particles. For T > Tc there is the normal behaviour, with N0 = 0. This approach gives the fraction of condensed particles in the macroscopic limit: N0/N = 1 − (T/Tc)^α. Limitations of the macroscopic Bose gas model The above standard treatment of a macroscopic Bose gas is straightforward, but the inclusion of the ground state is somewhat inelegant. Another approach is to include the ground state explicitly (contributing a term in the grand potential, as in the section below); this gives rise to an unrealistic fluctuation catastrophe: the number of particles in any given state follows a geometric distribution, meaning that when condensation happens at T < Tc and most particles are in one state, there is a huge uncertainty in the total number of particles. This is related to the fact that the compressibility becomes unbounded for T < Tc. Calculations can instead be performed in the canonical ensemble, which fixes the total particle number; however, the calculations are not as easy. Practically, however, the aforementioned theoretical flaw is a minor issue, as the most unrealistic assumption is that of non-interaction between bosons. Experimental realizations of boson gases always have significant interactions, i.e., they are non-ideal gases. The interactions significantly change the physics of how a condensate of bosons behaves: the ground state spreads out, the chemical potential saturates to a positive value even at zero temperature, and the fluctuation problem disappears (the compressibility becomes finite). See the article Bose–Einstein condensate. Approximate behaviour in small gases For smaller, mesoscopic, systems (for example, with only thousands of particles), the ground state term can be more explicitly approximated by adding in an actual discrete level at energy ε = 0 in the grand potential: Ω = Ωm + kB T ln(1 − z), which gives instead N = Nm + z/(1 − z). Now, the behaviour is smooth when crossing the critical temperature, and z approaches 1 very closely but does not reach it. This can now be solved down to absolute zero in temperature. Figure 1 shows the results of the solution to this equation for , with , which corresponds to a gas of bosons in a box. The solid black line is the fraction of excited states for and the dotted black line is the solution for . The blue lines are the fraction of condensed particles N0/N. The red lines plot values of the negative of the chemical potential μ and the green lines plot the corresponding values of z. The horizontal axis is the normalized temperature τ defined by τ = T/Tc. It can be seen that each of these parameters become linear in τ^α in the limit of low temperature and, except for the chemical potential, linear in 1/τ^α in the limit of high temperature. As the number of particles increases, the condensed and excited fractions tend towards a discontinuity at the critical temperature. The equation for the number of particles can be written in terms of the normalized temperature as: For a given N and τ, this equation can be solved for τ^α and then a series solution for z can be found by the method of inversion of series, either in powers of τ^α or as an asymptotic expansion in inverse powers of τ^α. 
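As an aside, the condensed fraction just given is trivial to evaluate numerically; a minimal sketch (Python; α = 3/2 is assumed for the three-dimensional box):

def condensate_fraction(t_over_tc, alpha=1.5):
    # N0/N = 1 - (T/Tc)^alpha below Tc, zero above it (macroscopic limit).
    return max(0.0, 1.0 - t_over_tc**alpha)

for tau in (0.0, 0.5, 0.9, 1.0, 1.2):
    print(tau, condensate_fraction(tau))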
From these expansions, we can find the behavior of the gas near the critical temperature and in the Maxwell–Boltzmann limit as T approaches infinity. In particular, we are interested in the limit as N approaches infinity, which can be easily determined from these expansions. This approach to modelling small systems may in fact be unrealistic, however, since the variance in the number of particles in the ground state is very large, equal to the number of particles. In contrast, the variance of particle number in a normal gas is only the square root of the particle number, which is why it can normally be ignored. This high variance is due to the choice of using the grand canonical ensemble for the entire system, including the condensate state. Thermodynamics Expanded out, the grand potential is: Ω = kB T ln(1 − z) − Liα+1(z)/(β (β Ec)^α). All thermodynamic properties can be computed from this potential. The following table lists various thermodynamic quantities calculated in the limit of low temperature and high temperature, and in the limit of infinite particle number. An equal sign (=) indicates an exact result, while an approximation symbol indicates that only the first few terms of a series are shown. It is seen that all quantities approach the values for a classical ideal gas in the limit of large temperature. The above values can be used to calculate other thermodynamic quantities. For example, the relationship between internal energy and the product of pressure and volume is the same as that for a classical ideal gas over all temperatures: U = α P V. A similar situation holds for the specific heat at constant volume. The entropy is given by: Note that in the limit of high temperature, we have which, for α = 3/2, is simply a restatement of the Sackur–Tetrode equation. In one dimension, bosons with a delta-function interaction behave as fermions: they obey the Pauli exclusion principle. In one dimension, the Bose gas with delta interaction can be solved exactly by the Bethe ansatz. The bulk free energy and thermodynamic potentials were calculated by Chen-Ning Yang. In the one-dimensional case, correlation functions have also been evaluated. In one dimension, the Bose gas is equivalent to the quantum non-linear Schrödinger equation.
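The inversion of the particle-number equation can also be done numerically rather than by series: below the condensation point one solves n Λ³ = Li_(3/2)(z) for z in (0, 1). A sketch using the mpmath library (assumed available; the function name and sample targets are illustrative):

from mpmath import polylog

def fugacity_from_phase_space_density(n_lambda3, alpha=1.5, iterations=60):
    # Invert n*Lambda^3 = Li_alpha(z) for z in (0, 1) by bisection.
    # Only valid below the maximum Li_alpha(1) = zeta(3/2) ~ 2.612;
    # beyond that the gas condenses and z is pinned to 1.
    lo, hi = 0.0, 1.0
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        if polylog(alpha, mid) < n_lambda3:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(fugacity_from_phase_space_density(0.1))   # dilute: z << 1, near classical
print(fugacity_from_phase_space_density(2.6))   # near zeta(3/2): z -> 1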
Physical sciences
States of matter
Physics
1064876
https://en.wikipedia.org/wiki/Potter%27s%20wheel
Potter's wheel
In pottery, a potter's wheel is a machine used in the shaping (known as throwing) of clay into round ceramic ware. The wheel may also be used during the process of trimming excess clay from leather-hard dried ware that is stiff but malleable, and for applying incised decoration or rings of colour. Use of the potter's wheel became widespread throughout the Old World but was unknown in the Pre-Columbian New World, where pottery was handmade by methods that included coiling and beating. A potter's wheel may occasionally be referred to as a "potter's lathe". However, that term is better used for another kind of machine that is used for a different shaping process, turning, similar to that used for shaping of metal and wooden articles. The potter's wheel is an important component in the creation of arts and crafts products. The techniques of jiggering and jolleying can be seen as extensions of the potter's wheel: in jiggering, a shaped tool is slowly brought down onto the plastic clay body that has been placed on top of the rotating plaster mould. The jigger tool shapes one face, the mould the other. The term is specific to the shaping of flat ware, such as plates, whilst a similar technique, jolleying, refers to the production of hollow ware, such as cups. History Prior to using a wheel, early pottery-making cultures used techniques such as pinching, coiling, paddling, and shaping to create ceramic forms. In addition, several of these techniques continued to be used on pots on or off the wheel to decorate or create more rounded or symmetrical shapes. Most early ceramic ware was hand-built using a simple coiling technique in which clay was rolled into long threads that were then pinched and smoothed together to form the body of a vessel. In the coiling method of construction, all the energy required to form the main part of a piece is supplied indirectly by the hands of the potter. Early ceramics built by coiling were often placed on mats or large leaves to allow them to be worked more conveniently. The evidence of this lies in mat or leaf impressions left in the clay of the base of the pot. This arrangement allowed the potter to rotate the vessel during construction, rather than walk around it to add coils of clay. The oldest forms of the potter's wheel (called tourneys or slow wheels) were probably developed as an extension to this procedure. Tournettes, in use around 3500 BC in the Near East, were turned slowly by hand or by foot while coiling a pot. Only a small range of vessels were fashioned on the tournette, suggesting that it was used by a limited number of potters. The introduction of the slow wheel increased the efficiency of hand-powered pottery production. In the mid to late 3rd millennium BC, the fast wheel was developed, which operated on the flywheel principle. It utilised energy stored in the rotating mass of the heavy stone wheel itself to speed the process. This wheel was wound up and charged with energy by kicking, or by pushing it around with a stick, providing angular momentum. The fast wheel enabled a new process of pottery-making to develop, called throwing, in which a lump of clay was placed centrally on the wheel and then squeezed, lifted and shaped as the wheel turned. The process tends to leave rings on the inside of the pot and can be used to create thinner-walled pieces and a wider variety of shapes, including stemmed vessels, so wheel-thrown pottery can be distinguished from handmade ware. Potters could now produce many more pots per hour, a first step towards industrialization. 
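The flywheel principle is easy to quantify: the stored energy is the rotational kinetic energy E = ½Iω² of the wheel, with I = ½mr² for a solid disc. A rough sketch (Python; the stone-wheel mass, radius, and speed are invented purely for illustration):

import math

def flywheel_energy_joules(mass_kg, radius_m, rpm):
    # Solid disc: I = 0.5*m*r^2; stored energy E = 0.5*I*omega^2.
    inertia = 0.5 * mass_kg * radius_m**2
    omega = rpm * 2.0 * math.pi / 60.0  # rev/min -> rad/s
    return 0.5 * inertia * omega**2

# A heavy stone wheel of ~50 kg and 0.4 m radius kicked up to ~120 rpm
# stores a few hundred joules, released gradually as the potter throws.
print(flywheel_energy_joules(50.0, 0.4, 120.0))  # ~316 J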
Many modern scholars suggest that the potter's wheel was first developed by the ancient Sumerians in Mesopotamia. A stone potter's wheel found at the Sumerian city of Ur in modern-day Iraq has been dated to about 3129 BC, but fragments of wheel-thrown pottery of an even earlier date have been recovered in the same area. However, southeastern Europe and China have also been claimed as possible places of origin. A potter's wheel in western Ukraine, from the Cucuteni–Trypillia culture, has been dated to the middle of the 5th millennium BC; it is the oldest ever found and precedes the earliest use of the potter's wheel in Mesopotamia by several hundred years. On the other hand, Egypt is considered as "being the place of origin of the potter's wheel. It was here that the turntable shaft was lengthened about 3000 BC and a flywheel added. The flywheel was kicked and later was moved by pulling the edge with the left hand while forming the clay with the right. This led to the counterclockwise motion for the potter's wheel which is almost universal." Thus, the exact origin of the wheel is not wholly clear yet. In the Iron Age, the potter's wheel in common use had a turning platform about over the floor, connected by a long axle to a heavy flywheel at ground level. This arrangement allowed the potter to keep the turning wheel rotating by kicking the flywheel with the foot, leaving both hands free for manipulating the vessel under construction. However, from an ergonomic standpoint, sweeping the foot from side to side against the spinning hub is rather awkward. At some point, an alternative solution was invented that involved a crankshaft with a lever that converted up-and-down motion into rotary motion. In Japan, the potter's wheel first appeared in the Asuka or Sueki period (552–710 CE), when wares became more sophisticated and complicated. In addition to the new technology of the wheel, firing was also changed to a much higher temperature in a rudimentary kiln. The industrialization continued through the Nara period (710–794) and into the Heian, or Fujiwara, period (794–1185). With higher-temperature firings, new glazes followed (green, yellowish brown, and white), and in addition new styles and techniques of glazing emerged. Ceramic ware in China developed from very similar beginnings to that in Japan. The history of Chinese pottery began in the Neolithic era, from about 4300 BC down to 2000 BC. Unlike Japan, which focused on production of everyday wares, China created mostly decorative pieces with few opportunities for industrialization and production of ceramic wares. Because China focused on decorative wares, most of its pottery was centered around porcelain instead of the earthenwares seen almost everywhere else, and it used the potter's wheel for the development of porcelain clay culture. Porcelain took off during the Ming Dynasty and the Qing Dynasty (1644–1911), when the iconic blue and white porcelain ceramics emerged. Several places in China mix traditional elements and methods with modern design and technologies. Native Americans have been creating ceramics by hand, and in more modern eras started incorporating a wheel into their work. Pottery can be identified in the Southwest of North America dating back to 150 CE and has been an important part of Native American culture for over 2,000 years. 
Historically, Native Americans used the coiling method to achieve their decorative and functional pieces, and the technology of the potter's wheel did not appear until the arrival of Europeans. However, smaller turntables or slow wheels could have been used occasionally. The use of the motor-driven wheel has become common in modern times, particularly with craft potters and educational institutions, although human-powered ones are still in use and are preferred by some studio potters. Industrialization Social consequences arising from these technological advancements include increased economic activity in the sale of pottery created using the potter's wheel and the industrialization of ceramics processes. The potter's wheel greatly increased the production rate of ceramics, which allowed for more products to be created. With the industrialization of ceramics in Japan, the craft also lost some of its historical value, and some techniques and meanings of the ceramics were lost in the process. Techniques of throwing A skilled potter can quickly throw a vessel from up to of clay. Alternatively, by throwing a vessel and adding coils of clay then throwing again, pots may be made even taller, with the heat of a blowlamp being used to firm each thrown section before adding the next coil. Similarly, multiple sections may be thrown and combined to create large vessels. Large wheels and masses of clay can also allow for multiple people to work on a pot simultaneously, which can create very large ceramic pieces. This practice is used in Jingdezhen, China, where three or more potters may work on one pot at the same time. There are a variety of approaches to throwing, though almost all involve the following steps in some form: centering the clay on the wheel, opening a hole in the clay and creating a doughnut-shaped ring of clay around the base of the pot, then raising or shaping the walls to create the pot's final shape. The specifics of these steps, including the motions of the hands, can vary from culture to culture, as well as from potter to potter. In most cultures, the wheel spins counterclockwise and the right hand is placed on the outside of the pot as it is thrown. Japanese pottery is thrown oppositely, with the wheel spinning clockwise and the right hand on the interior of the pot. However, modern wheels powered by electric motors often allow for rotation in either direction, allowing the potter to choose which direction works best for their technique, hand dominance and personal preferences. The potter's wheel in myth In Ancient Egyptian mythology, the deity Khnum was said to have formed the first humans on a potter's wheel.
Technology
Industrial machinery
null
9662955
https://en.wikipedia.org/wiki/Convection%20%28heat%20transfer%29
Convection (heat transfer)
Convection (or convective heat transfer) is the transfer of heat from one place to another due to the movement of fluid. Although often discussed as a distinct method of heat transfer, convective heat transfer involves the combined processes of conduction (heat diffusion) and advection (heat transfer by bulk fluid flow). Convection is usually the dominant form of heat transfer in liquids and gases. Note that this definition of convection is only applicable in heat transfer and thermodynamic contexts. It should not be confused with the dynamic fluid phenomenon of convection, which is typically referred to as natural convection in thermodynamic contexts in order to distinguish the two. Overview Convection can be "forced" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). Thermal expansion of fluids may also force convection. In other cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called "natural convection". An example is the draft in a chimney or around any fire. In natural convection, an increase in temperature produces a reduction in density, which in turn causes fluid motion due to pressures and forces when fluids of different densities are affected by gravity (or any g-force). For example, when water is heated on a stove, hot water from the bottom of the pan is displaced (or forced up) by the colder, denser liquid, which falls. After heating has stopped, mixing and conduction from this natural convection eventually result in a nearly homogeneous density, and even temperature. Without the presence of gravity (or conditions that cause a g-force of any type), natural convection does not occur, and only forced-convection modes operate. The convection heat transfer mode comprises two mechanisms. In addition to energy transfer due to specific molecular motion (diffusion), energy is transferred by bulk, or macroscopic, motion of the fluid. This motion is associated with the fact that, at any instant, large numbers of molecules are moving collectively or as aggregates. Such motion, in the presence of a temperature gradient, contributes to heat transfer. Because the molecules in aggregate retain their random motion, the total heat transfer is then due to the superposition of energy transport by random motion of the molecules and by the bulk motion of the fluid. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion. Types Two types of convective heat transfer may be distinguished: Free or natural convection: when fluid motion is caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. In the absence of an internal source, when the fluid is in contact with a hot surface, its molecules separate and scatter, causing the fluid to be less dense. As a consequence, the fluid is displaced while the cooler, denser fluid sinks. Thus, the hotter volume transfers heat towards the cooler volume of that fluid. Familiar examples are the upward flow of air due to a fire or hot object and the circulation of water in a pot that is heated from below. Forced convection: when a fluid is forced to flow over the surface by an internal source such as fans, stirring, or pumps, creating an artificially induced convection current. In many real-life applications (e.g. 
heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection). Internal and external flow can also classify convection. Internal flow occurs when a fluid is enclosed by a solid boundary, such as when flowing through a pipe. An external flow occurs when a fluid extends indefinitely without encountering a solid surface. Both of these types of convection, either natural or forced, can be internal or external, because they are independent of each other. The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts. Further classification can be made depending on the smoothness and undulations of the solid surfaces. Not all surfaces are smooth, though the bulk of the available information deals with smooth surfaces. Wavy irregular surfaces are commonly encountered in heat transfer devices, including solar collectors, regenerative heat exchangers, and underground energy storage systems. They have a significant role to play in the heat transfer processes in these applications. Since they bring in an added complexity due to the undulations in the surfaces, they need to be tackled with mathematical finesse through elegant simplification techniques. Also, they do affect the flow and heat transfer characteristics, thereby behaving differently from straight smooth surfaces. For a visual experience of natural convection, a glass filled with hot water and some red food dye may be placed inside a fish tank with cold, clear water. The convection currents of the red liquid may be seen to rise and fall in different regions, then eventually settle, illustrating the process as heat gradients are dissipated. Newton's law of cooling Convection-cooling is sometimes loosely assumed to be described by Newton's law of cooling. Newton's law states that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings while under the effects of a breeze. The constant of proportionality is the heat transfer coefficient. The law applies when the coefficient is independent, or relatively independent, of the temperature difference between object and environment. In classical natural convective heat transfer, the heat transfer coefficient is dependent on the temperature. However, Newton's law does approximate reality when the temperature changes are relatively small, and for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference. Convective heat transfer The basic relationship for heat transfer by convection is: q = h A (T − Tf), where q is the heat transferred per unit time, A is the area of the object, h is the heat transfer coefficient, T is the object's surface temperature, and Tf is the fluid temperature. The convective heat transfer coefficient is dependent upon the physical properties of the fluid and the physical situation. Values of h have been measured and tabulated for commonly encountered fluids and flow situations.
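A minimal numerical sketch of both relationships (Python; the h, area, temperature, and rate values are arbitrary placeholders, not tabulated data):

import math

def convective_heat_rate(h, area, t_surface, t_fluid):
    # q = h * A * (T - Tf), in watts for SI inputs.
    return h * area * (t_surface - t_fluid)

def newton_cooling_temperature(t_initial, t_env, decay_rate, time):
    # Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k*t).
    return t_env + (t_initial - t_env) * math.exp(-decay_rate * time)

# A 0.5 m^2 plate at 80 C in 20 C air with h ~ 25 W/(m^2 K) loses 750 W;
# a body at 80 C in a 20 C environment with k = 0.01 1/s, after 60 s:
print(convective_heat_rate(25.0, 0.5, 80.0, 20.0))
print(newton_cooling_temperature(80.0, 20.0, 0.01, 60.0))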
Physical sciences
Thermodynamics
Physics
13699607
https://en.wikipedia.org/wiki/Moment%20%28unit%29
Moment (unit)
A moment () is a medieval unit of time. The movement of a shadow on a sundial covered 40 moments in a solar hour, a twelfth of the period between sunrise and sunset. The length of a solar hour depended on the length of the day, which, in turn, varied with the season. Although the length of a moment in modern seconds was therefore not fixed, on average, a medieval moment corresponded to 90 seconds. A solar day can be divided into 24 hours of either equal or unequal lengths, the former being called natural or equinoctial, and the latter artificial. The hour was divided into four (quarter-hours), 10 , or 40 . The unit was used by medieval computists before the introduction of the mechanical clock and the base 60 system in the late 13th century. The unit would not have been used in everyday life. For medieval commoners the main marker of the passage of time was the call to prayer at intervals throughout the day. The earliest reference found to the moment is from the 8th century writings of the Venerable Bede, who describes the system as 1 solar hour = 4 = 5 lunar = 10 = 15 = 40 . Bede was referenced five centuries later by both Bartholomeus Anglicus in his early encyclopedia (On the Properties of Things), as well as Roger Bacon, by which time the moment was further subdivided into 12 ounces of 47 atoms each, although no such divisions could ever have been used in observation with equipment in use at the time.
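Since a moment was 1/40 of a solar hour, and a solar hour was 1/12 of the time between sunrise and sunset, its modern length follows directly from the length of the day. A small sketch (Python; the daylight values are illustrative):

def moment_seconds(daylight_hours):
    # One solar hour = daylight / 12; one moment = solar hour / 40.
    solar_hour_s = daylight_hours * 3600.0 / 12.0
    return solar_hour_s / 40.0

# At equinox (12 h of daylight) a moment is exactly 90 s; on a 15 h
# midsummer day it stretches to 112.5 s.
print(moment_seconds(12.0), moment_seconds(15.0))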
Physical sciences
Time
Basics and measurement
1632972
https://en.wikipedia.org/wiki/Nuclear%20transfer
Nuclear transfer
Nuclear transfer is a form of cloning. The process involves removing the nucleus, and thus the DNA, from an oocyte (unfertilised egg), and injecting a donor nucleus containing the DNA to be cloned. In rare instances, the newly constructed cell will divide normally, replicating the new DNA while remaining in a pluripotent state. If the cloned cells are placed in the uterus of a female mammal, a cloned organism develops to term in rare instances. This is how Dolly the Sheep and many other species were cloned. Cows are commonly cloned to select those that have the best milk production. On 24 January 2018, two monkey clones were reported to have been created with the technique for the first time. Despite this, the low efficiency of the technique has prompted some researchers, notably Ian Wilmut, creator of Dolly the cloned sheep, to abandon it. Tools and reagents Nuclear transfer is a delicate process that is a major hurdle in the development of cloning technology. Materials used in this procedure are a microscope, a holding pipette (small vacuum) to keep the oocyte in place, and a micropipette (hair-thin needle) capable of extracting the nucleus of a cell using a vacuum. For some species, such as the mouse, a drill is used to pierce the outer layers of the oocyte. Various chemical reagents are used to increase cloning efficiency. Microtubule inhibitors, such as nocodazole, are used to arrest the oocyte in M phase, during which its nuclear membrane is dissolved. Chemicals are also used to stimulate oocyte activation; when these are applied, the nuclear membrane is completely dissolved. Somatic cell nuclear transfer Somatic Cell Nuclear Transfer (SCNT) is the process by which the nucleus of an oocyte (egg cell) is removed and replaced with the nucleus of a somatic (body) cell (examples include skin, heart, or nerve cells). The two entities fuse to become one, and factors in the oocyte cause the somatic nucleus to reprogram to a pluripotent state. The cell then contains genetic information identical to that of the donated somatic cell. After this cell is stimulated to begin dividing, an embryo will develop under the proper conditions. Stem cells can be extracted 5–6 days later and used for research. Reprogramming Genomic reprogramming is the key biological process behind nuclear transfer. Currently unidentified reprogramming factors present in oocytes are capable of initiating a cascade of events that can reset the mature, specialized cell back to an undifferentiated, embryonic state. These factors are thought to be mainly proteins of the nucleus.
Technology
Biotechnology
null
1633043
https://en.wikipedia.org/wiki/Agaricus
Agaricus
Agaricus is a genus of mushroom-forming fungi containing both edible and poisonous species, with over 400 members worldwide and possibly as many again in disputed or newly discovered species. The genus includes the common ("button") mushroom (Agaricus bisporus) and the field mushroom (A. campestris), the dominant cultivated mushrooms of the West. Description Members of Agaricus are characterized by having a fleshy cap or pileus, from the underside of which grow a number of radiating plates or gills, on which are produced the naked spores. They are distinguished from other members of their family, Agaricaceae, by their chocolate-brown spores. Members of Agaricus also have a stem or stipe, which elevates it above the object on which the mushroom grows, or substrate, and a partial veil, which protects the developing gills and later forms a ring or annulus on the stalk. Taxonomy Several origins of the genus name Agaricus have been proposed. It possibly originates from ancient Sarmatia Europaea, where the Agari people, the promontory Agarum and the river Agarus were known (all located on the northern shore of the Sea of Azov, probably near modern Berdiansk in Ukraine). Note also Greek agarikón, "a sort of tree fungus" (There has been an Agaricon Adans. genus, treated by Donk in Persoonia 1:180.) For many years, members of the genus Agaricus were given the generic name Psalliota, and this can still be seen in older books on mushrooms. All proposals to conserve Agaricus against Psalliota or vice versa have so far been considered superfluous. Donk reports that Linnaeus' name is devalidated (so the proper author citation apparently is "L. per Fr., 1821") because Agaricus was not linked to Tournefort's name. Linnaeus places both Agaricus Dill. and Amanita Dill. in synonymy, but the name is truly a replacement for Amanita Dill., which would require A. quercinus, not A. campestris, to be the type. This question is compounded because Fries himself used Agaricus roughly in Linnaeus' sense (which leads to issues with Amanita), and A. campestris was eventually excluded from Agaricus by Karsten and was apparently in Lepiota at the time Donk wrote this, commenting that a type conservation might become necessary. The alternate name for the genus, Psalliota, derived from the Greek psalion/ψάλιον, "ring", was first published by Fries (1821) as trib. Psalliota. The type is Agaricus campestris (widely accepted, except by Earle, who proposed A. cretaceus). Paul Kummer (not Quélet, who merely excluded Stropharia) was the first to elevate the tribe to a genus. Psalliota was the tribe containing the type of Agaricus, so when separated, it should have caused the rest of the genus to be renamed, but this is not what happened. Phylogeny The use of phylogenetic analysis to determine evolutionary relationships amongst Agaricus species has increased the understanding of this taxonomically difficult genus, although much work remains to be done to fully delineate infrageneric relationships. Prior to these analyses, the genus Agaricus, as circumscribed by Rolf Singer, was divided into 42 species grouped into five sections based on reactions of mushroom tissue to air or various chemical reagents, as well as subtle differences in mushroom morphology. Restriction fragment length polymorphism analysis demonstrated this classification scheme needed revision. Subdivisions As of 2018, this genus is divided into 6 subgenera and more than 20 sections: Subgenus Agaricus Section Agaricus This is the group around the type species of the genus, the popular edible A. 
campestris which is common across the Holarctic temperate zone, and has been introduced to some other regions. One of the more ancient lineages of the genus, it contains species typically found in open grassland such as A. cupreobrunneus, and it also includes at least one undescribed species. Their cap surface is whitish to pale reddish-brown and smooth to slightly fibrous, the flesh usually without characteristic smell, fairly soft, whitish, and remaining so after injury, application of KOH, or Schäffer's test (aniline and HNO3). A. annae may also belong here, as might A. porphyrocephalus, but the flesh of the latter blushes red when bruised or cut, and it has an unpleasant smell of rotten fish when old; these traits are generally associated with subgenus Pseudochitonia, in particular section Chitonioides. The A. bresadolanus/radicatus/romagnesii group which may be one or several species is sometimes placed here, but may be quite distinct and belong to subgenus Spissicaules. Subgenus Flavoagaricus Section Arvense Konrad & Maubl. (sometimes named Arvensis) Traditionally contained about 20 rather large species similar to the horse mushroom A. arvensis in six subgroups. Today, several additional species are recognized – in particular in the A. arvensis species complex – and placed here, such as A. aestivalis, A. augustus, A. caroli, A. chionodermus, A. deserticola (formerly Longula texensis), A. fissuratus, A. inapertus (formerly Endoptychum depressum), A. macrocarpus, A. nivescens, A. osecanus, A. silvicola and the doubtfully distinct A. essettei, A. urinascens, and the disputed taxa A. abruptibulbus, A. albertii, A. altipes, A. albolutescens, A. brunneolus, A. excellens and A. macrosporus. It also includes A. subrufescens which started to be widely grown and traded under various obsolete and newly-invented names in the early 21st century, as well as the Floridan A. blazei with which the Brazilian A. subrufescens was often confused in the past. They have versatile heterothallic life cycles, are found in a variety of often rather arid habitats, and typically have a smooth white to scaly light brown cap. The flesh, when bruised, usually turns distinctly yellow to pinkish in particular on the cap, while the end of the stalk may remain white; a marked yellow stain is caused by applying KOH. Their sweetish smell of almond extract or marzipan due to benzaldehyde and derived compounds distinguishes them from the section Xanthodermatei, as does a bright dark-orange to brownish-red coloration in Schäffer's test. Many members of this subgenus are highly regarded as food, and even medically beneficial, but at least some are known to accumulate cadmium and other highly toxic chemicals from the environment, and may not always be safe to eat. Subgenus Minores A group of buff-white to reddish-brown species. Often delicate and slender, the typical members of this subgenus do not resemble the larger Agaricus species at a casual glance, but have the same telltale chocolate-brown gills at spore maturity. Their flesh has a barely noticeable to pronounced sweetish smell, typically almond-like, turns yellowish to brownish-red when cut or bruised at least in the lower stalk, yellow to orange with KOH, and orange to red in Schäffer's test. Species such as A. aridicola (formerly known as Gyrophragmium dunalii), A. colpeteii, A. columellatus (formerly Araneosa columellata), A. diminutivus, A. dulcidulus, A. lamelliperditus, A. luteomaculatus, A. porphyrizon, A. semotus and A. 
xantholepis are included here, but delimitation to, and indeed distinctness from, subgenus Flavoagaricus is a long-standing controversy. Unlike these, however, subgenus Minores contains no choice edible species, and may even include some slightly poisonous ones; most are simply too small to make collecting them for food worthwhile, and their edibility is unknown. Section Leucocarpi Includes A. leucocarpus. Section Minores Includes A. comtulus and A. huijsmanii. Unnamed section Includes A. candidolutescens and an undescribed relative. Subgenus Minoriopsis Somewhat reminiscent of subgenus Minores and, like it, closely related to subgenus Flavoagaricus, it contains species such as A. martinicensis and A. rufoaurantiacus. Subgenus Pseudochitonia This highly diverse clade of mid-sized to largish species makes up much of the bulk of the genus' extant diversity, and this subgenus contains numerous as-yet-undescribed species. It includes both the most prized edible as well as the most notoriously poisonous Agaricus, and some of its sections are in overall appearance more similar to the more distantly related Agaricus proper and Flavoagaricus than to their own closest relatives. Some species in this subgenus, such as A. goossensiae and A. rodmanii, are not yet robustly assigned to one of the sections. Section Bohusia Includes A. bohusii, which resembles one of the dark-capped Flavoagaricus or Xanthodermatei but does not stain yellow with the standard (10%) KOH testing solution. It is a woodland species, edible when young, but when mature (though then easily distinguished from similar species) it may be slightly poisonous. Other members of this section include A. crassisquamosus, A. haematinus, and A. pseudolangei. Section Brunneopicti A section notable for containing a considerable number of undescribed species in addition to A. bingensis, A. brunneopictus, A. brunneosquamulosus, A. chiangmaiensis, A. duplocingulatus, A. megacystidiatus, A. niveogranulatus, A. sordidocarpus, A. subsaharianus, and A. toluenolens. Section Chitonioides Contains species such as A. bernardii and the doubtfully distinct A. bernardiiformis, A. gennadii, A. nevoi, A. pequinii, A. pilosporus and A. rollanii, which strongly resemble the members of section Duploannulatae and are as widely distributed. However, their flesh tends to discolor more strongly red when bruised or cut, with the discoloration slowly getting stronger. Their smell is usually also more pronouncedly umami-like, in some even intensely so. Some are edible and indeed considered especially well-tasting, while the unusual A. maleolens, which may also belong here, has an overpowering aroma that renders it inedible, except perhaps in small amounts as a vegan fish-sauce substitute. Section Crassispori Related to section Xanthodermatei as traditionally circumscribed, it includes such species as A. campestroides, A. lamellidistans, and A. variicystis. Section Cymbiformes He, Chuankid, Hyde, Cheewangkoon & Zhao A section proposed in 2018, it is closely related to the traditional section Xanthodermatei. The type species A. angusticystidiatus from Thailand is a smallish beige Agaricus with characteristic boat-shaped basidiospores. It has a strong unpleasant smell like members of section Xanthodermatei, but unlike these, its flesh does not change color when bruised, but turns dark reddish-brown when cut, and neither application of KOH nor Schäffer's test elicits a change in color. 
Section Duploannulatae (also known as section Bivelares or Hortenses) Traditionally often included in section Agaricus as subsection Bitorques, it seems to belong to a much younger radiation. It unites robust species, usually with a thick, almost fleshy ring, which inhabit diverse but often nutrient-rich locations. Some are well-known edibles; as they are frequently found along roads and in similar polluted places, they may not be safe to eat if collected from the wild. Their flesh is rather firm, white, with no characteristic smell, in some species turning markedly reddish when bruised or cut (though this may soon fade again), and generally changing color barely if at all after application of KOH or Schäffer's test. Based on DNA analysis of ITS1, ITS2, and 5.8S sequences, the studied species of this section could be divided into six distinct clades, four of which correspond to well-known species from the temperate Northern Hemisphere: A. bisporus, A. bitorquis (and the doubtfully distinct A. edulis), A. cupressicola and A. vaporarius. The other two clades comprise the A. devoniensis (including A. subperonatus) and A. subfloccosus (including A. agrinferus) species complexes. Additional members of this section not included in that study are A. cappellianus, A. cupressophilus, A. subsubensis, A. taeniatus, A. tlaxcalensis, and at least one undescribed species. The cultivated mushrooms traded as A. sinodeliciosus also belong here, though their relationship to the A. devoniensis complex and A. vaporarius is unclear. Section Flocculenti Includes A. erectosquamosus and A. pallidobrunneus; a more distant undescribed relative of these two may also belong in this section. Section Hondenses (disputed) Traditionally included in section Xanthodermatei sensu lato, this clade may be included therein as the most basal branch, or considered a section in its own right. It includes such species as A. biannulatus, A. freirei and its North American relatives A. grandiomyces, A. hondensis, and probably also A. phaeolepidotus. They are very similar to section Xanthodermatei sensu stricto in all aspects, except for a weaker discoloration tending towards reddish rather than chrome yellow when bruised. Section Nigrobrunnescentes Includes A. biberi, A. caballeroi, A. desjardinii, A. erthyrosarx, A. fuscovelatus, A. nigrobrunnescens, A. padanus, A. pattersoniae, and probably also A. boisselettii. Section Rubricosi Includes A. dolichopus, A. kunmingensis, A. magnivelaris, A. variabilicolor, and at least two undescribed species. Section Sanguinolenti Usually found in woodland. Brownish cap with a fibrous surface, typically felt-like but sometimes scaly. The fairly soft flesh turns pink, blood-red or orange when cut or scraped, in particular the outer layer of the stalk, but does not change color after application of KOH or Schäffer's test. Some North American species traditionally placed here, such as A. amicosus and A. brunneofibrillosus, do not seem to be closely related to the section's type species A. silvaticus (including A. haemorrhoidarius which is sometimes considered a distinct species), and represent at least a distinct subsection. Other species often placed in this section are A. benesii, A. dilutibrunneus, A. impudicus, A. koelerionensis, A. langei and A. variegans; not all of these may actually belong here. They are generally (though not invariably) regarded as edible and tasty. Section Trisulphurati (disputed) Includes the A. 
trisulphuratus species complex, which is often placed in the genus Cystoagaricus, but seems to be a true Agaricus closely related to the traditional section Xanthodermatei. Their stalk is typically bright yellow-orange, quite unlike that of other Agaricus, as is the scaly cap. A. trisulphuratus was the type species of the obsolete polyphyletic subgenus Lanagaricus, whose former species are now placed in various other sections. Section Xanthodermatei As outlined by Singer in 1948, this section includes species with various characteristics similar to the type species A. xanthodermus. The section forms a single clade based on analysis of ITS1+2. They are either bright white all over, or have a cap densely flecked with brownish scales or tufts of fibers. The ring is usually large but thin and veil-like. Most inhabit woodland, and in general they have a more or less pronounced unpleasant smell of phenolic compounds such as hydroquinone. As food, they should all be avoided, because even though they are occasionally reported to be eaten without ill effect, the chemicals they contain give them an acrid, metallic taste, especially when cooked, and are liable to cause severe gastrointestinal upset. Their flesh, at least in the lower stalk, turns pale yellow to intensely reddish-ochre when bruised or cut; more characteristic, however, is the bright yellow reaction with KOH, while Schäffer's test is negative. Apart from A. xanthodermus, the core group of this section contains species such as A. atrodiscus, A. californicus, A. endoxanthus and the doubtfully distinct A. rotalis, A. fuscopunctatus, A. iodosmus, A. laskibarii, A. microvolvatulus, A. menieri, A. moelleri, A. murinocephalus, A. parvitigrinus, A. placomyces, A. pocillator, A. pseudopratensis, A. tibetensis, A. tollocanensis, A. tytthocarpus, A. xanthodermulus, A. xanthosarcus, as well as at least four undescribed species, and possibly A. cervinifolius and the doubtfully distinct A. infidus. Whether such species as A. bisporiticus, A. nigrogracilis and A. pilatianus are more closely related to the mostly Eurasian core group, or to the more basal lineage here separated as section Hondenses, requires clarification. Subgenus Spissicaules The flesh of members of this subgenus tends to turn more or less pronouncedly yellowish in the lower stalk, where the skin is often rough and scaly, and reddish in the cap. They typically resemble the darker members of subgenus Flavoagaricus, with a sweet smell and mild taste; like that subgenus, Spissicaules belongs to the smaller of the two main groups of the genus, but they form an entirely different branch therein. While some species are held to be edible, others are considered unappetizing or even slightly poisonous. Also includes A. lanipes and A. maskae, which probably belong to section Rarolentes or Spissicaules, and possibly also A. bresadolanus and its doubtfully distinct relatives A. radicatus/romagnesii. Section Amoeni Includes A. amoenus and A. gratolens. Section Rarolentes Includes A. albosquamosus and A. leucolepidotus. Section Spissicaules (Hainem.) Kerrigan Includes species such as A. leucotrichus/litoralis (of which A. spissicaulis is a synonym, but see also Geml et al. 2004) and A. litoraloides. Most significantly, some species have a persistent and unpleasant rotting-wood smell entirely unlike the sweet aroma of Flavoagaricus, and while not known to be poisonous, are certainly unpalatable. Section Subrutilescentes Includes A. brunneopilatus, A. linzhinensis and A. subrutilescens. 
Somewhat similar to section Sanguinolenti or the dark-capped species of section Xanthodermatei, but the flesh does not show a pronounced red or yellow color change when cut or bruised. Edibility is disputed. Selected species As late as 2008, the fungal genus Agaricus was believed to contain about 200 species worldwide, but since then molecular phylogenetic studies have revalidated several disputed species, resolved some species complexes, and aided in the discovery and description of a wide range of mostly tropical species that were formerly unknown to science. As of 2020, the genus is believed to contain no fewer than 400 species, and possibly many more. The medicinal mushroom known in Japan as Echigoshirayukidake (越後白雪茸) was initially also thought to be an Agaricus, either a subspecies of Agaricus "blazei" (i.e. A. subrufescens), or a new species. It was eventually identified as a sclerotium of the crust-forming bark fungus Ceraceomyces tessulatus, which is not particularly closely related to Agaricus. Several secotioid (puffball-like) fungi have in recent times been recognized as highly aberrant members of Agaricus, and are now included here. These typically inhabit deserts where few fungi – and even fewer of the familiar cap-and-stalk mushroom shape – grow. Another desert species, A. zelleri, was erroneously placed in the present genus and is now known as Gyrophragmium californicum. In addition, the scientific names Agaricus and – even more so – Psalliota were historically often used as a "wastebasket taxon" for any and all similar mushrooms, regardless of their actual relationships. Species either confirmed or suspected to belong in this genus include: Agaricus abramsii Agaricus abruptibulbus – abruptly-bulbous agaricus, flat-bulb mushroom (disputed) Agaricus aestivalis Agaricus agrinferus (disputed) Agaricus agrocyboides Agaricus alabamensis Agaricus alachuanus Agaricus albidoperonatus Agaricus albertii Bon (1988) (disputed) Agaricus alboargillascens Agaricus alboides Agaricus albolutescens (disputed) Agaricus albosanguineus Agaricus albosquamosus Agaricus alligator Agaricus altipes Møller (often united with A.aestivalis) Agaricus amanitiformis Agaricus amicosus Agaricus amoenomyces Agaricus amoenus Agaricus andrewii Freeman Agaricus angelicus Agaricus angusticystidiatus Agaricus anisarius Agaricus annae Agaricus annulospecialis Agaricus approximans Agaricus arcticus Agaricus argenteopurpureus Agaricus argenteus Agaricus argentinus Agaricus argyropotamicus Agaricus argyrotectus Agaricus aridicola Geml, Geiser & Royse (2004) (formerly in Gyrophragmium) Agaricus aristocratus Agaricus arizonicus Agaricus armandomyces Agaricus arorae Agaricus arrillagarum Agaricus arvensis – horse mushroom Agaricus atrodiscus Agaricus augustus – the prince Agaricus aurantioviolaceus Agaricus auresiccescens Agaricus australiensis Agaricus austrovinaceus Agaricus azoetes Agaricus babosiae Agaricus badioniveus Agaricus bajan-agtensis Agaricus balchaschensis Agaricus bambusae Agaricus bambusophilus Agaricus basianulosus Agaricus beelii Agaricus bellanniae Agaricus benesii Agaricus benzodorus Agaricus bernardii – salt-loving mushroom Agaricus bernardiiformis (disputed) Agaricus berryessae Agaricus biannulatus Mua, L.A.Parra, Cappelli & Callac (2012) (Europe) Agaricus biberi Agaricus bicortinatellus Agaricus bilamellatus Agaricus bingensis Agaricus bisporatus Agaricus bisporiticus (Asia) Agaricus bisporus – cultivated/button/portobello mushroom (includes A.brunnescens) Agaricus bitorquis –
pavement mushroom, banded agaric Agaricus bivelatoides Agaricus bivelatus Agaricus blatteus Agaricus blazei Murrill (often confused with A. subrufescens) Agaricus blockii Agaricus bobosi Agaricus bohusianus L.A.Parra (2005) (Europe) Agaricus bohusii Agaricus boisselettii Agaricus boltonii Agaricus bonii Agaricus bonussquamulosus Agaricus brasiliensis Fr. (often confused with A. subrufescens) Agaricus bresadolanus Agaricus bruchii  Agaricus brunneofibrillosus (formerly in A.fuscofibrillosus) Agaricus brunneofulva Agaricus brunneofulvus Agaricus brunneolus (disputed) Agaricus brunneopictus Agaricus brunneopilatus Agaricus brunneosquamulosus Agaricus brunneostictus Agaricus buckmacadooi Agaricus bugandensis Agaricus bukavuensis Agaricus bulbillosus Agaricus burkillii Agaricus butyreburneus Agaricus caballeroi L.A.Parra, G.Muñoz & Callac (2014) (Spain) Agaricus caesifolius  Agaricus californicus – California agaricus Agaricus callacii Agaricus calongei Agaricus campbellensis  Agaricus campestris – field/meadow mushroom Agaricus campestroides Agaricus campigenus Agaricus candidolutescens Agaricus candussoi Agaricus capensis Agaricus cappellianus Agaricus cappellii Agaricus caribaeus Agaricus carminescens Agaricus carminostictus Agaricus caroli Agaricus catenariocystidiosus Agaricus catenatus Agaricus cellaris Agaricus cervinifolius Agaricus cerinupileus Agaricus chacoensis Agaricus chartaceus Agaricus cheilotulus Agaricus chiangmaiensis Agaricus chionodermus Agaricus chlamydopus Agaricus chryseus Agaricus cinnamomellus Agaricus circumtectus Agaricus ciscoensis Agaricus citrinidiscus Agaricus coccyginus Agaricus collegarum Agaricus colpeteii Agaricus columellatus (formerly in Araneosa) Agaricus comptuloides Agaricus comtulellus Agaricus comtuliformis Agaricus comtulus Agaricus coniferarum Agaricus cordillerensis Agaricus crassisquamosus Agaricus cretacellus Agaricus cretaceus Agaricus croceolutescens Agaricus crocodilinus Agaricus crocopeplus Agaricus cruciquercorum Agaricus cuniculicola  Agaricus cupreobrunneus – brown field mushroom Agaricus cupressicola Agaricus cupressophilus Kerrigan (2008) (California) Agaricus curanilahuensis Agaricus cylindriceps Agaricus deardorffensis Agaricus dennisii Agaricus depauperatus Agaricus deplanatus  Agaricus deserticola G.Moreno, Esqueda & Lizárraga (2010) – gasteroid agaricus (formerly in Longula) Agaricus desjardinii Agaricus devoniensis Agaricus diamantanus Agaricus dicystis Agaricus didymus Agaricus dilatostipes Agaricus dilutibrunneus Agaricus diminutivus Agaricus dimorphosquamatus Agaricus diobensis Agaricus diospyros Agaricus dolichopus Agaricus ducheminii  Agaricus dulcidulus – rosy wood mushroom (sometimes in A.semotus) Agaricus duplocingulatus Agaricus ealaensis Agaricus earlei Agaricus eastlandensis Agaricus eburneocanus Agaricus edmondoi Agaricus elfinensis Agaricus elongatestipes Agaricus eludens Agaricus endoxanthus Agaricus entibigae Agaricus erectosquamosus Agaricus erindalensis Agaricus erthyrosarx Agaricus erythrotrichus Agaricus essettei (disputed) Agaricus eutheloides Agaricus evertens  Agaricus excellens (disputed) Agaricus exilissimus Agaricus eximius Agaricus fiardii Agaricus fibuloides Agaricus ficophilus Agaricus fimbrimarginatus Agaricus fissuratus Agaricus flammicolor Agaricus flavicentrus Agaricus flavidodiscus Agaricus flavistipus Agaricus flavitingens Agaricus flavopileatus Agaricus flavotingens Agaricus flocculosipes Agaricus floridanus Agaricus fontanae Agaricus fragilivolvatus Agaricus freirei Agaricus friesianus Agaricus 
fulvoaurantiacus Agaricus fuscofolius Agaricus fuscopunctatus (Thailand) Agaricus fuscovelatus Agaricus gastronevadensis Agaricus gemellatus Agaricus gemlii Agaricus gemloides Agaricus gennadii Agaricus gilvus Agaricus glaber Agaricus glabrus Agaricus globocystidiatus Agaricus globosporus Agaricus goossensiae Agaricus grandiomyces Agaricus granularis Agaricus gratolens Agaricus greigensis Agaricus greuteri Agaricus griseicephalus Agaricus griseopunctatus Agaricus griseorimosus Agaricus griseovinaceus Agaricus guachari Agaricus guidottii Agaricus haematinus Agaricus haematosarcus Agaricus hahashimensis Agaricus halophilus Agaricus hannonii Agaricus hanthanaensis Agaricus heimii Agaricus heinemannianus Agaricus heinemanniensis Agaricus heinemannii Agaricus herinkii Agaricus herradurensis Agaricus heterocystis Agaricus hillii Agaricus hispidissimus  Agaricus hondensis – felt-ringed agaricus Agaricus horakianus Agaricus horakii Agaricus hornei Agaricus hortensis Agaricus huijsmanii Courtec. (2008) Agaricus hupohanae Agaricus hypophaeus Agaricus iesu-et-marthae Agaricus ignicolor Agaricus ignobilis  Agaricus impudicus – tufted wood mushroom Agaricus inapertus (formerly in Endoptychum) Agaricus incultorum Agaricus indistinctus Agaricus inedulis Agaricus infelix Agaricus infidus (disputed) Agaricus inilleasper Agaricus inoxydabilis Agaricus inthanonensis Agaricus iocephalopsis Agaricus iodolens Agaricus iodosmus Agaricus iranicus Agaricus jacarandae Agaricus jacobi Agaricus jezoensis Agaricus jingningensis Agaricus jodoformicus Agaricus johnstonii Agaricus julius Agaricus junquitensis Agaricus kai Agaricus kauffmanii Agaricus kerriganii Agaricus kiawetes Agaricus kipukae Agaricus kivuensis Agaricus koelerionensis Agaricus kriegeri Agaricus kroneanus Agaricus kuehnerianus Agaricus kunmingensis Agaricus lacrymabunda Agaricus laeticulus Agaricus lamellidistans Agaricus lamelliperditus Agaricus lanatoniger Agaricus lanatorubescens Agaricus langei (= A.fuscofibrillosus) Agaricus lanipedisimilis Agaricus lanipes – European princess Agaricus laparrae Agaricus laskibarii Agaricus lateriticolor Agaricus leptocaulis Agaricus leptomeleagris Agaricus leucocarpus Agaricus leucolepidotus  Agaricus leucotrichus Møller (disputed) Agaricus lignophilus  Agaricus lilaceps – giant cypress agaricus Agaricus linzhinensis Agaricus litoralis – coastal mushroom (includes A.spissicaulis) Agaricus litoraloides Agaricus lividonitidus Agaricus lodgeae Agaricus lotenensis Agaricus lucifugus Agaricus ludovicii Agaricus lusitanicus Agaricus luteofibrillosus Agaricus luteoflocculosus Agaricus luteomaculatus Agaricus luteopallidus Agaricus luteotactus Agaricus lutosus Agaricus luzonensis Agaricus maclovianus Agaricus macmurphyi Agaricus macrocarpus Agaricus macrolepis (Pilát & Pouzar) Boisselet & Courtec. 
(2008)  Agaricus macrosporus (disputed) Agaricus macrosporoides Agaricus magni Agaricus magniceps Agaricus magnivelaris Agaricus maiusculus Agaricus malangelus Agaricus maleolens Agaricus mangaoensis Agaricus manilensis Agaricus marisae Agaricus martineziensis Agaricus martinicensis Agaricus maskae Agaricus masoalensis Agaricus matrum Agaricus medio-fuscus Agaricus megacystidiatus Agaricus megalosporus Agaricus meijeri Agaricus melanosporus  Agaricus menieri Agaricus merrillii Agaricus mesocarpus Agaricus microchlamidus  Agaricus micromegathus Agaricus microspermus Agaricus microviolaceus Agaricus microvolvatulus Agaricus midnapurensis Agaricus minimus Agaricus minorpurpureus   Agaricus moelleri – inky/dark-scaled mushroom (formerly in A.placomyces, includes A.meleagris) Agaricus moellerianus Agaricus moelleroides Agaricus moronii Agaricus multipunctum Agaricus murinocephalus (Thailand) Agaricus nanaugustus Kerrigan Agaricus nebularum Agaricus neimengguensis Agaricus nemoricola Agaricus nevoi Agaricus nigrescentibus Agaricus nigrobrunnescens Agaricus nigrogracilis Agaricus nitidipes Agaricus niveogranulatus Agaricus niveolutescens Agaricus nivescens Agaricus nobelianus Agaricus nothofagorum Agaricus novoguineensis Agaricus ochraceidiscus Agaricus ochraceosquamulosus Agaricus ochrascens Agaricus oenotrichus Agaricus oligocystis Agaricus olivellus Agaricus ornatipes Agaricus osecanus Agaricus pachydermus Agaricus padanus Agaricus pallens Agaricus pallidobrunneus Agaricus pampeanus Agaricus panziensis Agaricus parasilvaticus Agaricus parasubrutilescens Agaricus parvibicolor Agaricus parvitigrinus Agaricus patialensis Agaricus patris  Agaricus pattersoniae Agaricus pearsonii Agaricus peligerinus Agaricus pequinii Agaricus perdicinus Agaricus perfuscus  Agaricus perobscurus – American princess Agaricus perrarus Agaricus perturbans Agaricus petchii Agaricus phaeocyclus  Agaricus phaeolepidotus Agaricus phaeoxanthus Agaricus pietatis  Agaricus pilatianus Agaricus pilosporus  Agaricus placomyces (includes A.praeclaresquamosus) Agaricus planipileus Agaricus pleurocystidiatus  Agaricus pocillator Agaricus porosporus Agaricus porphyrizon  Agaricus porphyrocephalus Møller Agaricus porphyropos Agaricus posadensis Agaricus praefoliatus Agaricus praemagniceps Agaricus praemagnus Agaricus praerimosus Agaricus pratensis Agaricus pratulorum Agaricus projectellus Agaricus proserpens Agaricus pseudoargentinus Agaricus pseudoaugustus Agaricus pseudocomptulus Agaricus pseudolangei Agaricus pseudolutosus Agaricus pseudomuralis Agaricus pseudoniger Agaricus pseudopallens Agaricus pseudoplacomyces Agaricus pseudopratensis Agaricus pseudopurpurellus Agaricus pseudoumbrella Agaricus pulcherrimus Agaricus pulverotectus Agaricus punjabensis Agaricus purpurellus Agaricus purpureofibrillosus Agaricus purpureoniger Agaricus purpureosquamulosus Agaricus purpurlesquameus Agaricus putidus Agaricus puttemansii Agaricus radicatus (disputed) Agaricus reducibulbus Agaricus rhoadsii Agaricus rhopalopodius Agaricus riberaltensis Agaricus robustulus Agaricus robynsianus Agaricus rodmanii Agaricus rollanii Agaricus romagnesii (disputed) Agaricus rosalamellatus Agaricus roseocingulatus Agaricus rotalis (disputed) Agaricus rubellus Agaricus rubronanus Kerrigan (1985) (San Mateo county) Agaricus rubribrunnescens Agaricus rufoaurantiacus Agaricus rufolanosus Agaricus rufotegulis Agaricus rufuspileus Agaricus rusiophyllus Agaricus rutilescens Agaricus salicophilus Agaricus sandianus  Agaricus santacatalinensis Agaricus sceptonymus 
Agaricus scitulus Agaricus semotellus Agaricus semotus Agaricus sequoiae (Mendocino County, CA, under coast redwood) Agaricus shaferi Agaricus silvaticus – scaly/blushing wood mushroom, pinewood mushroom (= A.sylvaticus, includes A.haemorrhoidarius) Agaricus silvicola – wood mushroom (= A.sylvicola) Agaricus silvicolae-similis Agaricus silvipluvialis Agaricus simillimus Agaricus singaporensis Agaricus singeri Agaricus sinodeliciosus Agaricus sipapuensis Agaricus slovenicus Agaricus smithii Agaricus sodalis Agaricus solidipes Peck, Bull (1904) Agaricus sordido-ochraceus Agaricus sordidocarpus Agaricus spegazzinianus Agaricus stadii Agaricus stellatus-cuticus Agaricus sterilomarginatus Agaricus sterlingii Agaricus stevensii Agaricus stigmaticus Courtec. (2008) Agaricus stijvei Agaricus stramineus Agaricus subalachuanus Agaricus subantarcticus Agaricus subareolatus Agaricus subarvensis Agaricus subcoeruleus Agaricus subcomtulus Agaricus subedulis Agaricus subflabellatus Agaricus subfloccosus Agaricus subfloridanus Agaricus subgibbosus Agaricus subhortensis Agaricus subnitens Agaricus subochraceosquamulosus Agaricus suboreades Agaricus subperonatus (disputed) Agaricus subplacomyces-badius Agaricus subponderosus Agaricus subpratensis Agaricus subrufescens (includes A.rufotegulis, often confused with A.blazei and A.brasiliensis) – almond mushroom, royal sun agaricus, and various fanciful names Agaricus subrufescentoides Agaricus subrutilescens – wine-colored agaricus Agaricus subsaharianus L.A.Parra, Hama & De Kesel (2010) Agaricus subsilvicola Agaricus subsquamuliferus Agaricus subsubensis Kerrigan (2008) (California) Agaricus subtilipes Agaricus subvariabilis Agaricus sulcatellus Agaricus sulphureiceps Agaricus summensis Kerrigan (1985) Agaricus suthepensis Agaricus taculensis Agaricus taeniatimpictus Agaricus taeniatus Agaricus tantulus Agaricus tennesseensis Agaricus tenuivolvatus Agaricus tephrolepidus Agaricus termiticola Agaricus termitum Agaricus thiersii Agaricus thujae Agaricus tibetensis Agaricus tlaxcalensis Callac & G.Mata (2008) (Tlaxcala) Agaricus tollocanensis Agaricus toluenolens Agaricus trinitatensis Agaricus trisulphuratus (formerly in Cystoagaricus) Agaricus trutinatus Agaricus tucumanensis Agaricus tytthocarpus Agaricus umboninotus Agaricus unguentolens Agaricus unitinctus Agaricus urinascens Agaricus valdiviae Agaricus vaporarius Agaricus variabilicolor Agaricus variegans Agaricus variicystis Agaricus velenovskyi Agaricus veluticeps Agaricus venus Agaricus vinaceovirens (San Francisco Peninsula) Agaricus vinosobrunneofumidus Agaricus viridarius Agaricus viridopurpurascens Agaricus volvatulus Agaricus wariatodes Agaricus weberianus Agaricus wilmotii Agaricus woodrowii Agaricus wrightii Agaricus xanthodermoides Agaricus xanthodermulus Agaricus xanthodermus – yellow-staining mushroom Agaricus xantholepis Agaricus xanthosarcus Agaricus xeretes Agaricus xuchilensis Agaricus yunnanensis Agaricus zelleri Toxicity A notable group of poisonous Agaricus is the clade around the yellow-staining mushroom, A. xanthodermus. One species from Africa, A. aurantioviolaceus, is reported to be deadly poisonous. Far more dangerous is the fact that Agaricus, when still young and most valuable for eating, are easily confused with several deadly species of Amanita (in particular the species collectively called "destroying angels", as well as the white form of the appropriately-named "death cap" Amanita phalloides), as well as some other highly poisonous fungi.
An easy way to recognize Amanita is the gills, which remain whitish at all times in that genus. In Agaricus, by contrast, the gills are only initially white, turning dull pink as they mature, and eventually the typical chocolate-brown as the spores are released. Even so, Agaricus should generally be avoided by inexperienced collectors, since other harmful species are not as easily recognized, and clearly recognizable mature Agaricus are often too soft and maggot-infested for eating. When collecting Agaricus for food, it is important to identify every individual specimen with certainty, since a single specimen of the most poisonous Amanita species is sufficient to kill an adult human – even the shed spores of a discarded specimen are suspected to cause life-threatening poisoning. Confusing a poisonous Amanita with an edible Agaricus is the most frequent cause of fatal mushroom poisonings worldwide. Reacting to some distributors marketing dried Agaricus or Agaricus extract to cancer patients, the U.S. Food and Drug Administration has identified such products as a "fake cancer 'cure'". The species most often sold as such quack cures is A. subrufescens, which is often referred to by the erroneous name "Agaricus Blazei" and advertised under fanciful trade names such as "God's mushroom" or "mushroom of life"; it can cause allergic reactions and even liver damage if consumed in excessive amounts. Uses The genus contains the most widely consumed and best-known mushroom today, A. bisporus, with A. arvensis, A. campestris and A. subrufescens also being well known and highly regarded. A. porphyrocephalus is a choice edible when young, and many others are edible as well, notably members of sections Agaricus, Arvense, Duploannulatae and Sanguinolenti.
Biology and health sciences
Edible fungi
null
1634146
https://en.wikipedia.org/wiki/Proconsul%20%28mammal%29
Proconsul (mammal)
Proconsul is an extinct genus of primates that existed from 21 to 17 million years ago during the Miocene epoch. Fossil remains are present in Eastern Africa, including Kenya and Uganda. Four species have been classified to date: P. africanus, P. gitongai, P. major and P. meswae. The four species differ mainly in body size. Environmental reconstructions for the Early Miocene Proconsul sites are still tentative and range from forested environments to more open, arid grasslands. Gibbons and the great apes, including humans, are held in evolutionary biology to share a common ancestral lineage, which may have included Proconsul. Its name, meaning "before Consul" (Consul being a certain chimpanzee that, at the time of the genus's discovery, was on display in London), implies that it is ancestral to the chimpanzee. It might also be ancestral to the rest of the apes. Description The genus had a mixture of Old World monkey and ape characteristics, so its placement in the ape superfamily Hominoidea is tentative, with some scientists placing Proconsul outside it, before the split of the apes and Old World monkeys. Proconsul's monkey-like features include pronograde posture, indicated by a long flexible back, curved metacarpals, and an above-branch arboreal quadrupedal positional repertoire. The primary feature linking Proconsul with extant apes is its lack of a tail; other "ape-like" features include its enhanced grasping capabilities, stabilized elbow joint and facial structure. Proconsul could not hang effortlessly from tree branches as gibbons and other nonhuman apes do today. Discovery and classification The first specimen, a partial jaw discovered in 1909 by a gold prospector at Koru, near Kisumu in western Kenya, was also the oldest fossil hominoid known until recently, and the first fossil mammal ever found in sub-Saharan Africa. The name, Proconsul, was devised by Arthur Hopwood in 1933 and means "before Consul" – the name of a famous captive chimpanzee in London. At the time, Consul was being used as a circus name for performing chimpanzees. The Folies Bergère of 1903 in Paris had a popular performing chimpanzee named Consul, and so did the Belle Vue Zoological Gardens in Manchester, England, in 1894. On the latter's death in that year, Ben Brierley wrote a commemorative poem wondering where the "Missing Link" between chimpanzees and men was. Hopwood had discovered the fossils of three individuals in 1931 while on an expedition with Louis Leakey in the vicinity of Lake Victoria. The Consul that he selected to use in the name was neither of the ones mentioned above, but another located in the London Zoo. Consul was used Linnaean-style to symbolize the chimpanzee; Proconsul is therefore "ancestral to the Chimpanzee", in Hopwood's words. He also added africanus as the specific name. Other fossils discovered later were initially classified as africanus and subsequently reclassified; that is, the total pool of fossils originally considered africanus was split and the fragments lumped with other finds to create a new species. For example, Mary Leakey's famous find of 1948 began as africanus and was split from it to be lumped with Thomas Whitworth's finds of 1951 as heseloni by Alan Walker in 1993. This process creates some confusion for the public, which is told that africanus became heseloni. The finds from Koru and Songhor are still considered africanus. Four species are still defined even though many fossils have been moved between species.
The family Proconsulidae was first proposed by Louis Leakey in 1963, a decade after he and Wilfrid Le Gros Clark had defined africanus, nyanzae and major. It was not immediately accepted but ultimately prevailed. The history of hominoid classification in the second half of the 20th century is sufficiently complex to warrant a few books of its own. Most palaeoanthropologists have changed their minds at least once as new fossils have come to light and new observations have been made, and will probably continue to do so. The classifications found in the literature of one decade are not generally the same as those of another. For example, in 1987 Peter Andrews and Lawrence Martin, established palaeontologists, took the point of view that Proconsul is not a hominoid, but is a sister taxon to it. Reassigned species The species Proconsul heseloni and P. nyanzae have been reclassified in the new genus Ekembo.
Biology and health sciences
Apes
Animals
1634352
https://en.wikipedia.org/wiki/Atomic%20battery
Atomic battery
An atomic battery, nuclear battery, radioisotope battery or radioisotope generator uses energy from the decay of a radioactive isotope to generate electricity. Like a nuclear reactor, it generates electricity from nuclear energy, but it differs by not using a chain reaction. Although commonly called batteries, atomic batteries are technically not electrochemical and cannot be charged or recharged. Although they are very costly, they have extremely long lives and high energy density, so they are typically used as power sources for equipment that must operate unattended for long periods, such as spacecraft, pacemakers, underwater systems, and automated scientific stations in remote parts of the world. Nuclear battery technology began in 1913, when Henry Moseley first demonstrated a current generated by charged-particle radiation. In the 1950s and 1960s, this field of research received much attention for applications requiring long-life power sources for spacecraft. In 1954, RCA researched a small atomic battery for small radio receivers and hearing aids. Since RCA's initial research and development in the early 1950s, many types and methods have been designed to extract electrical energy from nuclear sources. The scientific principles are well known, but modern nano-scale technology and new wide-bandgap semiconductors have enabled new devices and interesting material properties not previously available. Nuclear batteries can be classified by their means of energy conversion into two main groups: thermal converters and non-thermal converters. The thermal types convert some of the heat generated by the nuclear decay into electricity; an example is the radioisotope thermoelectric generator (RTG), often used in spacecraft. The non-thermal converters, such as betavoltaic cells, extract energy directly from the emitted radiation, before it is degraded into heat; they are easier to miniaturize and do not need a thermal gradient to operate, so they can be used in small machines. Atomic batteries usually have an efficiency of 0.1–5%; high-efficiency betavoltaic devices can reach 6–8% efficiency. Thermal conversion Thermionic conversion A thermionic converter consists of a hot electrode, which thermionically emits electrons over a space-charge barrier to a cooler electrode, producing a useful power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization) to neutralize the electron space charge. Thermoelectric conversion A radioisotope thermoelectric generator (RTG) uses thermocouples. Each thermocouple is formed from two wires of different metals (or other materials). A temperature gradient along the length of each wire produces a voltage gradient from one end of the wire to the other, but different materials produce different voltages per degree of temperature difference. By connecting the wires at one end, heating that end and cooling the other end, a usable, but small (millivolts), voltage is generated between the unconnected wire ends. In practice, many are connected in series (or in parallel) to generate a larger voltage (or current) from the same heat source, as heat flows from the hot ends to the cold ends. Metal thermocouples have low thermal-to-electrical efficiency. However, the carrier density and charge can be adjusted in semiconductor materials such as bismuth telluride and silicon germanium to achieve much higher conversion efficiencies.
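To make the thermocouple arithmetic concrete, here is a minimal Python sketch of an RTG-style stack. Every number in it (Seebeck coefficient, temperature difference, couple count, decay heat, efficiency) is an illustrative assumption in the ranges discussed above, not data for any real generator.

```python
# Back-of-envelope model of a radioisotope thermoelectric generator (RTG).
# All values below are illustrative assumptions, not data for a real device.

def seebeck_voltage(seebeck_uV_per_K: float, dT: float, n_couples: int) -> float:
    """Open-circuit voltage of n thermocouples in series (volts)."""
    return n_couples * seebeck_uV_per_K * 1e-6 * dT

def electrical_power(heat_in_W: float, efficiency: float) -> float:
    """Electrical output given decay heat and conversion efficiency."""
    return heat_in_W * efficiency

# Assumed bismuth-telluride-like Seebeck coefficient and a modest gradient:
v = seebeck_voltage(seebeck_uV_per_K=200.0, dT=300.0, n_couples=100)  # 6.0 V
p = electrical_power(heat_in_W=2000.0, efficiency=0.05)               # 100 W
print(f"open-circuit voltage = {v:.1f} V, electrical output = {p:.0f} W")
```

The series connection is why a usable voltage emerges from millivolt-scale individual couples, and the 5% efficiency figure sits at the top of the 0.1–5% range quoted above.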
Thermophotovoltaic conversion Thermophotovoltaic (TPV) cells work by the same principles as a photovoltaic cell, except that they convert infrared light (rather than visible light) emitted by a hot surface into electricity. Thermophotovoltaic cells have an efficiency slightly higher than thermoelectric couples and can be overlaid on thermoelectric couples, potentially doubling efficiency. The University of Houston TPV Radioisotope Power Conversion Technology development effort aims to combine thermophotovoltaic cells with thermocouples to provide a 3- to 4-fold improvement in system efficiency over current thermoelectric radioisotope generators. Stirling generators A Stirling radioisotope generator is a Stirling engine driven by the temperature difference produced by a radioisotope. A more efficient version, the advanced Stirling radioisotope generator, was under development by NASA, but was cancelled in 2013 due to large-scale cost overruns. Non-thermal conversion Non-thermal converters extract energy from emitted radiation before it is degraded into heat. Unlike thermoelectric and thermionic converters, their output does not depend on a temperature difference. Non-thermal generators can be classified by the type of particle used and by the mechanism by which their energy is converted. Electrostatic conversion Energy can be extracted from emitted charged particles when their charge builds up in a conductor, thus creating an electrostatic potential. Without a dissipation mode, the voltage can increase up to the energy of the radiated particles, which may range from several kilovolts (for beta radiation) up to megavolts (alpha radiation). The built-up electrostatic energy can be turned into usable electricity in one of the following ways. Direct-charging generator A direct-charging generator consists of a capacitor charged by the current of charged particles from a radioactive layer deposited on one of the electrodes. Spacing can be either vacuum or dielectric. Negatively charged beta particles or positively charged alpha particles, positrons or fission fragments may be utilized. Although this form of nuclear-electric generator dates back to 1913, few applications have been found in the past for the extremely low currents and inconveniently high voltages provided by direct-charging generators. Oscillator/transformer systems are employed to reduce the voltages, then rectifiers are used to transform the AC power back to direct current. English physicist H. G. J. Moseley constructed the first of these. Moseley's apparatus consisted of a glass globe silvered on the inside with a radium emitter mounted on the tip of a wire at the center. The charged particles from the radium created a flow of electricity as they moved quickly from the radium to the inside surface of the sphere. As late as 1945 the Moseley model guided other efforts to build experimental batteries generating electricity from the emissions of radioactive elements. Electromechanical conversion Electromechanical atomic batteries use the buildup of charge between two plates to pull one bendable plate towards the other until the two plates touch and discharge, equalizing the electrostatic buildup, after which the plates spring back. The mechanical motion produced can be used to produce electricity through flexing of a piezoelectric material or through a linear generator. Milliwatts of power are produced in pulses depending on the charge rate, in some cases multiple times per second (35 Hz).
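The direct-charging scheme lends itself to a back-of-envelope estimate: each beta decay delivers roughly one elementary charge to the collector, so an ideal capacitor charges linearly until leakage or breakdown intervenes. The toy sketch below uses an invented source activity and capacitance; it only illustrates why the currents are tiny and the voltages high.

```python
# Toy model of a direct-charging generator: a capacitor charged by beta
# particles from a radioactive layer. Activity and capacitance are invented.

E_CHARGE = 1.602e-19  # coulombs per elementary charge

def charging_current(activity_bq: float, charges_per_decay: float = 1.0) -> float:
    """Current delivered to the collector electrode (amperes)."""
    return activity_bq * charges_per_decay * E_CHARGE

def voltage_after(t_s: float, activity_bq: float, capacitance_f: float) -> float:
    """Ideal (loss-free) capacitor voltage after t seconds: V = I*t/C."""
    return charging_current(activity_bq) * t_s / capacitance_f

# A 1 GBq beta source charging a 10 pF vacuum-gap capacitor:
print(charging_current(1e9))                                   # ~1.6e-10 A
print(voltage_after(t_s=60.0, activity_bq=1e9, capacitance_f=10e-12))  # ~960 V
```

Sub-nanoampere currents reaching hundreds of volts within a minute match the "extremely low currents and inconveniently high voltages" noted above.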
Radiovoltaic conversion A radiovoltaic (RV) device converts the energy of ionizing radiation directly into electricity using a semiconductor junction, similar to the conversion of photons into electricity in a photovoltaic cell. Depending on the type of radiation targeted, these devices are called alphavoltaic (AV, αV), betavoltaic (BV, βV) and/or gammavoltaic (GV, γV). Betavoltaics have traditionally received the most attention since (low-energy) beta emitters cause the least amount of radiative damage, thus allowing a longer operating life and less shielding. Interest in alphavoltaic and (more recently) gammavoltaic devices is driven by their potentially higher efficiency. Alphavoltaic conversion Alphavoltaic devices use a semiconductor junction to produce electrical energy from energetic alpha particles. Betavoltaic conversion Betavoltaic devices use a semiconductor junction to produce electrical energy from energetic beta particles (electrons). A commonly used source is the hydrogen isotope tritium, which is employed in City Labs' NanoTritium batteries. Betavoltaic devices are particularly well suited to low-power electrical applications where a long life of the energy source is needed, such as implantable medical devices or military and space applications. The Chinese startup Betavolt claimed in January 2024 to have a miniature device in the pilot testing stage. It allegedly generates 100 microwatts of power at a voltage of 3 V and has a lifetime of 50 years without any need for charging or maintenance; Betavolt claims it to be the first such miniaturised device ever developed. It gains its energy from the isotope nickel-63, held in a module the size of a very small coin. As the nickel-63 is consumed, it decays into stable, non-radioactive isotopes of copper, which pose no environmental threat. The device contains a thin wafer of nickel-63, providing beta-particle electrons, sandwiched between two thin crystalline diamond semiconductor layers. Gammavoltaic conversion Gammavoltaic devices use a semiconductor junction to produce electrical energy from energetic gamma rays (high-energy photons). They received serious consideration only in the 2010s, though they were proposed as early as 1981. A gammavoltaic effect has been reported in perovskite solar cells. Another patented design involves scattering the gamma ray until its energy has decreased enough to be absorbed in a conventional photovoltaic cell. Gammavoltaic designs using diamond and Schottky diodes are also being investigated. Radiophotovoltaic (optoelectric) conversion In a radiophotovoltaic (RPV) device the energy conversion is indirect: the emitted particles are first converted into light using a radioluminescent material (a scintillator or phosphor), and the light is then converted into electricity using a photovoltaic cell. Depending on the type of particle targeted, the conversion type can be more precisely specified as alphaphotovoltaic (APV or α-PV), betaphotovoltaic (BPV or β-PV) or gammaphotovoltaic (GPV or γ-PV). Radiophotovoltaic conversion can be combined with radiovoltaic conversion to increase the conversion efficiency. Pacemakers Medtronic and Alcatel developed a plutonium-powered pacemaker, the Numec NU-5, powered by a 2.5 Ci slug of plutonium-238, first implanted in a human patient in 1970. The 139 Numec NU-5 nuclear pacemakers implanted in the 1970s are expected never to need replacing, an advantage over non-nuclear pacemakers, which require surgical replacement of their batteries every 5 to 10 years.
The plutonium "batteries" are expected to produce enough power to drive the circuit for longer than the 88-year half-life of the plutonium-238. The last of these units was implanted in 1988, as lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, made these units obsolete. Betavoltaic batteries are also being considered as long-lasting power sources for lead-free pacemakers. Radioisotopes used Atomic batteries use radioisotopes that produce low-energy beta particles or sometimes alpha particles of varying energies. Low-energy beta particles are needed to prevent the production of high-energy penetrating bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as tritium, nickel-63, promethium-147, and technetium-99 have been tested. Plutonium-238, curium-242, curium-244 and strontium-90 have been used. Besides the nuclear properties of the isotope used, there are also the issues of chemical properties and availability. A product deliberately produced via neutron irradiation or in a particle accelerator is more difficult to obtain than a fission product easily extracted from spent nuclear fuel. Plutonium-238 must be deliberately produced via neutron irradiation of neptunium-237, but it can be easily converted into a stable plutonium oxide ceramic. Strontium-90 is easily extracted from spent nuclear fuel but must be converted into the perovskite form strontium titanate to reduce its chemical mobility, cutting power density in half. Caesium-137, another high-yield nuclear fission product, is rarely used in atomic batteries because it is difficult to convert into chemically inert substances. Another undesirable property of Cs-137 extracted from spent nuclear fuel is that it is contaminated with other isotopes of caesium, which reduce power density further. Micro-batteries In the field of microelectromechanical systems (MEMS), nuclear engineers at the University of Wisconsin–Madison have explored the possibility of producing minuscule batteries which exploit radioactive nuclei of substances such as polonium or curium to produce electric energy. As an example of an integrated, self-powered application, the researchers have created an oscillating cantilever beam that is capable of consistent, periodic oscillations over very long time periods without the need for refueling. Ongoing work demonstrates that this cantilever is capable of radio-frequency transmission, allowing MEMS devices to communicate with one another wirelessly. These micro-batteries are very light and deliver enough energy to serve as a power supply for MEMS devices and, further, for nanodevices. The released radiation energy is transformed into electric energy, which is restricted to the area of the device that contains the processor and the micro-battery that supplies it with energy.
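Whatever the conversion mechanism, the available power of any radioisotope battery fades with the fuel's half-life, roughly as P(t) = P0 · 2^(−t/T½). A small sketch using approximate published half-lives (rounded) for isotopes mentioned above:

```python
import math

# Source power fades with the isotope half-life: P(t) = P0 * 2**(-t / t_half).
# Half-lives in years, approximate published values, rounded.
HALF_LIFE_Y = {"Pu-238": 87.7, "Sr-90": 28.8, "Ni-63": 100.0, "H-3": 12.3}

def remaining_fraction(isotope: str, years: float) -> float:
    """Fraction of the initial decay power left after the given time."""
    return 2.0 ** (-years / HALF_LIFE_Y[isotope])

for iso in HALF_LIFE_Y:
    print(f"{iso}: {remaining_fraction(iso, 20):.0%} of initial power after 20 years")
```

After 20 years a tritium cell retains only about a third of its initial power, while plutonium-238 and nickel-63 retain roughly 85%, which is why the long-lived isotopes dominate pacemaker and spacecraft applications.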
Technology
Energy storage
null
1634411
https://en.wikipedia.org/wiki/Lua
Lua
Lua or LUA may refer to: Science and technology Lua (programming language) Latvia University of Agriculture Last universal common ancestor, in evolution, sometimes abbreviated as LUA Ethnicity and language Lua people, of Laos Lawa people, of Thailand, sometimes referred to as Lua Lua language (disambiguation), several languages (including Lua’) Luba-Kasai language, ISO 639 code Lai (surname) (賴), Chinese, sometimes romanised as Lua Places Tenzing-Hillary Airport (IATA code), in Lukla, Nepal One of the Duff Islands People Lua (goddess), a Roman goddess Saint Lua (died c. 609) Lua Blanco (born 1987), Brazilian actress and singer Lua Getsinger (1871–1916) A member of the band Weki Meki Other uses Lua (martial art), of Hawaii "Lua" (song), by Bright Eyes
Technology
Programming languages
null
1634583
https://en.wikipedia.org/wiki/Numerical%20methods%20for%20partial%20differential%20equations
Numerical methods for partial differential equations
Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs). In principle, specialized methods exist for hyperbolic, parabolic and elliptic partial differential equations. Overview of methods Finite difference method In this method, functions are represented by their values at certain grid points and derivatives are approximated through differences in these values. Method of lines The method of lines (MOL, NMOL, NUMOL) is a technique for solving partial differential equations (PDEs) in which all dimensions except one are discretized. MOL allows standard, general-purpose methods and software, developed for the numerical integration of ordinary differential equations (ODEs) and differential algebraic equations (DAEs), to be used. A large number of integration routines have been developed over the years in many different programming languages, and some have been published as open-source resources. The method of lines most often refers to the construction or analysis of numerical methods for partial differential equations that proceeds by first discretizing the spatial derivatives only and leaving the time variable continuous. This leads to a system of ordinary differential equations to which a numerical method for initial-value ordinary differential equations can be applied. The method of lines in this context dates back to at least the early 1960s. Finite element method The finite element method (FEM) is a numerical technique for finding approximate solutions to boundary value problems for differential equations. It uses variational methods (the calculus of variations) to minimize an error function and produce a stable solution. Analogous to the idea that connecting many tiny straight lines can approximate a larger circle, FEM encompasses all the methods for connecting many simple element equations over many small subdomains, named finite elements, to approximate a more complex equation over a larger domain. Gradient discretization method The gradient discretization method (GDM) is a numerical technique that encompasses a few standard or recent methods. It is based on the separate approximation of a function and of its gradient. Core properties allow the convergence of the method for a series of linear and nonlinear problems, and therefore all the methods that enter the GDM framework (conforming and nonconforming finite element, mixed finite element, mimetic finite difference...) inherit these convergence properties. Finite volume method The finite-volume method is a numerical technique for representing and evaluating partial differential equations in the form of algebraic equations [LeVeque, 2002; Toro, 1999]. Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages.
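As a concrete instance of the finite difference idea above (and of the method of lines, with explicit Euler as the time integrator), here is a minimal Python sketch for the 1D heat equation; the grid, time step, and initial condition are arbitrary demonstration choices.

```python
import numpy as np

# Explicit finite differences for the 1D heat equation
#   u_t = alpha * u_xx  on [0, 1]  with u = 0 at both ends.
# Derivatives are replaced by differences of grid values.

alpha, nx, nt = 1.0, 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha        # respects the stability limit dt <= dx^2/(2*alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)           # initial condition

for _ in range(nt):
    # central difference in space, forward Euler in time
    u[1:-1] += dt * alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2

# The exact solution decays as exp(-pi^2 * alpha * t); compare at t = nt*dt.
t = nt * dt
print(abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * t)).max())
```

The explicit scheme is the simplest choice but forces the small time step noted in the comment; implicit schemes trade a linear solve per step for unconditional stability.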
Spectral method Spectral methods are techniques used in applied mathematics and scientific computing to numerically solve certain differential equations, often involving the use of the fast Fourier transform. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (for example, as a Fourier series, which is a sum of sinusoids) and then to choose the coefficients in the sum that best satisfy the differential equation. Spectral methods and finite element methods are closely related and built on the same ideas; the main difference between them is that spectral methods use basis functions that are nonzero over the whole domain, while finite element methods use basis functions that are nonzero only on small subdomains. In other words, spectral methods take a global approach while finite element methods use a local approach. Partially for this reason, spectral methods have excellent error properties, with the so-called "exponential convergence" being the fastest possible, when the solution is smooth. However, there are no known three-dimensional single-domain spectral shock-capturing results. In the finite element community, a method where the degree of the elements is very high or increases as the grid parameter h decreases to zero is sometimes called a spectral element method. Meshfree methods Meshfree methods do not require a mesh connecting the data points of the simulation domain. Meshfree methods enable the simulation of some otherwise difficult types of problems, at the cost of extra computing time and programming effort. Domain decomposition methods Domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov space iterative methods, such as the conjugate gradient method or GMRES. In overlapping domain decomposition methods, the subdomains overlap by more than the interface. Overlapping domain decomposition methods include the Schwarz alternating method and the additive Schwarz method. Many domain decomposition methods can be written and analyzed as a special case of the abstract additive Schwarz method. In non-overlapping methods, the subdomains intersect only on their interface. In primal methods, such as Balancing domain decomposition and BDDC, the continuity of the solution across the subdomain interface is enforced by representing the value of the solution on all neighboring subdomains by the same unknown. In dual methods, such as FETI, the continuity of the solution across the subdomain interface is enforced by Lagrange multipliers. The FETI-DP method is a hybrid between a dual and a primal method. Non-overlapping domain decomposition methods are also called iterative substructuring methods. Mortar methods are discretization methods for partial differential equations, which use separate discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution.
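To illustrate the overlapping (Schwarz alternating) idea described above, here is a toy Python sketch with invented grid sizes and overlap: it alternately solves -u'' = f on two overlapping pieces of an interval, each time using the latest values from the other piece as boundary data.

```python
import numpy as np

# Toy Schwarz alternating method for -u'' = f on (0, 1), u(0) = u(1) = 0,
# with two overlapping subdomains. Grid size and overlap are arbitrary.

n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)                      # exact solution: u = x*(1 - x)/2
u = np.zeros(n)                     # initial guess

def solve_subdomain(u, lo, hi):
    """Dirichlet solve of -u'' = f on grid points lo..hi (endpoints fixed)."""
    m = hi - lo - 1                 # number of interior unknowns
    A = (np.diag(np.full(m, 2.0))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2            # boundary data from the current iterate
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

for _ in range(30):                 # alternate between the two subdomains
    solve_subdomain(u, 0, 60)       # left subdomain: points 0..60
    solve_subdomain(u, 40, 100)     # right subdomain: points 40..100

print(abs(u - x * (1.0 - x) / 2.0).max())   # approaches the global solution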
In engineering practice of the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints. Finite element simulations of moderate-size models require solving linear systems with millions of unknowns. Several hours per time step is an average sequential run time; therefore, parallel computing is a necessity. Domain decomposition methods embody large potential for parallelization of the finite element method, and serve as a basis for distributed, parallel computations. Multigrid methods Multigrid (MG) methods in numerical analysis are a group of algorithms for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in (but not limited to) problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners. The main idea of multigrid is to accelerate the convergence of a basic iterative method by global correction from time to time, accomplished by solving a coarse problem. This principle is similar to interpolation between coarser and finer grids. The typical application for multigrid is in the numerical solution of elliptic partial differential equations in two or more dimensions. Multigrid methods can be applied in combination with any of the common discretization techniques. For example, the finite element method may be recast as a multigrid method. In these cases, multigrid methods are among the fastest solution techniques known today. In contrast to other methods, multigrid methods are general in that they can treat arbitrary regions and boundary conditions. They do not depend on the separability of the equations or other special properties of the equation. They have also been widely used for more complicated non-symmetric and nonlinear systems of equations, like the Lamé system of elasticity or the Navier–Stokes equations. Comparison The finite difference method is often regarded as the simplest method to learn and use. The finite element and finite volume methods are widely used in engineering and in computational fluid dynamics, and are well suited to problems in complicated geometries. Spectral methods are generally the most accurate, provided that the solutions are sufficiently smooth.
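To complement the comparison above, here is a minimal spectral-method sketch for a periodic 1D Poisson problem; the grid size and right-hand side are arbitrary. Differentiation becomes multiplication by ik in Fourier space, so the solve reduces to dividing coefficients by -k², using basis functions that are global over the whole domain.

```python
import numpy as np

# Spectral solve of the periodic Poisson problem u'' = f on [0, 2*pi):
# divide the Fourier coefficients of f by -k^2.

n = 64
x = 2.0 * np.pi * np.arange(n) / n
f = -np.sin(x)                          # so the exact solution is u = sin(x)

k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers 0..n/2-1, -n/2..-1
fhat = np.fft.fft(f)
uhat = np.zeros_like(fhat)
nonzero = k != 0                        # the k = 0 mode is fixed by the mean
uhat[nonzero] = fhat[nonzero] / -(k[nonzero] ** 2)
u = np.fft.ifft(uhat).real

print(abs(u - np.sin(x)).max())         # ~1e-15: machine-precision accuracy
```

For this smooth solution the error is at round-off level even on a coarse grid, the practical face of the "exponential convergence" mentioned earlier; a finite difference method on the same grid would be limited by its algebraic truncation error.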
Mathematics
Differential equations
null
631310
https://en.wikipedia.org/wiki/Mark%20%28unit%29
Mark (unit)
The Mark (from Middle High German: Marc, march, brand) was originally a medieval weight or mass unit, which supplanted the pound weight as a precious-metals and coinage weight in parts of Europe in the 11th century. The Mark is traditionally divided into 8 ounces or 16 lots. The Cologne mark corresponded to about 234 grams. Like the German systems, the French poids de marc weight system considered one "marc" equal to 8 troy ounces. Just as the pound of 12 troy ounces (373 g) lent its name to the pound unit of currency, the mark lent its name to the mark unit of currency. Origin of the term The Etymological Dictionary of the German Language by Friedrich Kluge derives the word from the Proto-Germanic term marka, "weight and value unit" (originally "division, shared") (Kluge, Friedrich (2012). Etymological Dictionary of the German Language. 25th edition, edited by Elmar Seebold, Berlin/Boston, ISBN 978-3-11-022364-4, p. 602). The etymological dictionary by Wolfgang Pfeifer sees the Old High German marc, "delimitation, sign", as the stem and assumes that marc originally meant "minting" (marking of a certain weight), later denoting the ingot itself and its weight, and finally a coin of a certain weight and value. According to an 1848 trade lexicon, the term Gewichtsmark comes from the fact that "the piece of metal used for weighing was stamped with a sign or symbol". Meyer's 1905 Konversationslexikon similarly traces the origin of the word to the emergence of the mark from the Roman pound. Charlemagne, as King of the Franks, carried out a reform of coinage and measures towards the end of the 8th century. In particular, he introduced the Karlspfund ("Charles pound") as the basic unit of coinage and trade which, however, weighed only 8 ounces. In order to prevent a further reduction in the weight of a pound, a sign, the mark, was now stamped on the new weights. The actual weight of these weights, known as marca, is said to have fluctuated between 196 g and 280 g.
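The subdivisions above translate into simple arithmetic; a tiny sketch using the approximate 234 g Cologne figure quoted in the text (actual historical values varied):

```python
# Subdivisions of the Cologne mark, using the approximate value given above.
COLOGNE_MARK_G = 234.0      # grams, approximate; historical values fluctuated
OUNCES_PER_MARK = 8
LOTS_PER_MARK = 16

print(f"1 ounce = {COLOGNE_MARK_G / OUNCES_PER_MARK:.2f} g")   # 29.25 g
print(f"1 lot   = {COLOGNE_MARK_G / LOTS_PER_MARK:.3f} g")     # 14.625 g
```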
Physical sciences
Mass and weight
Basics and measurement
631494
https://en.wikipedia.org/wiki/Moment%20magnitude%20scale
Moment magnitude scale
The moment magnitude scale (MMS; denoted explicitly with Mw or Mwg, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. Mw was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Similar to the local magnitude/Richter scale (ML) defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often use the term "Richter scale" when referring to the moment magnitude scale. Moment magnitude (Mw) is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate; that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the United States Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude (ML) and surface-wave magnitude (Ms) scales. Subtypes of the moment magnitude scale (such as Mww) reflect different ways of estimating the seismic moment. History Richter scale: the original measure of earthquake magnitude At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the Earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931, when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnitude" that was internally consistent and corresponded roughly with estimates of an earthquake's energy. He established a reference point and the ten-fold (exponential) scaling of each degree of magnitude, and in 1935 published what he called the "magnitude scale", now called the local magnitude scale, labeled ML. (This scale is also known as the Richter scale, but news media sometimes use that term indiscriminately to refer to other similar scales.) The local magnitude scale was developed on the basis of shallow (roughly 15 km deep), moderate-sized earthquakes recorded at distances of up to several hundred kilometers, conditions where the surface waves are predominant. At greater depths, distances, or magnitudes the surface waves are greatly reduced, and the local magnitude scale underestimates the magnitude, a problem called saturation. Additional scales were developed – a surface-wave magnitude scale (Ms) by Beno Gutenberg in 1945, a body-wave magnitude scale (mb) by Gutenberg and Richter in 1956, and a number of variants – to overcome the deficiencies of the ML scale, but all are subject to saturation. A particular problem was that the Ms scale (which in the 1970s was the preferred magnitude scale) saturates around Ms 8.0 and therefore underestimates the energy release of "great" earthquakes such as the 1960 Chilean and 1964 Alaskan earthquakes. These had Ms magnitudes of 8.5 and 8.4 respectively but were notably more powerful than other M 8 earthquakes; their moment magnitudes were closer to 9.6 and 9.3, respectively.
Single couple or double couple The study of earthquakes is challenging, as the source events cannot be observed directly, and it took many years to develop the mathematics for understanding what the seismic waves from an earthquake can tell about the source event. An early step was to determine how different systems of forces might generate seismic waves equivalent to those observed from earthquakes. The simplest force system is a single force acting on an object. If it has sufficient strength to overcome any resistance, it will cause the object to move ("translate"). A pair of forces acting on the same "line of action" but in opposite directions will cancel; if they balance exactly there will be no net translation, though the object will experience stress, either tension or compression. If the pair of forces are offset, acting along parallel but separate lines of action, the object experiences a rotational force, or torque. In mechanics (the branch of physics concerned with the interactions of forces) this model is called a couple, also simple couple or single couple. If a second couple of equal and opposite magnitude is applied, their torques cancel; this is called a double couple. A double couple can be viewed as "equivalent to a pressure and tension acting simultaneously at right angles". In 1923 Hiroshi Nakano showed that certain aspects of seismic waves could be explained in terms of a double couple model. This led to a three-decade-long controversy over the best way to model the seismic source: as a single couple, or a double couple. While Japanese seismologists favored the double couple, most other seismologists favored the single couple. Although the single couple model had some shortcomings, it seemed more intuitive, and there was a belief – mistaken, as it turned out – that the elastic rebound theory for explaining why earthquakes happen required a single couple model. In principle these models could be distinguished by differences in the radiation patterns of their S waves, but the quality of the observational data was inadequate for that. The debate was eventually settled when it was shown that the seismic radiation of earthquake ruptures modeled as dislocations can always be matched by an equivalent double couple, but not by a single couple. This was confirmed as better and more plentiful data coming from the World-Wide Standard Seismograph Network (WWSSN) permitted closer analysis of seismic waves. Notably, in 1966 Keiiti Aki showed that the seismic moment of the 1964 Niigata earthquake as calculated from the seismic waves on the basis of a double couple was in reasonable agreement with the seismic moment calculated from the observed physical dislocation. Dislocation theory A double couple model suffices to explain an earthquake's far-field pattern of seismic radiation, but tells us very little about the nature of an earthquake's source mechanism or its physical features. While slippage along a fault was theorized as the cause of earthquakes (other theories included movement of magma, or sudden changes of volume due to phase changes), observing this at depth was not possible; understanding what the seismic waves could reveal about the source required modeling the physical process by which an earthquake generates them. This demanded much theoretical development of dislocation theory, first formulated by the Italian Vito Volterra in 1907, with further developments by E. H. Love in 1927. More generally applied to problems of stress in materials, an extension by F. Nabarro in 1951 was recognized by the Russian geophysicist A. V. Vvedenskaya as applicable to earthquake faulting.
In a series of papers starting in 1956, she and other colleagues used dislocation theory to determine part of an earthquake's focal mechanism, and to show that a dislocation – a rupture accompanied by slipping – was indeed equivalent to a double couple. In a pair of papers in 1958, J. A. Steketee worked out how to relate dislocation theory to geophysical features. Numerous other researchers worked out other details, culminating in a general solution in 1964 by Burridge and Knopoff, which established the relationship between double couples and the theory of elastic rebound, and provided the basis for relating an earthquake's physical features to seismic moment. Seismic moment Seismic moment – symbol M0 – is a measure of the fault slip and area involved in the earthquake. Its value is the torque of each of the two force couples that form the earthquake's equivalent double-couple. (More precisely, it is the scalar magnitude of the second-order moment tensor that describes the force components of the double-couple.) Seismic moment is measured in units of newton meters (N·m) or joules, or (in the older CGS system) dyne-centimeters (dyn·cm). The first calculation of an earthquake's seismic moment from its seismic waves was by Keiiti Aki for the 1964 Niigata earthquake. He did this in two ways. First, he used data from distant stations of the WWSSN to analyze long-period (200 second) seismic waves (wavelength of about 1,000 kilometers) to determine the magnitude of the earthquake's equivalent double couple. Second, he drew upon the work of Burridge and Knopoff on dislocation to determine the amount of slip, the energy released, and the stress drop (essentially how much of the potential energy was released). In particular, he derived an equation that relates an earthquake's seismic moment to its physical parameters: M0 = μAd, with μ being the rigidity (or resistance to moving) of a fault with a surface area A over an average dislocation (distance) d. (Modern formulations sometimes work in terms of the product Ad alone, known as the "geometric moment" or "potency".) By this equation the moment determined from the double couple of the seismic waves can be related to the moment calculated from knowledge of the surface area of fault slippage and the amount of slip. In the case of the Niigata earthquake the dislocation estimated from the seismic moment reasonably approximated the observed dislocation. Seismic moment is a measure of the work (more precisely, the torque) that results in inelastic (permanent) displacement or distortion of the Earth's crust. It is related to the total energy released by an earthquake. However, the power or potential destructiveness of an earthquake depends (among other factors) on how much of the total energy is converted into seismic waves. This is typically 10% or less of the total energy, the rest being expended in fracturing rock or overcoming friction (generating heat). Nonetheless, seismic moment is regarded as the fundamental measure of earthquake size, representing more directly than other parameters the physical size of an earthquake. As early as 1975 it was considered "one of the most reliably determined instrumental earthquake source parameters". Introduction of an energy-motivated magnitude Mw Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake.
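Before turning to the energy-based scale, a quick numerical illustration of Aki's relation above: a minimal Python sketch evaluating M0 = μAd. The rigidity, rupture area, and slip below are invented textbook-scale assumptions, not measurements of any particular event.

```python
# Sketch of Aki's relation M0 = mu * A * d for hypothetical fault parameters.
# All values are illustrative assumptions, not data for a real earthquake.

mu = 3.0e10          # rigidity of crustal rock, Pa (~30 GPa, a common assumption)
area = 40e3 * 15e3   # rupture area: a 40 km x 15 km fault patch, in m^2
slip = 1.5           # average dislocation, m

M0 = mu * area * slip
print(f"M0 = {M0:.2e} N·m")   # ~2.7e19 N·m
```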
Introduction of an energy-motivated magnitude Mw Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy Es could be estimated as Log Es ≈ 4.8 + 1.5 Ms (in joules). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of Ms. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were only assigned a saturated Ms value. Caltech seismologist Hiroo Kanamori recognized this deficiency and took the simple but important step of defining a magnitude based on estimates of radiated energy, Mw, where the "w" stood for work (energy): Mw = (2/3) Log Es − 3.2. Kanamori recognized that measurement of radiated energy is technically difficult since it involves the integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, M0. Using an approximate relation between radiated energy and seismic moment, Es ≈ M0/(2 × 10^4) (where Es is in joules and M0 is in N·m), which assumes stress drop is complete and ignores fracture energy, Kanamori approximated Mw by Mw = (Log M0 − 9.1)/1.5. Moment magnitude scale The formula above made it much easier to estimate the energy-based magnitude Mw, but it changed the fundamental nature of the scale into a moment magnitude scale. USGS seismologist Thomas C. Hanks noted that Kanamori's Mw scale was very similar to a relationship between ML and M0 reported by Thatcher and Hanks (1973), and the two combined their work to define a new magnitude scale based on estimates of seismic moment, Mw = (2/3) Log10 M0 − 6.07, where M0 is defined in newton meters (N·m). Current use Moment magnitude is now the most common measure of earthquake size for medium to large earthquake magnitudes, but in practice, seismic moment (M0), the seismological parameter it is based on, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which includes the great majority of quakes. Popular press reports most often deal with significant earthquakes larger than about magnitude 4. For these events, the preferred magnitude is the moment magnitude Mw, not Richter's local magnitude ML. Definition The symbol for the moment magnitude scale is Mw, with the subscript "w" meaning mechanical work accomplished. The moment magnitude Mw is a dimensionless value defined by Hiroo Kanamori as Mw = (2/3) Log10 M0 − 10.7, where M0 is the seismic moment in dyne⋅cm (10^−7 N⋅m). The constant values in the equation are chosen to achieve consistency with the magnitude values produced by earlier scales, such as the local magnitude and the surface wave magnitude. Thus, a magnitude zero microearthquake has a seismic moment of approximately 1.1 × 10^9 N⋅m, while the Great Chilean earthquake of 1960, with an estimated moment magnitude of 9.4–9.6, had a seismic moment between about 1.4 × 10^23 and 2.8 × 10^23 N⋅m. Seismic moment magnitude (Mwg, or the Das Magnitude Scale) and moment magnitude (Mw) scales To understand the magnitude scales based on M0, a detailed background of the Mwg and Mw scales is given below. Mw scale Hiroo Kanamori defined a magnitude scale (Log W0 = 1.5 Mw + 11.8, where W0 is the minimum strain energy) for great earthquakes using the Gutenberg–Richter energy equation, Eq. (A): Log Es = 1.5 Ms + 11.8 (A). Kanamori used W0 in place of Es (in dyn·cm), considered a constant ratio W0/M0 = 5 × 10^−5 in Eq. (A), and estimated Ms, denoting the result Mw (in dyn·cm).
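As a numerical illustration of the Hanks–Kanamori definition quoted above, the short Python sketch below converts between seismic moment and moment magnitude. The function names and example values are ours; the formula is the one given in the Definition paragraph.

```python
import math

def mw_from_moment(m0_dyn_cm: float) -> float:
    """Moment magnitude from seismic moment in dyne·cm (definition above)."""
    return (2.0 / 3.0) * math.log10(m0_dyn_cm) - 10.7

def mw_from_moment_si(m0_nm: float) -> float:
    """Same scale, with M0 given in N·m (1 N·m = 1e7 dyne·cm)."""
    return mw_from_moment(m0_nm * 1.0e7)

def moment_from_mw(mw: float) -> float:
    """Invert the definition to get M0 in N·m for a given Mw."""
    return 10 ** (1.5 * (mw + 10.7)) / 1.0e7

print(round(mw_from_moment_si(1.1e9), 2))  # ≈ 0.0, the microearthquake case
print(f"{moment_from_mw(9.5):.1e} N·m")    # ≈ 2.0e+23, 1960 Chile at Mw ~9.5
```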
The energy Eq. (A) is derived by substituting m = 2.5 + 0.63 M in the energy equation Log E = 5.8 + 2.4 m (Richter 1958), where m is the Gutenberg unified magnitude and M is a least-squares approximation to the magnitude determined from surface wave magnitudes. After substituting the ratio of seismic energy (E) and seismic moment (M0), i.e., E/M0 = 5 × 10^−5, into the Gutenberg–Richter energy magnitude Eq. (A), Hanks and Kanamori provided Eq. (B): Log M0 = 1.5 Ms + 16.1 (B). Note that Eq. (B) had already been derived by Hiroo Kanamori, who termed it Mw. Eq. (B) was based on large earthquakes; hence, in order to validate it for intermediate and smaller earthquakes, Hanks and Kanamori (1979) compared it with Eq. (1) of Purcaru and Berckhemer (1978) for the magnitude range 5.0 ≤ Ms ≤ 7.5. Note that Eq. (1) of Purcaru and Berckhemer (1978) is not reliable over this range, owing to the inconsistency of the defined magnitude ranges (moderate to large earthquakes defined as Ms ≤ 7.0 and Ms = 7–7.5) and to scarce data in the lower magnitude range (Ms ≤ 7.0), which poorly represents global seismicity (e.g., see Figs. 1A, B, 4 and Table 2 of Purcaru and Berckhemer 1978). Furthermore, Eq. (1) of Purcaru and Berckhemer (1978) is only valid for Ms ≤ 7.0. Relations between seismic moment, potential energy released and radiated energy Seismic moment is not a direct measure of energy changes during an earthquake. The relations between seismic moment and the energies involved in an earthquake depend on parameters that have large uncertainties and that may vary between earthquakes. Potential energy is stored in the crust in the form of elastic energy due to built-up stress and gravitational energy. During an earthquake, a portion of this stored energy is transformed into energy dissipated in frictional weakening and inelastic deformation of rocks (by processes such as the creation of cracks), heat, and radiated seismic energy. The potential energy drop caused by an earthquake is related approximately to its seismic moment by ΔW ≈ (σ̄/μ) M0, where σ̄ is the average of the absolute shear stresses on the fault before and after the earthquake and μ is the average of the shear moduli of the rocks that constitute the fault. Currently, there is no technology to measure absolute stresses at all depths of interest, nor any method to estimate them accurately, so σ̄ is poorly known. It could vary highly from one earthquake to another. Two earthquakes with identical M0 but different σ̄ would have released different ΔW. The radiated energy caused by an earthquake is approximately related to seismic moment by Es ≈ ηR (Δσs/2μ) M0, where ηR is the radiated efficiency and Δσs is the static stress drop, i.e., the difference between the shear stresses on the fault before and after the earthquake. These two quantities are far from being constants. For instance, ηR depends on rupture speed; it is close to 1 for regular earthquakes but much smaller for slower earthquakes such as tsunami earthquakes and slow earthquakes. Two earthquakes with identical M0 but different ηR or Δσs would have radiated different Es. Because Es and M0 are fundamentally independent properties of an earthquake source, and since Es can now be computed more directly and robustly than in the 1970s, introducing a separate magnitude associated to radiated energy was warranted. Choy and Boatwright defined in 1995 the energy magnitude Me = (2/3) Log10 Es − 3.2, where Es is in J (N·m).
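The approximate moment-energy relations above can also be sketched numerically. In the snippet below the radiated efficiency, static stress drop, and shear modulus are assumed illustrative values; the defaults are chosen so that Es/M0 = 5 × 10^−5, the ratio used earlier in this section.

```python
import math

MU = 3.0e10  # assumed average shear modulus, in Pa

def radiated_energy(m0_nm: float, eta_r: float = 1.0,
                    stress_drop_pa: float = 3.0e6) -> float:
    """Es ≈ eta_R * (stress_drop / (2 * mu)) * M0, everything in SI units."""
    return eta_r * stress_drop_pa / (2.0 * MU) * m0_nm

def energy_magnitude(es_joules: float) -> float:
    """Choy & Boatwright energy magnitude, Me = (2/3) log10(Es) - 3.2."""
    return (2.0 / 3.0) * math.log10(es_joules) - 3.2

es = radiated_energy(1.2e20)  # moment of a hypothetical Mw ~7.3 event
print(f"Es ≈ {es:.1e} J, Me ≈ {energy_magnitude(es):.2f}")  # ~6.0e+15 J, ~7.3
```

With these assumptions Me comes out close to Mw, as expected for a "regular" earthquake with efficiency near 1; a tsunami earthquake with a much smaller eta_r would give Me well below Mw.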
Comparative energy released by two earthquakes Assuming the values of σ̄/μ are the same for all earthquakes, one can consider M0 as a measure of the potential energy change ΔW caused by earthquakes. Similarly, if one assumes ηR Δσs/2μ is the same for all earthquakes, one can consider M0 as a measure of the energy Es radiated by earthquakes. Under these assumptions, the following formula, obtained by solving for M0 the equation defining Mw, allows one to assess the ratio of energy release (potential or radiated) between two earthquakes of different moment magnitudes Mw1 and Mw2: E1/E2 ≈ 10^(1.5 (Mw1 − Mw2)). (A short numerical sketch of this relation, and of the TNT comparison below, follows at the end of this section.) As with the Richter scale, an increase of one step on the logarithmic scale of moment magnitude corresponds to a 10^1.5 ≈ 32 times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 = 1,000 times increase in energy. Thus, an earthquake of Mw 7.0 releases about 1,000 times as much energy as one of Mw 5.0 and about 32 times that of one of Mw 6.0. Comparison with TNT equivalents To make the significance of the magnitude value plausible, the seismic energy released during the earthquake is sometimes compared to the effect of the conventional chemical explosive TNT. Following the Gutenberg–Richter formula above, the seismic energy is Es = 10^(4.8 + 1.5 Mw) joules, which can in turn be converted into an equivalent number of Hiroshima bombs. For comparison of seismic energy (in joules) with the corresponding explosion energy, a value of 4.2 × 10^9 joules per ton of TNT applies. The table illustrates the relationship between seismic energy and moment magnitude. The end of the scale is at the value 10.6, corresponding to the assumption that at this value the Earth's crust would have to break apart completely. Subtypes of Mw Various ways of determining moment magnitude have been developed, and several subtypes of the scale can be used to indicate the basis used. Mwb – Based on moment tensor inversion of long-period (~10–100 s) body waves. Mwr – From a moment tensor inversion of complete waveforms at regional distances (~1,000 miles); sometimes called RMT. Mwc – Derived from a centroid moment tensor inversion of intermediate- and long-period body and surface waves. Mww – Derived from a centroid moment tensor inversion of the W-phase. Mwp (Mi) – Developed by Seiji Tsuboi for quick estimation of the tsunami potential of large near-coastal earthquakes from measurements of the P waves, and later extended to teleseismic earthquakes in general. Mwpd – A duration-amplitude procedure which takes into account the duration of the rupture, providing a fuller picture of the energy released by longer-lasting ("slow") ruptures than seen with Mw. – Rapidly estimates earthquake magnitude by combining maximum displacements of the teleseismic P wave with source durations.
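As noted above, here is a short numerical sketch of the energy-ratio and TNT-equivalent relations. The function names are ours; the constants are exactly those quoted in this section.

```python
def energy_ratio(mw1: float, mw2: float) -> float:
    """Ratio of energy release between two moment magnitudes: 10**(1.5 dMw)."""
    return 10 ** (1.5 * (mw1 - mw2))

def seismic_energy_joules(mw: float) -> float:
    """Gutenberg–Richter radiated energy, Es = 10**(4.8 + 1.5 Mw) joules."""
    return 10 ** (4.8 + 1.5 * mw)

def tnt_tons(mw: float) -> float:
    """Equivalent tons of TNT, at 4.2e9 joules per ton as quoted above."""
    return seismic_energy_joules(mw) / 4.2e9

print(energy_ratio(7.0, 5.0))               # 1000.0
print(round(energy_ratio(7.0, 6.0)))        # ~32
print(f"{tnt_tons(6.0):,.0f} tons of TNT")  # ~15,000 tons (about 15 kilotons)
```

The Mw 6.0 case works out to roughly 6.3 × 10^13 J, or about 15 kilotons of TNT, which is of the same order as the Hiroshima bomb and is consistent with the Hiroshima comparison made above.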
Physical sciences
Seismology
Earth science
631508
https://en.wikipedia.org/wiki/Siphonophorae
Siphonophorae
Siphonophorae (from Greek siphōn 'tube' + pherein 'to bear') is an order within Hydrozoa, which is a class of marine organisms within the phylum Cnidaria. According to the World Register of Marine Species, the order contains 175 species described thus far. Siphonophores are highly polymorphic and complex organisms. Although they may appear to be individual organisms, each specimen is in fact a colonial organism composed of medusoid and polypoid zooids that are morphologically and functionally specialized. Zooids are multicellular units that develop from a single fertilized egg and combine to create functional colonies able to reproduce, digest, float, maintain body positioning, and use jet propulsion to move. Most colonies are long, thin, transparent floaters living in the pelagic zone. Like other hydrozoans, some siphonophores emit light to attract and attack prey. While many sea animals produce blue and green bioluminescence, a siphonophore in the genus Erenna was only the second life form found to produce a red light (the first one being the scaleless dragonfish Chirostomias pliopterus). Anatomy and morphology Colony characteristics Siphonophores are colonial hydrozoans that do not exhibit alternation of generations but instead reproduce asexually through a budding process. Zooids are the multicellular units that build the colonies. A single bud called the pro-bud initiates the growth of a colony by undergoing fission. Each zooid is produced to be genetically identical; however, mutations can alter their functions and increase the diversity of the zooids within the colony. Siphonophores are unique in that the pro-bud initiates the production of diverse zooids with specific functions. The functions and organizations of the zooids in colonies vary widely among the different species; however, the majority of colonies are bilaterally arranged, with dorsal and ventral sides to the stem. The stem is the vertical branch in the center of the colony to which the zooids attach. Zooids typically have special functions, and thus assume specific spatial patterns along the stem. General morphology Siphonophores typically exhibit one of three standard body plans, matching the suborders Cystonectae, Physonectae, and Calycophorae. Cystonects have a long stem with the attached zooids. Each group of zooids has a gastrozooid. The gastrozooid has a tentacle used for capturing and digesting food. The groups also have gonophores, which are specialized for reproduction. Cystonects use a pneumatophore, a gas-filled float, on their anterior end and drift at the surface of the water or stay afloat in the deep sea. Physonects have a pneumatophore and a nectosome, which harbors the nectophores used for jet propulsion. The nectophores pump water backwards in order to move forward. Calycophorans differ from cystonects and physonects in that they have two nectophores and no pneumatophore. Instead they often possess oil-filled glands which likely help with buoyancy. Siphonophores possess multiple types of zooid. Scientists have proposed two evolutionary hypotheses for this observation: 1. As time has gone on, the number of zooid types has increased. 2. The last common ancestor had many types of zooids, and the diversity seen today is due to loss of zooid types. Research shows no evidence supporting the first hypothesis, but has found some evidence in support of the second. Zooids Siphonophores can have zooids that are either polyps or medusae. However, zooids are unique and can develop to have different functions.
Nectophores Nectophores are medusae that assist in the propulsion and movement of some siphonophores in water. They are characteristic of physonects and calycophorans. The nectophores of these organisms are located in the nectosome, where they can coordinate the swimming of colonies. The nectophores have also been observed working in conjunction with reproductive structures in order to provide propulsion during colony detachment. Bracts Bracts are zooids that are unique to the siphonophore order. They function in protection and in maintaining neutral buoyancy. However, bracts are not present in all species of siphonophore. Gastrozooids Gastrozooids are polyps that have evolved a function to assist in the feeding of siphonophores. Palpons Palpons are modified gastrozooids that function in digestion by regulating the circulation of gastrovascular fluids. Gonophores Gonophores are zooids that are involved in the reproductive processes of the siphonophores. Pneumatophores The presence of pneumatophores characterizes the subgroups Cystonectae and Physonectae. They are gas-filled floats that are located at the anterior end of the colonies in these species. They function to help the colonies maintain their orientation in water. In the Cystonectae subgroup, the pneumatophores have the additional function of assisting with flotation of the organisms. The siphonophores exhibiting the feature develop the structure in early larval development via invaginations of the flattened planula structure. Further observations of the siphonophore species Nanomia bijuga indicate that the pneumatophore potentially also functions to sense pressure changes and regulate chemotaxis in some species. Distribution and habitat Currently, the World Register of Marine Species (WoRMS) identifies 175 species of siphonophores. They can differ greatly in terms of size and shape, which largely reflects the environment that they inhabit. Siphonophores are most often pelagic organisms, yet a few species are benthic. Smaller, warm-water siphonophores typically live in the epipelagic zone and use their tentacles to capture zooplankton and copepods. Larger siphonophores live in deeper waters, as they are generally longer and more fragile and must avoid strong currents. They mostly feed on larger prey. The majority of siphonophores live in the deep sea and can be found in all of the oceans. Siphonophore species rarely only inhabit one location. Some species, however, can be confined to a specific range of depths and/or an area of the ocean. Behavior Movement Siphonophores use a method of locomotion similar to jet propulsion. A siphonophore is a complex aggregate colony made up of many nectophores, which are clonal individuals that form by budding and are genetically identical. Depending on where each individual nectophore is positioned within the siphonophore, its function differs. Colonial movement is determined by individual nectophores of all developmental stages. The smaller individuals are concentrated towards the top of the siphonophore, and their function is turning and adjusting the orientation of the colony. Individuals grow larger as they age. The larger individuals are located at the base of the colony, and their main function is thrust propulsion. These larger individuals are important in attaining the maximum speed of the colony. Every individual is key to the movement of the aggregate colony, and understanding their organization may allow us to make advances in our own multi-jet propulsion vehicles.
The colonial organization of siphonophores, particularly in Nanomia bijuga, confers evolutionary advantages. A large number of concentrated individuals allows for redundancy. This means that even if some individual nectophores become functionally compromised, their role is bypassed, so the colony as a whole is not negatively affected. The velum, a thin band of tissue surrounding the opening of the jet, also plays a role in swimming patterns, shown specifically through research done on the previously mentioned species N. bijuga. The velum becomes smaller and more circular during times of forward propulsion, compared to the large velum that is seen during refill periods. Additionally, the position of the velum changes with swimming behaviors; the velum is curved downward in times of jetting, but during refill, the velum is moved back into the nectophore. The siphonophore Nanomia bijuga also practices diel vertical migration, as it remains in the deep sea during the day but rises during the night. Predation and feeding Siphonophores are predatory carnivores. Their diets consist of a variety of copepods, other small crustaceans, and small fish. Generally, the diets of strong-swimming siphonophores consist of smaller prey, and the diets of weak-swimming siphonophores consist of larger prey. A majority of siphonophores have gastrozooids with a characteristic tentacle attached to the base of the zooid. This structural feature functions in assisting the organisms in catching prey. Species with large gastrozooids are capable of consuming a broad range of prey sizes. Similar to many other organisms in the phylum Cnidaria, many siphonophore species carry nematocyst stinging capsules on branches of their tentacles called tentilla. The nematocysts are arranged in dense batteries on the side of the tentilla. When the siphonophore encounters potential prey, its tentilla react, and the tentacles transform their shape around the prey to create a net. The nematocysts then shoot millions of paralyzing, and sometimes fatal, toxin molecules into the trapped prey, which is then transferred to the proper location for digestion. Some species of siphonophores use aggressive mimicry, using bioluminescent light so the prey cannot properly identify the predator. There are four types of nematocysts in siphonophore tentilla: heteronemes, haplonemes, desmonemes, and rhopalonemes. Heteronemes are the largest nematocysts and bear spines on a shaft close to tubules attached to the center of the siphonophore. Haplonemes have open-tipped tubules with spines, but no distinct shaft. This is the most common nematocyst among siphonophores. Desmonemes do not have spines; instead there are adhesive properties on the tubules to hold onto prey. Rhopalonemes are nematocysts with wide tubules for prey. Due to the scarcity of food in the deep-sea environment, a majority of siphonophore species employ a sit-and-wait tactic for food. The gelatinous body plan allows for flexibility when catching prey, but the gelatinous adaptations are based on habitat. They swim around waiting for their long tentacles to encounter prey. In addition, siphonophores in the group Erenna have the ability to generate bioluminescence and red fluorescence while their tentilla twitch in a way that mimics the motions of small crustaceans and copepods. These actions entice the prey to move closer to the siphonophore, allowing it to trap and digest them.
Reproduction The modes of reproduction for siphonophores vary among the different species, and to this day several modes remain unknown. Generally, a single zygote begins the formation of a colony of zooids. The fertilized egg matures into a protozooid, which initiates the budding process and the creation of new zooids. This process repeats until a colony of zooids forms around the central stalk. In contrast, several species reproduce using polyps. Polyps can hold eggs and/or sperm and can be released into the water from the posterior end of the siphonophore. The polyps may then be fertilized outside of the organism. Siphonophores use gonophores to make the reproductive gametes. Gonophores are either male or female; however, the types of gonophores in a colony can vary among species. Species are characterized as monoecious or dioecious based on their gonophores. Monoecious species contain male and female gonophores in a single zooid colony, whereas dioecious species harbor male and female gonophores separately in different colonies of zooids. Bioluminescence Nearly all siphonophores have bioluminescent capabilities. Since these organisms are extremely fragile, they are rarely observed alive. Bioluminescence in siphonophores is thought to have evolved as a defense mechanism. Siphonophores of the deep-sea genus Erenna are thought to use their bioluminescent capability for offense too, as a lure to attract fish. This genus is one of the few to prey on fish rather than crustaceans. The bioluminescent organs, called tentilla, on these non-visual individuals emit red fluorescence along with a rhythmic flicking pattern, which attracts prey as it resembles smaller organisms such as zooplankton and copepods. Thus, it has been concluded that they use luminescence as a lure to attract prey. Some research indicates that deep-sea organisms cannot detect long wavelengths, and red light has a long wavelength of 680 nm. If this is the case, then fish are not lured by Erenna, and there must be another explanation. However, the deep sea remains largely unexplored, and red-light sensitivity in fish such as Cyclothone and the deep-sea myctophid fishes should not be ruled out. Bioluminescent lures are found in many different species of siphonophores, and are used for a variety of reasons. Species such as Agalma okeni, Athorybia rosacea, Athorybia lucida, and Lychnagalma utricularia use their lures as a mimicry device to attract prey. A. rosacea mimic fish larvae, A. lucida are thought to mimic larvacean houses, and L. utricularia mimic hydromedusae. The species Resomia ornicephala uses its green and blue fluorescing tentilla to attract krill, helping it to outcompete other organisms hunting for the same prey. Siphonophores from the genus Erenna use bioluminescent lures surrounded by red fluorescence to attract prey and possibly mimic a fish from the genus Cyclothone. Their prey is lured in through a unique flicking behavior associated with the tentilla. When young, the tentilla of organisms in the Erenna genus contain only bioluminescent tissue, but, as the organism ages, red fluorescent material is also present in these tissues. Taxonomy Organisms in the order Siphonophorae have been classified into the phylum Cnidaria and the class Hydrozoa. The phylogenetic relationships of siphonophores have been of great interest due to the high variability of the organization of their polyp colonies and medusae.
Although they were once believed to be a highly distinct group, larval similarities and morphological features have led researchers to conclude that siphonophores evolved from simpler colonial hydrozoans similar to those in the orders Anthoathecata and Leptothecata. Consequently, they are now united with these in the subclass Hydroidolina. Early analyses divided siphonophores into three main subgroups based on the presence or absence of two different traits: swimming bells (nectophores) and floats (pneumatophores). The subgroups consisted of Cystonectae, Physonectae, and Calycophorae. Cystonectae had pneumatophores, Calycophorae had nectophores, and Physonectae had both. Analyses of the eukaryotic nuclear small-subunit ribosomal gene 18S, the eukaryotic mitochondrial large-subunit ribosomal gene 16S, and transcriptomes further support the phylogenetic division of Siphonophorae into two main clades: Cystonectae and Codonophora. Suborders within Codonophora include Physonectae (consisting of the clades Calycophorae and Euphysonectae), Pyrostephidae, and Apolemiidae. Suborder Calycophorae Abylidae Agassiz, 1862 Clausophyidae Totton, 1965 Diphyidae Quoy & Gaimard, 1827 Hippopodiidae Kölliker, 1853 Prayidae Kölliker, 1853 Sphaeronectidae Huxley, 1859 Tottonophyidae Pugh, Dunn & Haddock, 2018 Suborder Cystonectae Physaliidae Brandt, 1835 Rhizophysidae Brandt, 1835 Suborder Physonectae Agalmatidae Brandt, 1834 Apolemiidae Huxley, 1859 Cordagalmatidae Pugh, 2016 Erennidae Pugh, 2001 Forskaliidae Haeckel, 1888 Physophoridae Eschscholtz, 1829 Pyrostephidae Moser, 1925 Resomiidae Pugh, 2006 Rhodaliidae Haeckel, 1888 Stephanomiidae Huxley, 1859 History Discovery Carl Linnaeus described the first siphonophore, the Portuguese man o' war, in 1758. The discovery rate of siphonophore species was slow in the 18th century, as only four additional species were found. During the 19th century, 56 new species were observed, owing to research voyages conducted by European powers. The majority of new species found during this time period were collected in coastal surface waters. During the HMS Challenger expedition, various species of siphonophores were collected. Ernst Haeckel attempted to write up all of the species of siphonophores collected on this expedition. He introduced 46 "new species"; however, his work was heavily critiqued because some of the species that he identified were eventually found not to be siphonophores. Nonetheless, some of his descriptions and figures (pictured below) are considered useful by modern biologists. A rate of about 10 new-species discoveries per decade was observed during the 20th century. Considered the most important researcher of siphonophores, A. K. Totton introduced 23 new species of siphonophores during the mid-20th century. On April 6, 2020, the Schmidt Ocean Institute announced the discovery of a giant Apolemia siphonophore in submarine canyons near the Ningaloo Coast, measuring 15 m (49 ft) in diameter with a ring approximately 47 m (154 ft) long, possibly the largest siphonophore, and the longest animal, ever recorded. There is no fossil record of siphonophores, though they have evolved and adapted over an extensive time period. Their phylum, Cnidaria, is an ancient lineage that dates back to c. 640 million years ago. Haeckel's siphonophores Ernst Haeckel described numerous siphonophores, and several plates from his Kunstformen der Natur (1904) depict members of the taxon.
Biology and health sciences
Cnidarians
Animals
631930
https://en.wikipedia.org/wiki/Membrane%20transport
Membrane transport
In cellular biology, membrane transport refers to the collection of mechanisms that regulate the passage of solutes such as ions and small molecules through biological membranes, which are lipid bilayers that contain proteins embedded in them. The regulation of passage through the membrane is due to selective membrane permeability, a characteristic of biological membranes which allows them to separate substances of distinct chemical nature. In other words, they can be permeable to certain substances but not to others. The movements of most solutes through the membrane are mediated by membrane transport proteins, which are specialized to varying degrees in the transport of specific molecules. As the diversity and physiology of distinct cell types are closely related to their capacities to take up different external elements, it is postulated that there is a group of specific transport proteins for each cell type and for every specific physiological stage. This differential expression is regulated through the differential transcription of the genes coding for these proteins and the translation of their transcripts, for instance through genetic-molecular mechanisms, but also at the cell-biology level: the production of these proteins can be activated by cellular signaling pathways at the biochemical level, or even by being situated in cytoplasmic vesicles. The cell membrane regulates the transport of materials entering and exiting the cell. Background Thermodynamically the flow of substances from one compartment to another can occur in the direction of a concentration or electrochemical gradient or against it. If the exchange of substances occurs in the direction of the gradient, that is, in the direction of decreasing potential, there is no requirement for an input of energy from outside the system; if, however, the transport is against the gradient, it will require the input of energy, metabolic energy in this case. For example, a classic chemical mechanism for separation that does not require the addition of external energy is dialysis. In this system a semipermeable membrane separates two solutions of different concentration of the same solute. If the membrane allows the passage of water but not the solute, the water will move into the compartment with the greatest solute concentration in order to establish an equilibrium in which the energy of the system is at a minimum. This takes place because the water moves from a high solvent concentration to a low one (in terms of the solute, the opposite occurs) and, because the water is moving along a gradient, there is no need for an external input of energy. The nature of biological membranes, especially that of their lipids, is amphiphilic, as they form bilayers that contain an internal hydrophobic layer and an external hydrophilic layer. This structure makes transport possible by simple or passive diffusion, which consists of the diffusion of substances through the membrane without expending metabolic energy and without the aid of transport proteins. If the transported substance has a net electrical charge, it will move not only in response to a concentration gradient, but also to an electrochemical gradient due to the membrane potential. As few molecules are able to diffuse through a lipid membrane, the majority of the transport processes involve transport proteins. These transmembrane proteins possess a large number of alpha helices immersed in the lipid matrix. In bacteria these proteins are instead present in the beta-sheet (beta-barrel) form.
This structure probably provides a conduit through hydrophilic protein environments that cause a disruption in the highly hydrophobic medium formed by the lipids. These proteins can be involved in transport in a number of ways: they act as pumps driven by ATP, that is, by metabolic energy, or as channels of facilitated diffusion. Thermodynamics A physiological process can only take place if it complies with basic thermodynamic principles. Membrane transport obeys physical laws that define its capabilities and therefore its biological utility. A general principle of thermodynamics that governs the transfer of substances through membranes and other surfaces is that the exchange of free energy, ΔG, for the transport of a mole of a substance at concentration C1 in one compartment into another compartment where it is present at C2 is: ΔG = RT ln(C2/C1), where R is the gas constant and T the absolute temperature. When C2 is less than C1, ΔG is negative, and the process is thermodynamically favorable. As the solute is transferred from one compartment to the other, except where other factors intervene, an equilibrium will be reached where C2 = C1, and where ΔG = 0. However, there are three circumstances under which this equilibrium will not be reached, circumstances which are vital for the in vivo functioning of biological membranes: The macromolecules on one side of the membrane can bind preferentially to a certain component of the membrane or chemically modify it. In this way, although the concentration of the solute may actually be different on both sides of the membrane, the availability of the solute is reduced in one of the compartments to such an extent that, for practical purposes, no gradient exists to drive transport. A membrane electrical potential can exist which can influence ion distribution. For example, for the transport of ions from the exterior to the interior, it is possible that: ΔG = RT ln(C2/C1) + ZFΔP, where F is Faraday's constant, Z the charge of the ion, and ΔP the membrane potential in volts. If ΔP is negative and Z is positive, the contribution of the term ZFΔP to ΔG will be negative, that is, it will favor the transport of cations towards the interior of the cell. So, if the potential difference is maintained, the equilibrium state ΔG = 0 will not correspond to an equimolar concentration of ions on both sides of the membrane. If a process with a negative ΔG is coupled to the transport process then the global ΔG will be modified. This situation is common in active transport and is described thus: ΔG = RT ln(C2/C1) + ΔGb, where ΔGb corresponds to a favorable thermodynamic reaction, such as the hydrolysis of ATP, or the co-transport of a compound that is moved in the direction of its gradient.
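A minimal numerical sketch of the free-energy relations above, assuming illustrative textbook-style concentrations and membrane potential rather than measured values:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday's constant, C/mol

def transport_delta_g(c1: float, c2: float, z: int = 0,
                      delta_p_volts: float = 0.0, temp_k: float = 310.0) -> float:
    """Free energy (J/mol) to move one mole of solute from concentration c1
    into a compartment at c2, across a membrane potential delta_p:
    dG = R*T*ln(C2/C1) + Z*F*dP (the relations given above)."""
    return R * temp_k * math.log(c2 / c1) + z * F * delta_p_volts

# Na+ entering an animal cell: ~145 mM outside, ~12 mM inside, about -70 mV.
dg = transport_delta_g(c1=0.145, c2=0.012, z=1, delta_p_volts=-0.070)
print(f"dG ≈ {dg / 1000:.1f} kJ/mol")  # ≈ -13.2 kJ/mol: inward flow is favorable
```

Both the concentration term and the electrical term are negative here, which is why Na+ entry is spontaneous and why the Na+ gradient can be harnessed for the co-transport described below.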
Transport types Passive diffusion and active diffusion As mentioned above, passive diffusion is a spontaneous phenomenon that increases the entropy of a system and decreases the free energy. The transport process is influenced by the characteristics of the transported substance and the nature of the bilayer. The diffusion velocity across a pure phospholipid membrane will depend on the concentration gradient, the hydrophobicity, size, and charge of the molecule (if it has a net charge), and the temperature. Active and co-transport In active transport a solute is moved against a concentration or electrochemical gradient; in doing so the transport proteins involved consume metabolic energy, usually ATP. In primary active transport the hydrolysis of the energy provider (e.g. ATP) takes place directly in order to transport the solute in question, for instance, when the transport proteins are ATPase enzymes. Where the hydrolysis of the energy provider is indirect, as is the case in secondary active transport, use is made of the energy stored in an electrochemical gradient. For example, in co-transport the gradients of certain solutes are used to transport a target compound against its gradient, causing the dissipation of the solute gradient. It may appear that, in this example, there is no energy use, but hydrolysis of the energy provider is required to establish the gradient of the solute transported along with the target compound. The gradient of the co-transported solute will be generated through the use of certain types of proteins called biochemical pumps. The discovery of the existence of this type of transporter protein came from the study of the kinetics of cross-membrane molecule transport. For certain solutes it was noted that the transport velocity reached a plateau at a particular concentration above which there was no significant increase in uptake rate, indicating a saturating, hyperbolic response (sketched numerically after this subsection). This was interpreted as showing that transport was mediated by the formation of a substrate-transporter complex, which is conceptually the same as the enzyme-substrate complex of enzyme kinetics. Therefore, each transport protein has an affinity constant for a solute that is equal to the concentration of the solute when the transport velocity is half its maximum value. This is equivalent in the case of an enzyme to the Michaelis–Menten constant. Some important features of active transport, in addition to its ability to proceed even against a gradient, are its kinetics, its use of ATP, its high selectivity, and the ease with which it can be selectively inhibited pharmacologically. Secondary active transporter proteins Secondary active transporter proteins move two molecules at the same time: one against a gradient and the other with its gradient. They are distinguished according to the directionality of the two molecules: antiporter (also called exchanger or counter-transporter): moves a molecule against its gradient and at the same time displaces one or more ions along their gradient. The molecules move in opposite directions. symporter: moves a molecule against its gradient while displacing one or more different ions along their gradient. The molecules move in the same direction. Both can be referred to as co-transporters.
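The saturating, Michaelis–Menten-like kinetics described above can be sketched as follows; Vmax and the affinity constant Kt are arbitrary illustrative values.

```python
def transport_velocity(c: float, v_max: float, k_t: float) -> float:
    """Carrier-mediated uptake rate v = Vmax * C / (Kt + C), where Kt is the
    affinity constant: the concentration at half-maximal velocity."""
    return v_max * c / (k_t + c)

# The velocity is half of Vmax at C = Kt and plateaus as C >> Kt,
# unlike simple diffusion, whose rate keeps growing linearly with C.
for c in (0.1, 1.0, 10.0, 100.0):
    print(f"C = {c:6.1f}   v = {transport_velocity(c, v_max=1.0, k_t=1.0):.3f}")
```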
Pumps A pump is a protein that hydrolyses ATP to transport a particular solute through a membrane, and in doing so generates an electrochemical gradient and a membrane potential. This gradient is of interest as an indicator of the state of the cell through parameters such as the Nernst potential. In terms of membrane transport the gradient is of interest as it contributes to decreased system entropy in the co-transport of substances against their gradient. One of the most important pumps in animal cells is the sodium-potassium pump, which operates through the following mechanism: binding of three Na+ ions to their active sites on the pump, which is bound to ATP; hydrolysis of ATP, leading to phosphorylation of the cytoplasmic side of the pump, which induces a structural change in the protein (the phosphorylation is caused by the transfer of the terminal phosphate group of ATP to an aspartate residue of the transport protein and the subsequent release of ADP); the structural change in the pump exposes the Na+ ions to the exterior, and the phosphorylated form of the pump has a low affinity for Na+ ions, so they are released; once the Na+ ions are liberated, the pump binds two K+ ions at their respective binding sites on the extracellular face of the transport protein, which causes dephosphorylation of the pump, reverting it to its previous conformational state and transporting the K+ ions into the cell; the unphosphorylated form of the pump has a higher affinity for Na+ ions than for K+ ions, so the two bound K+ ions are released into the cytosol; ATP then binds, and the process starts again. Membrane selectivity As the main characteristic of transport through a biological membrane is its selectivity and its subsequent behavior as a barrier for certain substances, the underlying physiology of the phenomenon has been studied extensively. Investigations into membrane selectivity have classically been divided into those relating to electrolytes and those relating to non-electrolytes. Electrolyte selectivity Ionic channels define an internal pore diameter that permits the passage of small ions, and selectivity is related to various characteristics of the ions that could potentially be transported. As the size of an ion is related to its chemical species, it could be assumed a priori that a channel whose pore diameter was sufficient to allow the passage of one ion would also allow the transfer of others of smaller size; however, this does not occur in the majority of cases. There are two characteristics alongside size that are important in determining the selectivity of the membrane pores: the facility for dehydration and the interaction of the ion with the internal charges of the pore. In order for an ion to pass through a pore it must dissociate itself from the water molecules that cover it in successive layers of solvation. The tendency to dehydrate, or the facility to do this, is related to the size of the ion: larger ions can do it more easily than smaller ions, so that a pore with weak polar centres will preferentially allow passage of larger ions over the smaller ones. When the interior of the channel is composed of polar groups from the side chains of the component amino acids, the interaction of a dehydrated ion with these centres can be more important than the facility for dehydration in conferring the specificity of the channel. For example, a channel made up of histidines and arginines, with positively charged groups, will selectively repel ions of the same polarity, but will facilitate the passage of negatively charged ions. Also, in this case, the smallest ions will be able to interact more closely due to the spatial arrangement of the molecule (stericity), which greatly increases the charge-charge interactions and therefore exaggerates the effect. Non-electrolyte selectivity Non-electrolytes, substances that generally are hydrophobic and lipophilic, usually pass through the membrane by dissolution in the lipid bilayer, and therefore by passive diffusion. For those non-electrolytes whose transport through the membrane is mediated by a transport protein the ability to diffuse is, generally, dependent on the partition coefficient K. Partially charged non-electrolytes, that are more or less polar, such as ethanol, methanol or urea, are able to pass through the membrane through aqueous channels immersed in the membrane. There is no effective regulation mechanism that limits this transport, which indicates an intrinsic vulnerability of the cells to the penetration of these molecules.
Creation of membrane transport proteins There are several databases which attempt to construct phylogenetic trees detailing the creation of transporter proteins. One such resource is the Transporter Classification Database.
Biology and health sciences
Cell processes
Biology
632331
https://en.wikipedia.org/wiki/Transit%20of%20Venus
Transit of Venus
A transit of Venus takes place when Venus passes directly between the Sun and the Earth (or any other superior planet), becoming visible against (and hence obscuring a small portion of) the solar disk. During a transit, Venus is visible as a small black circle moving across the face of the Sun. Transits of Venus recur periodically. A pair of transits takes place eight years apart in December (Gregorian calendar) followed by a gap of 121.5 years, before another pair occurs eight years apart in June, followed by another gap, of 105.5 years. The dates advance by about two days per 243-year cycle. The periodicity is a reflection of the fact that the orbital periods of Earth and Venus are close to 8:13 and 243:395 commensurabilities. The last pair of transits occurred on 8 June 2004 and 5–6 June 2012. The next pair of transits will occur on 10–11 December 2117 and 8 December 2125. Transits of Venus were in the past used to determine the size of the Solar System. The 2012 transit provided research opportunities, particularly in the refinement of techniques to be used in the search for exoplanets. Conjunctions The orbit of Venus is inclined by 3.39° relative to that of the Earth, and so Venus usually passes under (or over) the Sun at inferior conjunction as seen from the Earth. A transit occurs when Venus reaches conjunction with the Sun whilst also passing through the Earth's orbital plane, so that it passes directly across the face of the Sun. Sequences of transits usually repeat every 243 years, after which Venus and Earth have returned to nearly the same point in their respective orbits. During the Earth's 243 sidereal orbital periods, which total 88,757.3 days, Venus completes 395 sidereal orbital periods of 224.701 days each, which is equal to 88,756.9 Earth days. This period of time corresponds to 152 synodic periods of Venus. A pair of transits takes place eight years apart in December, followed by a gap of 121.5 years, before another pair occurs eight years apart in June, followed by another gap, of 105.5 years. Other patterns are possible within the 243-year cycle, because of the slight mismatch between the times when the Earth and Venus arrive at the point of conjunction. Prior to 1518, the pattern of transits was 8, 113.5, and 121.5 years, and the eight inter-transit gaps before the AD 546 transit were 121.5 years apart. The current pattern will continue until 2846, when it will be replaced by a pattern of 105.5, 129.5, and 8 years. Thus, the 243-year cycle is relatively stable, but the number of transits and their timing within the cycle vary over time. Since the 243:395 Earth:Venus commensurability is only approximate, there are different sequences of transits occurring 243 years apart, each extending for several thousand years, which are eventually replaced by other sequences. For instance, there is a series which ended in 541 BC, and the series which includes 2117 only started in AD 1631.
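The 243:395 commensurability quoted above is easy to verify numerically. In the sketch below, the Venus period is the value given in this article, while Earth's sidereal year of 365.25636 days is a standard value not stated in the text.

```python
EARTH_SIDEREAL_DAYS = 365.25636  # standard value (assumption, not from the text)
VENUS_SIDEREAL_DAYS = 224.701    # Venus sidereal period quoted above

earth_cycle = 243 * EARTH_SIDEREAL_DAYS  # ~88,757.3 days
venus_cycle = 395 * VENUS_SIDEREAL_DAYS  # ~88,756.9 days
print(f"243 Earth orbits: {earth_cycle:9.1f} days")
print(f"395 Venus orbits: {venus_cycle:9.1f} days")
print(f"243-year mismatch: {earth_cycle - venus_cycle:.1f} days")

# The shorter 8:13 pairing: 13 Venus orbits fall about 0.9 days (~22 hours)
# short of 8 Earth orbits, which is why transit pairs eventually break down.
print(f"8-year mismatch: {8 * EARTH_SIDEREAL_DAYS - 13 * VENUS_SIDEREAL_DAYS:.2f} days")
```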
History of observation of the transits Ancient Indian, Greek, Egyptian, Babylonian, and Chinese observers knew of Venus and recorded the planet's motions. Pythagoras is credited with realizing that the so-called morning and evening stars were really both the planet Venus. There is no evidence that any of these cultures observed planetary transits. It has been proposed that frescoes found at the Maya site at Mayapan may contain a pictorial representation of the 12th or 13th century transits. The Persian polymath Avicenna claimed to have observed Venus as a spot on the Sun. There was a transit on 24 May 1032, but Avicenna did not give the date of his observation, and modern scholars have questioned whether he could have observed the transit from his location; he may have mistaken a sunspot for Venus. He used his alleged transit observation to help establish that Venus was, at least sometimes, below the Sun in Ptolemaic cosmology, i.e., that the sphere of Venus comes before the sphere of the Sun when moving out from the Earth in the then prevailing geocentric model. 1631 and 1639 transits The German astronomer Johannes Kepler predicted the 1631 transit in 1627, but his methods were not sufficiently accurate to foresee that the transit would not be visible from most of Europe. As a consequence, astronomers were unable to use his prediction to observe the event. The first recorded observation of a transit of Venus was made by the English astronomer Jeremiah Horrocks from his home at Carr House in Much Hoole, near Preston, on 4 December 1639 (24 November O.S.). His friend William Crabtree observed the transit from nearby Broughton. Kepler had predicted transits in 1631 and 1761 and a near miss in 1639. Horrocks corrected Kepler's calculation for the orbit of Venus, realized that transits of Venus would occur in pairs 8 years apart, and so predicted the transit of 1639. Although he was uncertain of the exact time, he calculated that the transit was to begin at approximately 15:00. Horrocks focused the image of the Sun through a simple telescope onto paper, where he could observe the Sun without damaging his eyesight. After waiting for most of the day, he eventually saw the transit when clouds obscuring the Sun cleared at about 15:15, half an hour before sunset. His observations allowed him to make a well-informed guess at the diameter of Venus and an estimate of the mean distance between the Earth and the Sun. His observations were not published until 1661, well after Horrocks's death. Horrocks based his calculation on the (false) presumption that each planet's size was proportional to its rank from the Sun, not on the parallax effect used in the 1761, 1769, and later experiments. 1761 transit In 1663, the Scottish mathematician James Gregory had suggested in his Optica Promota that observations of a transit of Mercury, at widely spaced points on the surface of the Earth, could be used to calculate the solar parallax, and hence the astronomical unit, by means of triangulation. Aware of this, the English astronomer Edmond Halley made observations of such a transit on 28 October O.S. 1677 from the island of Saint Helena, but was disappointed to find that only Richard Towneley, in Burnley, Lancashire, had made another accurate observation of the event, whilst Gallet, at Avignon, had simply recorded that it had occurred. Halley was not satisfied that the resulting calculation of the solar parallax of 45″ was accurate. In a paper published in 1691, and a more refined one in 1716, Halley proposed that more accurate calculations could be made using measurements of a transit of Venus, although the next such event was not due until 1761 (6 June N.S., 26 May O.S.). In an attempt to observe the first transit of the pair, astronomers from Britain (William Wales and Captain James Cook), Austria (Maximilian Hell), and France (Jean-Baptiste Chappe d'Auteroche and Guillaume Le Gentil) took part in expeditions to places that included Siberia, Newfoundland, and Madagascar. Most of them observed at least part of the transit.
Jeremiah Dixon and Charles Mason succeeded in observing the transit at the Cape of Good Hope, but Nevil Maskelyne and Robert Waddington were less successful on Saint Helena, although they put their voyage to good use by trialling the lunar-distance method of finding longitude. Venus was generally thought to possess an atmosphere prior to the transit of 1761, but the possibility that it could be detected during a transit seems not to have been considered. The discovery of the planet's atmosphere has long been attributed to the Russian scientist Mikhail Lomonosov, who observed the 1761 transit from the Imperial Academy of Sciences in St. Petersburg. The attribution to Lomonosov seems to have arisen from comments made in 1966 by the astronomy writer Willy Ley, who wrote that Lomonosov had inferred the existence of an atmosphere from his observation of a luminous arc. The attribution has since been questioned. 1769 transit For the 1769 transit, scientists travelled to places all over the world. The Czech astronomer Christian Mayer was invited by the Russian empress Catherine the Great to observe the transit in Saint Petersburg with Anders Johan Lexell, while other members of the Russian Academy of Sciences went to eight other locations in the Russian Empire under the general coordination of Stepan Rumovsky. King George III of the United Kingdom had the King's Observatory built near his summer residence at Richmond Lodge, so that he and the Astronomer Royal, Stephen Demainbray, could observe the transit. Hell and his assistant János Sajnovics travelled to Vardø, Norway. Wales and Joseph Dymond went to Hudson Bay to observe the event. In Philadelphia, the American Philosophical Society erected three temporary observatories and appointed a committee led by David Rittenhouse. Observations were made by a group led by Dr. Benjamin West in Providence, Rhode Island. Observations were also made from Tahiti by James Cook and Charles Green at a location still known as Point Venus. D'Auteroche went to San José del Cabo in what was then New Spain to observe the transit with two Spanish astronomers (Vicente de Doz and Salvador de Medina). For his trouble he died in an epidemic of yellow fever there shortly after completing his observations; only 9 of the 28 in the entire party returned home alive. Le Gentil spent over eight years travelling in an attempt to observe either of the transits. Whilst abroad he was declared dead, and as a result he lost his wife and possessions. Upon his return he regained his seat in the French Academy and remarried. Under the influence of the Royal Society, the astronomer Ruđer Bošković travelled to Istanbul, but arrived after the transit had happened. In 1771, using the combined 1761 and 1769 transit data, the French astronomer Jérôme Lalande calculated the astronomical unit to have a value of about 153 million kilometres. The precision was less than had been hoped for because of the black drop effect. The value obtained was still an improvement on the calculations made by Horrocks. Hell published his results in 1770, which included a value for the astronomical unit of about 151.7 million kilometres. Lalande challenged the accuracy and authenticity of observations obtained by the Hell expedition, but later wrote an article in Journal des sçavans (1778) in which he retracted his comments. 1874 and 1882 transits Observations of the transits of 1874 and 1882 were used to refine the value obtained for the astronomical unit.
Three expeditions (from Germany, the United Kingdom, and the United States) were sent to the Kerguelen Archipelago for the 1874 observations. The American astronomer Simon Newcomb combined the data from the last four transits and arrived at a solar parallax of about 8.79″. 2004 and 2012 transits Scientific organisations led by the European Southern Observatory organised a network of amateur astronomers and students to measure Earth's distance from the Sun during the transit of 2004. The participants' observations allowed a calculation of the astronomical unit (AU) of 149,608,708 km ± 11,835 km, which differed from the accepted value by 0.007%. During the 2004 transit, scientists attempted to measure the loss of light as Venus blocked out some of the Sun's light, in order to refine techniques for discovering extrasolar planets. The 2012 transit of Venus provided scientists with research opportunities as well, in particular in regard to the study of exoplanets. The event additionally was the first of its kind to be documented from space, photographed aboard the International Space Station by NASA astronaut Don Pettit. The measurement of the dips in a star's brightness during a transit is one observation that can help astronomers find exoplanets. Unlike the 2004 Venus transit, the 2012 transit occurred during an active phase of the 11-year activity cycle of the Sun, and it gave astronomers an opportunity to practise picking up a planet's signal around a "spotty" variable star. Measurements made of the apparent diameter of a planet such as Venus during a transit allow scientists to estimate exoplanet sizes. Observations made of the atmosphere of Venus from Earth-based telescopes and from Venus Express gave scientists a better opportunity to understand the intermediate level of Venus's atmosphere than was possible from either viewpoint alone, and provided new information about the climate of the planet. Spectrographic data of the atmosphere of Venus can be compared to studies of the atmospheres of exoplanets. The Hubble Space Telescope used the Moon as a mirror to study light from the atmosphere of Venus, and so determine its composition. Future transits Transits usually occur in pairs, because the length of eight Earth years is almost the same as 13 years on Venus. This approximate conjunction is not precise enough to produce a triplet, as Venus arrives 22 hours earlier each time. The last transit not to be part of a pair was in 1396 (the planet passed slightly above the disc of the Sun in 1388); the next one will be in 3089. After 243 years the transits of Venus return. The 1874 transit is a member of 243-year cycle #1. The 1882 transit is a member of #2. The 2004 transit is a member of #3, and the 2012 transit is a member of #4. The 2117 transit is a member of #1, and so on. However, the ascending node (December transits) of the orbit of Venus moves backwards after each 243 years, so the transit of 2854 is the last member of series #3 instead of series #1. The descending node (June transits) moves forwards, so the transit of 3705 is the last member of #2. Over longer periods of time, new series of transits will start and old series will end. Unlike the saros series for lunar eclipses, it is possible for a transit series to restart after a hiatus. The transit series also vary much more in length than the saros series. Grazing and simultaneous transits Sometimes Venus only grazes the Sun during a transit.
In this case it is possible that in some areas of the Earth a full transit can be seen while in other regions there is only a partial transit (no second or third contact). The last transit of this type was on 6 December 1631, and the next such transit will occur on 13 December 2611. It is also possible that a transit of Venus can be seen in some parts of the world as a partial transit, while in others Venus misses the Sun. Such a transit last occurred on 19 November 541 BC, and the next transit of this type will occur on 14 December 2854. These effects are due to parallax, since the size of the Earth affords different points of view with slightly different lines of sight to Venus and the Sun. It can be demonstrated by closing an eye and holding a finger in front of a smaller, more distant object; when the viewer opens the other eye and closes the first, the finger will no longer be in front of the object. Simultaneous transits of Mercury and Venus do occur, but extremely infrequently. Such an event last occurred on 22 September 373,173 BC and will next occur on 26 July 69,163, and again on 29 March 224,504. The simultaneous occurrence of a solar eclipse and a transit of Venus is currently possible, but very rare. The next solar eclipse occurring during a transit of Venus will be on 5 April 15,232. In popular culture The Canadian rock band Three Days Grace titled their fourth studio album Transit of Venus and announced the album title and release date on June 5, 2012, the date of the last transit of Venus. The album's first song, "Sign of the Times", references the transit in the lyric "Venus is passing by". The progressive rock band Big Big Train have a song titled "The Transit of Venus Across the Sun"; it is the fifth track on their ninth album, Folklore. The Transit of Venus March was written by John Philip Sousa in 1883 to commemorate the 1882 transit.
Simultaneous transits of Mercury and Venus do occur, but extremely infrequently. Such an event last occurred on 22 September 373,173 BC and will next occur on 26 July 69,163, and again on 29 March 224,504. The simultaneous occurrence of a solar eclipse and a transit of Venus is currently possible, but very rare. The next solar eclipse occurring during a transit of Venus will be on 5 April 15,232.
In popular culture
The Canadian rock band Three Days Grace titled their fourth studio album Transit of Venus and announced the album title and release date on June 5, 2012, the date of the last transit of Venus. The album's first song, "Sign of the Times", references the transit in the lyric "Venus is passing by". The progressive rock band Big Big Train have a song titled "The Transit of Venus Across the Sun"; it is the fifth track on their ninth album, Folklore. The Transit of Venus March was written by John Philip Sousa in 1883 to commemorate the 1882 transit.
Physical sciences
Celestial mechanics
Astronomy
632354
https://en.wikipedia.org/wiki/Cahora%20Bassa
Cahora Bassa
The Cahora Bassa lake—in the Portuguese colonial era (until 1974) known as Cabora Bassa, from Nyungwe Kahoura-Bassa, meaning "finish the job"—is Africa's fourth-largest artificial lake, situated in the Tete Province in Mozambique. In Africa, only Lake Volta in Ghana, Lake Kariba on the Zambezi upstream of Cahora Bassa, and Egypt's Lake Nasser are bigger in terms of surface area.
History
Portuguese period
The Cahora Bassa System started in the late 1960s as a project of the Portuguese in the Overseas Province of Mozambique. Southern African governments were also involved in an agreement stating that Portugal would build and operate a hydroelectric generating station at Cabora Bassa (as it was then called in Portuguese) together with the high-voltage direct current (HVDC) transmission system required to bring electricity to the border of South Africa. South Africa, for its part, undertook to build and operate the Apollo converter station near Pretoria and the part of the transmission system required to bring the electricity from the South African–Mozambican border to that station. South Africa was then obliged to buy the electricity that Portugal was obliged to supply. During the struggle for independence, construction materials for the dam were repeatedly attacked by Frelimo guerrillas in a strategic move, as the dam's completion would widen the lake so much that it would take a very long time to cross by canoe. The dam began to fill in December 1974, after the Carnation Revolution in Portugal and the signing of the independence agreement. Mozambique officially became independent from Portugal on 25 June 1975. Until November 2007, the dam was operated by Hidroeléctrica de Cahora Bassa (HCB) and jointly owned by Mozambique, with an 18% equity stake, and Portugal, which held the remaining 82%. On 27 November 2007, Mozambique assumed control of the dam from Portugal, when Portugal sold to Mozambique most of its 82 percent stake. Finance Minister Fernando Teixeira dos Santos said Portugal would collect US$950 million (€750 million) from the sale of its part of southern Africa's largest hydropower project. Portugal kept a 15 percent stake, though it planned to sell off another 10 percent at a later stage to an investor that would be proposed by the Mozambican government. Portugal's Prime Minister, José Sócrates, signed the agreement with the Mozambican government during an official visit to Maputo. The agreement ended decades of dispute between Portugal and its former colony over the company, Hidroeléctrica de Cahora Bassa. The central disagreement was over the handling of the company's estimated US$2.2 billion (€1.7 billion) debt to Portugal. Mozambican authorities argued they had not guaranteed the debt and therefore should not be liable for the payments.
Independent Mozambique
Mozambique became independent from Portugal on 25 June 1975. Since the dam's closure, the Zambezi, which is the fourth-largest floodplain river in Africa, has received a far more regulated flow rate, but disastrous natural floods still occur. The 1978 flood caused 45 deaths, displaced 100,000 people, and did $62 million worth of damage. According to engineering consultants, "This was the first flood since completion of Cahora Bassa, and destroyed the widely held belief that the dam would finally bring flooding under full control". For further details of ecological problems caused by the dam, see the article on the Zambezi River.
During the Mozambican Civil War (1977–1992) the transmission lines were sabotaged to the extent that 1,895 towers needed to be replaced and 2,311 refurbished over a distance of 893 km on the Mozambican side of the line. In the 1990s, after the end of the civil war, Hidroeléctrica de Cahora Bassa (HCB) appointed South Africa's Trans-Africa Projects (TAP) to perform the construction management, quality assurance and design support services for the rehabilitation of the project. TAP assisted HCB in awarding the construction contract to a joint venture comprising Consorzio Italia 2000 and Enel, and a period of 24 months was scheduled for the project. The lines in South Africa were damaged only to a minor extent, and normal maintenance was all that Eskom required to get them back in operation. Work on the project started in August 1995. The line route in Mozambique passes through dense bush and difficult terrain from Songo to the South African border near Pafuri, with both servitudes infested with landmines from the civil war that needed to be cleared before construction work could commence. Heavy, unseasonable rainfall later affected the programme to such an extent that the first line could only be completed in August 1997 and the second in November of that same year. During the refurbishment period, TAP developed and implemented various designs and construction methods to improve the overall programme schedule and project costs. In spite of the extreme conditions in which the lines had to be refurbished and reconstructed, the work was completed on schedule and within a limited budget. The lines have, since completion, been subject to numerous tests and energised to their full potential. About 1,100 people were employed during the peak periods of construction. The rainfall and severe flooding during February 2000 in the Limpopo River valley again caused considerable damage to both lines, to the extent that about 10 towers collapsed and needed to be reconstructed within the shortest possible timeframe to restore the power supply to South Africa. TAP was again entrusted by HCB with the engineering, procurement and construction management services. TAP managed to temporarily restore the power supply through one line while a more permanent solution was carried out on the other; the reconstructed line is used to carry the full line capacity. TAP had to implement unconventional construction techniques to restore the temporary supply. The suspension towers next to the river crossing posed a significant challenge for the temporary power solution, which had to obtain the required clearances over the 711-metre span of level terrain. On April 27, 2009, four foreign nationals were arrested for putting a "highly corrosive" substance into the lake in an alleged attempt to sabotage the power station. Those arrested claimed to be a team from Orgonise Africa, placing orgonite pieces in the lake to improve the quality of the etheric energy (life force) of the dam. Since 2005, the area has been considered a Lion Conservation Unit.
Related economic activities
Most of the electricity generated by Cahora Bassa, which is located on the Zambezi River in western Mozambique, is sold to nearby South Africa. In 2006, Cahora Bassa transmitted about 1,920 megawatts of power, but the infrastructure is capable of higher production levels, and the company had plans to almost double its output by 2008. In 1994 the total installed capacity in Mozambique was 2,400 MW, of which 91% was hydroelectric.
A considerable kapenta fishery has developed in the reservoir. The kapenta is assumed to originate from Lake Kariba, where it was introduced from Lake Tanganyika. The annual catch of kapenta in the Cahora Bassa reservoir exceeded 10,000 tonnes in 2003.
Sharks
It is widely believed that there is a breeding colony of Zambezi sharks "trapped" inside the reservoir. As the bull shark is known to travel more than 100 km upstream, this phenomenon does not conflict with existing scientific and biological fact. Usually an ocean-dwelling species, bull sharks are perfectly capable of living in fresh water for their entire lifespan. Local tribes have indeed reported sightings of (and attacks by) this isolated community of sharks, although these have yet to be substantiated with hard evidence.
Technology
Dams
null
632394
https://en.wikipedia.org/wiki/Transit%20of%20Mercury
Transit of Mercury
A transit of Mercury across the Sun takes place when the planet Mercury passes directly between the Sun and a superior planet such as Earth. During a transit, Mercury appears as a tiny black dot moving across the Sun as the planet obscures a small portion of the solar disk. Because of orbital alignments, transits viewed from Earth occur in May or November. The last four such transits occurred on May 7, 2003; November 8, 2006; May 9, 2016; and November 11, 2019. The next will occur on November 13, 2032. A typical transit lasts several hours. Mercury transits are much more frequent than transits of Venus, with about 13 or 14 per century, primarily because Mercury is closer to the Sun and orbits it more rapidly. On June 3, 2014, the Mars rover Curiosity observed the planet Mercury transiting the Sun, marking the first time a planetary transit had been observed from a celestial body besides Earth.
Scientific investigation
The orbit of the planet Mercury lies interior to that of the Earth, and thus it can come into inferior conjunction with the Sun. When Mercury is near a node of its orbit, it passes through the orbital plane of the Earth. If an inferior conjunction occurs as Mercury is passing through its orbital node, the planet can be seen to pass across the disk of the Sun in an event called a transit. Depending on the chord of the transit and the position of the planet Mercury in its orbit, the maximum length of this event is 7h 50m. Transit events are useful for studying the planet and its orbit. Examples of the scientific investigations based on transits of Mercury include measuring the scale of the Solar System; investigating the variability of the Earth's rotation and the tidal acceleration of the Moon; measuring the mass of Venus from secular variations in Mercury's orbit; looking for long-term variations in the solar radius; investigating the black drop effect, including calling into question the purported discovery of the atmosphere of Venus during the 1761 transit; and assessing the likely drop in light level in an exoplanet transit.
Occurrence
Transits of Mercury can only occur when the Earth is aligned with a node of Mercury's orbit. Currently that alignment occurs within a few days of May 8 (descending node) and November 10 (ascending node), with the angular diameter of Mercury being about 12″ for May transits and 10″ for November transits. The average date for a transit moves later over the centuries as a result of Mercury's nodal precession and Earth's axial precession. Transits of Mercury occur on a regular basis. As explained in 1882 by Newcomb, the interval between passages of Mercury through the ascending node of its orbit is 87.969 days, and the interval between the Earth's passages through that same longitude is 365.254 days. Using continued fraction approximations of the ratio of these values, it can be shown that Mercury will make an almost integral number of revolutions about the Sun over intervals of 6, 7, 13, 33, 46, and 217 years (a sketch reproducing this calculation follows below). In 1894 Crommelin noted that at these intervals, the successive paths of Mercury relative to the Sun are consistently displaced northwards or southwards. He noted the displacements as:

Displacements at subsequent transits
  Interval                                May transits    November transits
  After 6 years                           65′ 37″ S       31′ 35″ N
  After 7 years                           48′ 21″ N       23′ 16″ S
  Hence after 13 years (6 + 7)            17′ 16″ S        8′ 19″ N
  After 20 years (6 + 2 × 7)              31′ 05″ N       14′ 57″ S
  After 33 years (2 × 6 + 3 × 7)          13′ 49″ N        6′ 38″ S
  After 46 years (3 × 13 + 7)              3′ 27″ S        1′ 41″ N
  After 217 years (14 × 13 + 5 × 7)        0′ 17″ N        0′ 14″ N

Comparing these displacements with the solar diameter (about 31.7′ in May and 32.4′ in November), the following may be deduced about the interval between transits. For May transits, intervals of 6 and 7 years are not possible. For November transits, an interval of 6 years is possible but rare (the last such pair was 1993 and 1999, with both transits being very close to the solar limb), while an interval of 7 years is to be expected. An interval of 13 years is to be expected for both May and November transits. An interval of 20 years is possible but rare for a May transit, but is to be expected for November transits. An interval of 33 years is to be expected for both May and November transits. A transit having a similar path across the Sun will occur 46 (and 171) years later – for both November and May transits. A transit having an almost identical path across the Sun will occur 217 years later – for both November and May transits. Transits that occur 46 years apart can be grouped into a series. For November transits each series includes about 20 transits over 874 years, with the path of Mercury across the Sun passing further north than for the previous transit. For May transits each series includes about 10 transits over 414 years, with the path of Mercury across the Sun passing further south than for the previous transit. Some authors have allocated a series number to transits on the basis of this 46-year grouping. Similarly, transits that occur 217 years apart can be grouped into a series. For November transits each series would include about 135 transits over 30,000 years. For May transits each series would include about 110 transits over 24,000 years. For both the May and November series, the path of Mercury across the Sun passes further north than for the previous transit. Series numbers have not traditionally been allocated on the basis of the 217-year grouping. Predictions of transits of Mercury covering many years are available at NASA, SOLEX, and Fourmilab.
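Newcomb's continued-fraction argument can be reproduced in a few lines. The illustrative Python sketch below uses only the two interval values quoted above and generates the convergents of their ratio; the denominators are exactly the year counts 6, 7, 13, 33, 46 and 217, while combinations such as 20 = 6 + 2 × 7 account for the remaining rows of the table:

    # Recover the transit-repeat intervals from the two periods quoted
    # above. Each continued-fraction convergent p/q of the ratio means
    # that q Earth years contain almost exactly p revolutions of Mercury.
    P_MERCURY = 87.969    # days, Mercury's return to the same node
    P_EARTH = 365.254     # days, Earth's return to the same longitude

    x = P_EARTH / P_MERCURY
    p_prev, q_prev, p, q = 1, 0, int(x), 1   # convergent recurrence seeds
    frac = x - int(x)
    for _ in range(6):
        a = int(1 / frac)                    # next continued-fraction term
        frac = 1 / frac - a
        p_prev, q_prev, p, q = p, q, a * p + p_prev, a * q + q_prev
        offset_days = abs(q * P_EARTH - p * P_MERCURY)
        print(f"after {q:3d} years: ~{p} revolutions "
              f"(misses the node by {offset_days:.1f} days)")

The output runs from "after 6 years: ~25 revolutions" down to "after 217 years: ~901 revolutions", with the miss shrinking from nearly eight days to about an hour, which is why transits 217 years apart follow almost identical paths.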
Observation
At inferior conjunction, the planet Mercury subtends an angle of only about 10–12″, which, during a transit, is too small to be seen without a telescope. A common observation made at a transit is recording the times when the disk of Mercury appears to be in contact with the limb of the Sun. Those contacts are traditionally referred to as the 1st, 2nd, 3rd and 4th contacts – with the 2nd and 3rd contacts occurring when the disk of Mercury is fully on the disk of the Sun. As a general rule, 1st and 4th contacts cannot be accurately detected, while 2nd and 3rd contacts are readily visible within the constraints of the black drop effect, irradiation, atmospheric conditions, and the quality of the optics being used. Observed contact times for transits between 1677 and 1881 are given in S. Newcomb's analysis of transits of Mercury. Observed 2nd and 3rd contact times for transits between 1677 and 1973 are given in Royal Greenwich Observatory Bulletin No. 181, 359–420 (1975).
Partial
Sometimes Mercury appears only to graze the Sun during a transit. There are two possible scenarios. Firstly, it is possible for a transit to occur such that, at mid-transit, the disk of Mercury has fully entered the disk of the Sun as seen from some parts of the world, while as seen from other parts of the world the disk of Mercury has only partially entered the disk of the Sun.
The transit of November 15, 1999, was such a transit: it was a full transit for most of the world, but only a partial transit for Australia, New Zealand, and Antarctica. The previous such transit was on October 28, 743, and the next will be on May 11, 2391. While these events are very rare, two such transits will occur within three years of each other, in December 6149 and June 6152. Secondly, it is possible for a transit to occur in which, at mid-transit, the disk of Mercury has partially entered the disk of the Sun as seen from some parts of the world, while as seen from other parts of the world Mercury completely misses the Sun. Such a transit last occurred on May 11, 1937, when a partial transit occurred in southern Africa and southern Asia and no transit was visible from Europe and northern Asia. The previous such transit was on October 21, 1342, and the next will be on May 13, 2608. It is not possible for Mercury, at mid-transit, to be seen fully on the solar disk from some parts of the world while completely missing the Sun as seen from others.
History
The first transit of Mercury was observed on November 7, 1631, by Pierre Gassendi. He was surprised by the small size of the planet compared to the Sun. Johannes Kepler had predicted the occurrence of transits of Mercury and Venus in his ephemerides published in 1630. Images of the November 15, 1999 transit from the Transition Region and Coronal Explorer (TRACE) satellite were featured on Astronomy Picture of the Day (APOD) on November 19. Three APODs featured the May 9, 2016 transit.
1832 event
The Shuckburgh telescope of the Royal Observatory, Greenwich in London was used for the 1832 Mercury transit. It was equipped with a micrometer by Dollond and was used to report on the event as seen through the small refractor. By timing the transit and taking micrometer measurements, a diameter for the planet was derived. The observers also reported a peculiar effect that they compared to pressing a coin into the Sun.
1907 event
For the 1907 Mercury transit, telescopes used at the Paris Observatory included a Foucault-Eichens reflector ( aperture), a second Foucault-Eichens reflector ( aperture), a Martin-Eichens reflector ( aperture) and several small refractors. The telescopes were mobile and were placed on the terrace for the observations.
Chronology
The table below includes all historical transits of Mercury from 1605 on:
Physical sciences
Celestial mechanics
Astronomy
632500
https://en.wikipedia.org/wiki/Indian%20prawn
Indian prawn
The Indian prawn (Fenneropenaeus indicus, formerly Penaeus indicus) is one of the major commercial prawn species of the world. It is found in the Indo-West Pacific from eastern and south-eastern Africa, through India, Malaysia and Indonesia, to southern China and northern Australia. Adult shrimp grow to a length of about and live on the seabed to depths of about . The early developmental stages take place in the sea before the larvae move into estuaries. They return to the sea as sub-adults. The Indian prawn is used for human consumption and is the subject of a sea fishery, particularly in China, India, Indonesia, Vietnam and Thailand. It is also the subject of an aquaculture industry, the main countries involved being Saudi Arabia, Vietnam, Iran and India. For this, wild seed is collected or young shrimps are reared in hatcheries, and they are kept in ponds as they grow. The ponds may be either extensive, relying on natural foods (with rice paddy fields being used in India after the monsoon period), or semi-intensive or intensive, with controlled feeding.
Common names
F. indicus is known by many common names around the world, including Indian white prawn, Tugela prawn, white prawn, banana prawn, Indian banana prawn and red leg banana prawn, some of which may also apply to the related species Fenneropenaeus merguiensis. The name white shrimp may also refer to other species.
Ecology and life cycle
F. indicus is a marine decapod with estuarine juveniles. It prefers mud or sandy mud at depths of . It grows to and has a life span of 18 months. After hatching, free-swimming nauplii emerge, which pass through the protozoea and mysis stages and then to the postlarval stage, which resembles the adult prawn. The postlarvae migrate to the estuaries, where they feed and grow until they attain a length of 110–120 mm; these sub-adults then return to the sea and are recruited into the fishery. The species is also commonly used in shrimp farming.
Fisheries and aquaculture
The world's production of shrimp is about 6 million tonnes, of which approximately 3.4 million tonnes is contributed by capture fisheries and 2.4 million tonnes by aquaculture. China and four other Asian countries, including India, Indonesia, Vietnam and Thailand, together account for 55% of the capture fisheries. Among shrimp species, the contribution of F. indicus to global fisheries was around 2.4%, and to global farmed shrimp production 1.2%, in 2005. Currently F. indicus is mainly cultured in Saudi Arabia, Vietnam, the Islamic Republic of Iran and India. Saudi Arabia was the largest producer in 2005 at nearly 11,300 tonnes, with Vietnam not far behind at 10,000 tonnes. In India, F. indicus farming declined from 5,200 tonnes in 2000 to 1,100 tonnes in 2005 due to farmers' preference for P. monodon.
Fishery
In 2010, Greenpeace International added the Indian prawn to its seafood red list. Although the Indian prawn itself is not threatened, the methods used to capture it result in a large amount of bycatch, which includes endangered species such as sea turtles.
Aquaculture
The production cycle of F. indicus follows the same steps as for other species of shrimp, i.e., seed production and grow-out of the post-larvae to marketable size. The sources of seed and the grow-out techniques can be varied as desired by the farmer to achieve a balance between the cost of production and the desired quantity of output.
Supply of seeds
Seeds can be obtained from the wild or from hatcheries.
In traditional paddy field systems, the juveniles which have congregated near the sluice gates are allowed to enter the field with the incoming high tide. Among the prawn species entering the field, F. indicus constitutes around 36%–43%. Earlier, wild seeds were also collected and sold to shrimp farmers. Nowadays the dependence on wild seed has been reduced, owing to the establishment of hatcheries and also to the decline in wild seed caused by overfishing.
Broodstock
Intensification of shrimp culture is limited by seed supply. The production of seeds in hatcheries depends on the availability of broodstock and the quality of spawners. Spawners for seed production can be obtained from the wild or can be developed by induced maturation in hatcheries. Matured individuals can be collected from the wild during the peak spawning seasons in March/April and July/August in the tropics. A temperature range of and a salinity of 30‰–35‰ are ideal for spawning. Although hatcheries in developing countries still depend on wild seed, maturation can be induced by the eyestalk ablation technique, in which the eyestalks of females are unilaterally ablated to stimulate endocrine activity. The ablated females spawn after 4 days, with a peak observed at days 5–6. However, it is expensive to raise spawners in captivity, and ablated shrimps produce less hardy fry with a low survival rate. Even though the fecundity of ablated females may not differ significantly, their hatch rates were found to be markedly lower (37.8% to 58.1%) than those of unablated females (69.2%). It has also been found that wild females are more fecund per unit weight than ablated females. However, quantitatively, the numbers of spawns, eggs and nauplii produced by ablated females are ten, eight and six times, respectively, those of unablated females. The females used for broodstock and spawning should preferably be above and the males above , as they mature at approximately and respectively.
Hatchery
Circular tanks of 2–5 tonnes capacity are used to rear larvae from the nauplius to the mysis stage. The salinity of the water is maintained at around 32‰ and the pH at 8.2. Feed is not provided at the nauplius stage, as it is a non-feeding stage. The protozoea stage is supplied with a mixed culture of diatoms dominated by Chaetoceros spp. or Skeletonema spp. at a concentration of around 30,000 to 40,000 cells per ml. The algal density promoting the highest survival, growth and fastest larval development is around 60–70 cells per μL. From the mysis stage, the larvae are also fed with artemia nauplii and an egg-prawn-custard mix. Post-larval rearing can be continued in the same tank, and post-larvae (PL) are fed with minced mussel meat, mantis shrimp powder or a variety of other fresh feeds of particle size 200–1000 μm till they reach PL-20 (day 20 of the post-larval stage). After the PL-20 stage they can be stocked directly into grow-out ponds without acclimatization.
Grow-out techniques
Grow-out techniques can be extensive, semi-intensive or intensive.
Extensive
This is the traditional system of shrimp farming, which involves stocking of wild seed with incoming tidal water; it is practiced in Bangladesh, India, Indonesia, Myanmar, the Philippines, and Vietnam. On the southwest coast of India, low-lying coastal paddy fields are used for growing a salinity-tolerant variety of paddy called 'pokkali', and shrimp farming is carried out post-monsoon, during November to April. It takes an average of 150–180 days for a single crop to be ready to harvest.
The estimated production of prawn-cum-paddy culture varies from 400 to 1,200 kg/ha for a six-month period. F. indicus forms about 36%–43% of the total yield of shrimp, which can go up to 400–900 kg/ha/yr. Extensive culture can be made more productive by the construction of artificial ponds, the use of aeration, and supplementing with an artificial diet. This can increase the productivity to 871.5 kg/ha/320 days in mixed culture of prawns. Monoculture of F. indicus can yield a net profit of up to Rs. 8,000 (approx. US$180–200) per hectare per annum for two crops.
Semi-intensive
Compared with the traditional type of management, semi-intensive production is on a relatively smaller scale, with 0.2–2 hectare ponds that are also deeper, at 1.0–1.5 m. Stocking densities can range from 20–25 PL/m2, using hatchery-derived seed for monoculture. Natural feeds are grown by the application of fertilizers, and supplementary feeds are also given during the culture at a rate of 4–5 times a day. Water exchange at the rate of 30%–40% is carried out using pumps. Supplementary aeration is also provided, using 4–6 aerators per hectare. A culture period may last from 100–150 days depending on various factors.
Intensive
Intensive farming is a tightly controlled system of farming with very little dependence on natural foods and a high level of mechanization. The ponds are also usually very small (0.1–1 ha), and the stocking density very high (50–100 PL/m2). Water exchange of around 30% daily is essential to avoid degradation of the water due to the high stocking density and feeding rate (5–7 times/day). Production levels of around 10,000–20,000 kg/ha/yr can be achieved. A culture period lasts from 120–140 days.
Harvesting
In traditional farming, harvesting is done by fitting conical nets on the sluice gates and opening them during low tide. The shrimp are trapped in the net as the water recedes. The remaining shrimp are harvested by cast netting. In semi-intensive and intensive practices, harvesting is done by complete draining of the pond. The rest of the shrimp are collected by hand.
Production costs and market value
Production cost depends on the type of culture used, the scale of production, the number of production cycles per year, etc. It is estimated that the seed production cost was US$1.6/1000. The cost of adult shrimp can range from US$4–5/kg. The Indian shrimp has a relatively lower market value than P. monodon. The average price of white shrimp is US$5.5/kg for a size range of 21/25 shrimps per kg, while for P. monodon it is US$7–13/kg. However, as F. indicus is more easily bred and reared, the relative profit gained from F. indicus may be higher per unit of input than the above figures suggest. Traditionally the shrimp are exported head-on, headless, tail-on or frozen in blocks. The profit can be increased by value addition to the shrimp in the form of shrimp pickles, cutlets, battered and ready-to-cook products, etc.
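The intensive-culture figures above can be tied together with simple arithmetic. In the illustrative Python sketch below, the stocking density and price come from the text, while the survival rate and harvest weight are assumptions chosen for illustration (the quoted 21/25-count size implies roughly 40–48 g per shrimp):

    # Rough yield arithmetic for one intensive crop, per hectare.
    STOCKING_PL_PER_M2 = 75     # mid-range of the 50-100 PL/m2 quoted above
    SURVIVAL = 0.50             # assumed fraction of stocked PL harvested
    HARVEST_WEIGHT_KG = 0.044   # assumed ~44 g each, i.e. ~23 shrimp per kg
    PRICE_USD_PER_KG = 5.5      # quoted price for 21/25-count white shrimp

    stocked_per_ha = STOCKING_PL_PER_M2 * 10_000      # 1 ha = 10,000 m2
    yield_kg = stocked_per_ha * SURVIVAL * HARVEST_WEIGHT_KG
    print(f"yield: ~{yield_kg:,.0f} kg/ha per crop")  # ~16,500 kg/ha
    print(f"gross: ~US${yield_kg * PRICE_USD_PER_KG:,.0f}/ha per crop")

The result, about 16,500 kg/ha, falls within the 10,000–20,000 kg/ha/yr production level quoted above, suggesting the assumed survival figure is at least plausible.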
Biology and health sciences
Shrimps and prawns
Animals
633016
https://en.wikipedia.org/wiki/Hevea%20brasiliensis
Hevea brasiliensis
Hevea brasiliensis, the Pará rubber tree, sharinga tree, seringueira, or, most commonly, rubber tree or rubber plant, is a flowering plant belonging to the spurge family, Euphorbiaceae. It is originally native to the Amazon basin, but is now pantropical in distribution due to introductions. It is the most economically important member of the genus Hevea because the milky latex extracted from the tree is the primary source of natural rubber.
Description
Hevea brasiliensis is a tall deciduous tree growing to a height of up to in the wild. Cultivated trees are usually much smaller because drawing off the latex restricts their growth. The trunk is cylindrical and may have a swollen, bottle-shaped base. The bark is some shade of brown, and the inner bark oozes latex when damaged. The leaves have three leaflets and are spirally arranged. The inflorescences include separate male and female flowers. The flowers are pungent, creamy-yellow and have no petals. The fruit is a capsule that contains three large seeds; it opens explosively when ripe.
Rubber tree plantation
In the wild, the tree can reach a height of up to . The white or yellow latex occurs in latex vessels in the bark, mostly outside the phloem. These vessels spiral up the tree in a right-handed helix which forms an angle of about 30 degrees with the horizontal, and can reach as high as . In plantations the trees are generally smaller, for two reasons: 1) trees grow more slowly when they are tapped for latex, and 2) trees are generally cut down after only 30 years, because latex production declines as trees age and they are no longer economically productive. The tree requires a tropical or subtropical climate with a minimum of about per year of rainfall, and no frost. If frost does occur, the results can be disastrous for production. One frost can cause the rubber from an entire plantation to become brittle and break once it has been refined.
Latex tapping
The rubber tree takes between seven and ten years to deliver its first harvest. Harvesters make incisions across the latex vessels, just deep enough to tap the vessels without harming the tree's growth, and the latex is collected in small buckets. This process is known as rubber tapping. Latex production is highly variable from tree to tree and across clone types.
Wood harvesting
As latex production declines with age, rubber trees are generally felled when they reach the age of 25 to 30 years. The earlier practice was to burn the trees, but in recent decades the wood has been harvested for furniture making.
History
The South American rubber tree grew only in the Amazon rainforest, and increasing demand together with the discovery of the vulcanization procedure in 1839 led to the rubber boom in that region, enriching the cities of Belém, Santarém, and Manaus in Brazil and Iquitos, Peru, from 1840 to 1913. In Brazil, before the name was changed to 'seringueira', the initial name of the plant was 'pará rubber tree', derived from the name of the province of Grão-Pará. In Peru, the tree was called 'árbol del caucho', and the latex extracted from it was called 'caucho'. The tree was used to obtain rubber by the native peoples who inhabited its geographical range. The Olmec people of Mesoamerica extracted and produced similar forms of primitive rubber from analogous latex-producing trees such as Castilla elastica as early as 3,600 years ago. The rubber was used, among other things, to make the balls used in the Mesoamerican ballgame. Early attempts were made in 1873 to grow H. brasiliensis outside Brazil.
After some effort, 12 seedlings were germinated at the Royal Botanic Gardens, Kew. These were sent to India for cultivation, but died. A second attempt was then made, some 70,000 seeds being smuggled to Kew in 1875 by Henry Wickham, in the service of the British Empire. About four percent of these germinated, and in 1876 about 2,000 seedlings were sent, in Wardian cases, to Ceylon (modern-day Sri Lanka) and 22 were sent to the botanic gardens in Singapore. Once established outside its native country, rubber was extensively propagated in the British colonies. Rubber trees were brought to the botanical gardens at Buitenzorg, Java, in 1883. By 1898, a rubber plantation had been established in Malaya, with imported Chinese field workers being the dominant work force in rubber production in the early 20th century. The cultivation of the tree in South America (Amazon) ended early in the 20th century because of an indigenous blight that targeted the rubber tree. The blight, called South American leaf blight, is caused by the ascomycete Pseudocercospora ulei, also called Microcyclus ulei or Dothidella ulei, which is endemic to the Amazon Basin. The blight was considered one of the five most aggressive diseases in commercial crops in South America. Rubber production then moved to parts of the world where the tree is not indigenous, and therefore not affected by local plant diseases. Today, most rubber tree plantations are in South and Southeast Asia, the top rubber-producing countries in 2011 being Thailand, Indonesia, Malaysia, India and Vietnam.
Environmental concerns
The toxicity of arsenic to insects, bacteria, and fungi has led to the heavy use of arsenic trioxide on rubber plantations, especially in Malaysia. The majority of the rubber trees in Southeast Asia are clones of varieties highly susceptible to the South American leaf blight, Pseudocercospora ulei. For these reasons, environmental historian Charles C. Mann, in his 2011 book 1493: Uncovering the New World Columbus Created, predicted that the Southeast Asian rubber plantations will be ravaged by the blight in the not-too-distant future, creating a potential calamity for international industry.
Secondary metabolites
Hevea brasiliensis produces cyanogenic glycosides (CGs) as a defense, concentrated in the seeds. (Although effective against other attackers, cyanogenic glycosides are not very effective against fungal pathogens. In rare cases, they are even detrimental. This is the case for the rubber tree, which actually suffers worse from Pseudocercospora ulei when it produces more cyanogenic glycosides. This may be because cyanide inhibits the production of other defensive metabolites. This results in significantly divergent subpopulations with selection for or against cyanogenic glycosides, depending on local likelihoods of fungal or non-fungal pest pressure.) The carbon and nitrogen in CGs are recycled for growth and latex production if needed, and the ease of doing so makes them an attractive nitrogen store - especially if the plant is light-deprived and storage in photosynthesis proteins would thus be unhelpful. The α-hydroxynitriles are likely contained in the cytoplasm. Linamarin is hydrolyzed by an accompanying linamarase, a β-glycosidase. Hevea brasiliensis linamarase does act upon linamarin because it is a monoglucoside, while it does not act upon linustatin because it is a diglucoside - in fact, the production of linustatin inhibits linamarase cleavage of linamarin.
This allows intra-plant, post-synthesis transport of linustatin without risking premature cleavage.
Biology and health sciences
Malpighiales
Plants
633072
https://en.wikipedia.org/wiki/School%20bus
School bus
A school bus is any type of bus owned, leased, contracted to, or operated by a school or school district. It is regularly used to transport students to and from school or school-related activities; the term does not include charter buses or transit buses. Various configurations of school buses are used worldwide; the most iconic examples are the yellow school buses of the United States, which are also found in other parts of the world. In North America, school buses are purpose-built vehicles distinguished from other types of buses by design characteristics mandated by federal and state/provincial regulations. In addition to their distinct paint color (National School Bus Glossy Yellow), school buses are fitted with exterior warning lights (to give them traffic priority) and multiple safety devices.
Design history
19th century
In the second half of the 19th century, many rural areas of the United States and Canada were served by one-room schools. For those students who lived beyond practical walking distance from school, transportation was provided in the form of the kid hack; at the time, "hack" was a term referring to certain types of horse-drawn carriages. Essentially re-purposed farm wagons, kid hacks were open to the elements, with little to no weather protection. In 1892, Indiana-based Wayne Works (later Wayne Corporation) produced its first "school car". A purpose-built design, the school car was constructed with perimeter-mounted wooden bench seats and a roof (the sides remained open). As a horse-drawn wagon, the school car was fitted with a rear entrance door (intended to avoid startling the horses while loading or unloading passengers); over a century later, the design remains in use (as an emergency exit). In 1869, Massachusetts became the first state to add transportation to public education; by 1900, 16 other states would transport students to school.
1900–1930
During the first decades of the twentieth century, several developments would affect the design of the school bus and student transport. As vehicles evolved from horse-drawn to "horseless" propulsion on a wider basis, the wagon bodies of kid hacks and school cars were adapted to truck frames. While transitioning into purpose-built designs, a number of features from wagons were retained, including wood construction, perimeter bench seating, and rear entry doors. Weather protection remained minimal; some designs adopted a tarpaulin stretched above the passenger seating. In 1915, International Harvester constructed its first school bus; today, its successor company Navistar still produces school bus cowled chassis. In 1919, the use of school buses became funded in all 48 US states. In 1927, Ford dealership owner A.L. Luce produced a bus body for a 1927 Ford Model T. In this forerunner of the first Blue Bird school buses, steel was used to panel and frame the bus body; wood was relegated to a secondary material. While fitted with a roof, the primary weather protection of the Luce bus design consisted of roll-up canvas side curtains.
1930s
During the 1930s, school buses saw advances in their design and production that remain in use to this day. To better adapt to automotive chassis design, school bus entry doors were moved from the rear to the front curbside, becoming a door operated by the driver (to ease loading passengers and improve forward visibility). The rear entry door of the kid hacks was re-purposed as an emergency exit.
Following the introduction of the steel-paneled 1927 Luce bus, school bus manufacturing began to transition towards all-steel construction. In 1930, both Superior and Wayne introduced all-steel school buses; the latter introduced safety glass windows for its bus body. As school bus design paralleled the design of light to medium-duty commercial trucks of the time, the advent of forward-control trucks would have its own influence on school bus design. In an effort to gain extra seating capacity and visibility, Crown Coach built its own cabover school bus design from the ground up. Introduced in 1932, the Crown Supercoach seated up to 76 passengers, the highest-capacity school bus of the time. As the 1930s progressed, flat-front school buses began to follow motorcoach design in styling as well as engineering, gradually adopting the term "transit-style" for their appearance. In 1940, the first mid-engined transit-style school bus was produced by Gillig in California.
Developing production standards
The custom-built nature of school buses created an inherent obstacle to their profitable mass production on a large scale. Although school bus design had moved away from the wagon-style kid hacks of the generation before, there was not yet a recognized set of industry-wide standards for school buses. In 1939, rural education expert Dr. Frank W. Cyr organized a week-long conference at Teachers College, Columbia University, that introduced new standards for the design of school buses. Funded by a $5,000 grant, Dr. Cyr invited transportation officials, representatives from body and chassis manufacturers, and paint companies. To reduce the complexity of school bus production and increase safety, a set of 44 standards (covering, for example, interior and exterior dimensions and the forward-facing seating configuration) was agreed upon and adopted by the attendees. Adoption of these standards allowed for greater consistency among body manufacturers, enabling large-scale production of school buses. While many of the standards of the 1939 conference have been modified or updated, one part of its legacy remains a key part of every school bus in North America today: the adoption of a standard paint color for all school buses. While technically named "National School Bus Glossy Yellow", school bus yellow was adopted because it was considered easiest to see at dawn and dusk, and it contrasted well with black lettering. While not universally used worldwide, yellow has become the shade most commonly associated with school buses both in North America and abroad.
1940s
During WWII, school bus manufacturers converted to military production, building buses and license-built trucks for the armed forces. Following the war, school bus operation saw a number of changes, reflecting developments within education systems. With the postwar rise of suburban growth in North America, demand for school busing increased outside of rural areas; in suburbs and larger urban areas, community design often made walking to school impractical beyond a certain distance from home (particularly as students progressed into high school). In all but the most isolated areas, one-room schools from the turn of the century had been phased out in favor of the multi-grade schools introduced in urban areas. In another change, school districts shifted bus operation from buses operated by single individuals to district-owned fleets (operated by district employees).
1950s–1960s
From 1950 to 1982, the baby boomer generation was either in elementary or high school, leading to a significant increase in student populations across North America; this would be a factor directly influencing school bus production for over three decades. During the 1950s, as student populations began to grow, larger school buses began to enter production. To increase seating capacity (extra rows of seats), manufacturers began to produce bodies on heavier-duty truck chassis; transit-style school buses also grew in size. In 1954, the first diesel-engined school bus was introduced, with the first tandem-axle school bus following in 1955 (a Crown Supercoach, expanding seating to 91 passengers). To improve accessibility, at the end of the 1950s manufacturers developed a curbside wheelchair lift option to transport wheelchair-using passengers. In modified form, the design remains in use today. During the 1950s and 1960s, manufacturers also began to develop designs for small school buses, optimized for urban routes with crowded, narrow streets along with rural routes too isolated for a full-size bus. For this role, manufacturers initially used yellow-painted utility vehicles such as the International Travelall and Chevrolet Suburban. As another alternative, manufacturers began using passenger vans, such as the Chevrolet Van/GMC Handi-Van, Dodge A100, and Ford Econoline; along with yellow paint, these vehicles were fitted with red warning lights. While more maneuverable, automotive-based school buses did not offer the reinforced passenger compartment of a full-size school bus.
Structural integrity
During the 1960s, as with standard passenger cars, concerns arose over passenger protection in catastrophic traffic collisions. At the time, the weak point of the body structure was the body joints; where panels and pieces were riveted together, joints could break apart in major accidents, with the bus body causing harm to passengers. After subjecting a bus to a rollover test in 1964, Ward Body Works pointed out in 1969 that fasteners had a direct effect on joint quality (and that body manufacturers were using relatively few rivets and fasteners). In its own research, Wayne Corporation discovered that the body joints themselves were the weak points. In 1973, to reduce the risk of body panel separation, Wayne introduced the Wayne Lifeguard, a school bus body with single-piece body side and roof stampings. While the single-piece stampings seen in the Lifeguard had their own manufacturing challenges, school buses of today use relatively few side panels to minimize body joints.
1970s
During the 1970s, school buses underwent a number of design upgrades related to safety. While many changes were related to protecting passengers, others were intended to minimize the chances of traffic collisions. To decrease confusion over traffic priority (increasing safety around school bus stops), federal and state regulations were amended, with many states/provinces requiring amber warning lamps inboard of the red warning lamps. Similar to a yellow traffic light, the amber lights are activated before stopping (at a distance), indicating to drivers that a school bus is about to stop and unload/load students. Adopted by a number of states during the mid-1970s, amber warning lights became nearly universal equipment on new school buses by the end of the 1980s. To supplement the additional warning lights and to help prevent drivers from passing a stopped school bus, a stop arm was added to nearly all school buses; connected to the wiring of the warning lights, the deployable stop arm extends during a bus stop with its own set of red flashing lights.
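The light-and-stop-arm sequence just described amounts to a small state machine. The Python sketch below is purely illustrative: it models the behavior described in this section, not any actual bus controller or regulatory standard:

    # Illustrative model of the warning sequence described above: amber
    # lamps activate on approach, then the red lamps and stop arm deploy
    # together (the stop arm is wired to the red warning lights).
    from dataclasses import dataclass

    @dataclass
    class WarningGear:
        amber_on: bool = False
        red_on: bool = False
        stop_arm_out: bool = False

    def approaching_stop(gear: WarningGear) -> None:
        gear.amber_on = True       # like a yellow traffic light: about to stop

    def door_opened(gear: WarningGear) -> None:
        gear.amber_on = False
        gear.red_on = True         # traffic must stop while students load
        gear.stop_arm_out = True   # deploys together with the red lights

    def door_closed(gear: WarningGear) -> None:
        gear.red_on = False
        gear.stop_arm_out = False  # retracts; traffic may proceed

Calling approaching_stop and then door_opened leaves the model showing red lights with the stop arm extended, matching the sequence other drivers see at a student stop.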
In the 1970s, school busing expanded further, for controversial reasons: a number of larger cities began to bus students in an effort to racially integrate schools. Out of necessity, the additional usage created further demand for bus production.
Industry safety regulations
From 1939 to 1973, school bus production was largely self-regulated. In 1973, the first federal regulation governing school buses went into effect, as FMVSS 217 became required for school buses; the regulation governed specifications of rear emergency exit doors/windows. Following the focus on school bus structural integrity, NHTSA introduced the four Federal Motor Vehicle Safety Standards for School Buses, which took effect on April 1, 1977, bringing significant change to the design, engineering, and construction of school buses and a substantial improvement in safety performance. While many changes related to the 1977 safety standards were made under the body structure (to improve crashworthiness), the most visible change was to passenger seating. In place of the metal-back passenger seats seen since the 1930s, the regulations introduced taller seats with thick padding on both the front and back, acting as a protective barrier. Further improvement has resulted from continuing efforts by the U.S. National Highway Traffic Safety Administration (NHTSA) and Transport Canada, as well as by the bus industry and various safety advocates. As of 2020 production, all of these standards remain in effect. As manufacturers sought to develop safer school buses, small school buses underwent a transition away from automotive-based vehicles. The introduction of cutaway van chassis allowed bus manufacturers to mate a van cab with a purpose-built bus body, using the same construction as a full-size school bus. Within the same length as a passenger van, buses such as the Wayne Busette and Blue Bird Micro Bird offered additional seating capacity, wheelchair lifts, and the same body construction as larger school buses.
1980s–1990s
For school bus manufacturers, the 1980s marked a period of struggle, following a combination of factors. As the decade began, the end of the baby-boom generation had finished high school; with a decrease in student population growth, school bus manufacturing was left with a degree of overcapacity. Coupled with the recession economy of the early 1980s, the decline in demand for school bus production left several manufacturers in financial ruin. To better secure their future, school bus manufacturers underwent a period of transition during the 1990s, with several ownership changes leading to joint ventures and alignments between body manufacturers and chassis suppliers. In 1986, with the signing of the Commercial Motor Vehicle Safety Act, school bus drivers across the United States became required to hold a commercial driver's license (CDL). While CDLs were issued by individual states, the federal CDL requirement ensured that drivers of all large vehicles (such as school buses) had a consistent training level. In contrast to the 1970s focus on structural integrity, design advances during the 1980s and 1990s focused on the driver.
In 1979 and 1980, International Harvester and Ford each introduced a new-generation bus chassis, with General Motors following suit in 1984. To increase driver visibility, updates in line with the chassis redesigns shifted the bus driver upward, outward, and forward. To decrease driver distraction, interior controls were redesigned with improved ergonomics; automatic transmissions came into wider use, removing the risk of stalling (in hazardous places such as intersections or railroad crossings). Initially introduced during the late 1960s, crossview mirrors came into universal use, improving the view of the blind spots in front of the bus while loading or unloading. To supplement the rear emergency door in an evacuation, manufacturers introduced additional emergency exits during the 1980s, including roof-mounted escape hatches and outward-opening exit windows. Side-mounted exit doors (originally introduced on rear-engine buses) became offered on front-engine and conventional-body buses as a supplemental exit. Alongside safety, body and chassis manufacturers sought to advance the fuel economy of school buses. During the 1980s, diesel engines came into wide use in conventional and small school buses, gradually replacing gasoline-fueled engines. In 1987, International became the first chassis manufacturer to offer diesel engines exclusively, with Ford following suit in 1990. While conventional-style buses remained the most widely produced full-size school buses, interest in forward visibility, higher seating capacity, and a shorter turning radius led to a major expansion of the market share of the transit-style configuration, coinciding with several design introductions in the late 1980s. Following the 1986 introduction of the Wayne Lifestar, the AmTran Genesis, Blue Bird TC/2000, and Thomas Saf-T-Liner MVP would prove far more successful. During the 1990s, small school buses shifted further away from their van-conversion roots. In 1991, Girardin launched the MB-II, combining a single-rear-wheel van chassis with a full cutaway bus body. Following the 1992 redesign of the Ford E-Series and the 1997 launch of the Chevrolet Express/GMC Savana cutaway chassis, other manufacturers followed suit, developing bodies to optimize loading-zone visibility. As manufacturers universally adopted cutaway bodies for single-rear-wheel buses, the use of the Dodge Ram Van chassis was phased out. By 2005, the United States government had banned the use of 15-passenger vans for student transport, leading to the introduction of Multi-Function School Activity Buses (MFSAB). To better protect passengers, MFSABs share the body structure and compartmentalized seating layout of school buses. Not intended (nor allowed) for uses requiring traffic priority, they are not fitted with school bus warning lights or stop arms (nor are they painted school bus yellow).
Manufacturer transitions
In 1980, school buses were manufactured by six body manufacturers (Blue Bird, Carpenter, Superior, Thomas, Ward, Wayne) and three chassis manufacturers (Ford, General Motors, and International Harvester); in California, two manufacturers (Crown and Gillig) built transit-style school buses using proprietary chassis (sold primarily across the West Coast). From 1980 to 2001, all eight bus manufacturers would undergo periods of struggle and ownership changes. In 1980, Ward filed for bankruptcy, reorganizing as AmTran in 1981. The same year, Superior was liquidated by its parent company, closing its doors. Under new management, Superior was split into two manufacturers, with Mid Bus introducing small buses in 1981 and a reorganized Superior producing full-size buses from 1982 to 1985. At the end of 1989, Carpenter filed for bankruptcy, emerging from it in 1990. In 1991, Crown Coach closed its doors for good; Gillig produced its last school bus in 1993. Following several ownership changes, Wayne Corporation was liquidated in 1992; its successor, Wayne Wheeled Vehicles, was closed in 1995. In 2001, Carpenter closed its doors. During the 1990s, as body manufacturers secured their future, family-owned businesses were replaced by subsidiaries as manufacturers underwent mergers, joint ventures, and acquisitions with major chassis suppliers. In 1991, Navistar began its acquisition of AmTran (fully acquiring it in 1995), phasing out the Ward brand name in 1993. In 1992, Blue Bird changed hands for the first of several times. In 1998, Carpenter was acquired by Spartan Motors and Thomas Built Buses was sold to Freightliner; Thomas had been the final major school bus manufacturer operating under family control. Alongside the 1981 introduction of Mid Bus, Corbeil commenced production in Canada and the United States in 1985. Following the second (and final) closure of Superior in 1986, the New Bus Company acquired the rights to its body design, producing buses from 1988 to 1989. In 1991, TAM-USA was formed as a joint venture to produce the TAM 252 A 121. Assembled in Slovenia with final assembly in California, the TAM vehicle was to be the first American-market school bus imported from Europe. In comparison to body manufacturers, chassis suppliers saw a smaller degree of transition. As International Harvester became Navistar International in 1986, the company released an updated bus chassis for 1989; in 1996, it produced its first rear-engine bus chassis since 1973. In late 1996, Freightliner produced its first bus chassis, expanding the field to four chassis manufacturers for the first time since the exit of Dodge in 1977. Ford and General Motors gradually exited cowled-chassis production, with Ford producing its last chassis after 1998; General Motors exited the segment after 2003. Both Ford and GM continue production today, concentrating on cutaway-van chassis.
2000–present
The beginning of the twenty-first century introduced extensive changes to the production of school buses. Though vehicle assembly saw few direct changes, manufacturer consolidation and industry contraction effectively ended the practice of customers selecting body and chassis manufacturers independently. While customer choice was largely ended (as a result of corporate ownership and supply agreements), the decreased complexity paved the way for new product innovations previously thought impossible. During the 2010s, while diesel engines have remained the primary source of power, manufacturers expanded the availability of alternative-fuel vehicles, including CNG, propane, gasoline, and electric-powered buses. At the beginning of the 2000s, manufacturers introduced a new generation of conventional-style school buses, coinciding with the redesign of several medium-duty truck lines. While Ford and General Motors shifted bus production to cutaway chassis, Freightliner and International released new cowled chassis in 2004 and 2005, respectively. In 2003, Blue Bird introduced the Vision conventional; in line with its transit-style buses, the Vision utilized a proprietary chassis (rather than a design derived from a medium-duty truck).
In 2004, Thomas introduced the Saf-T-Liner C2 (derived from the Freightliner M2), with the body designed alongside its chassis (allowing the use of the production Freightliner dashboard). A trait of both the Vision and the C2 (over their predecessors) is improved loading-zone visibility; both vehicles adopted highly sloped hoods and extra glass around the entry door. In 2005, IC introduced a redesigned CE-series to fit the International 3300 chassis; to improve visibility, the windshield was redesigned (eliminating the center post). Between 2004 and 2008, Advanced Energy, an NC-based non-profit created by the NC Utilities Commission, began an effort to move to plug-in hybrid school buses. A business and technical feasibility study demonstrated the benefits, and in 2006, 20 districts awarded a contract, facilitated by Advanced Energy, to IC Bus to produce the buses. Although the buses produced significant benefits, they were slowly discontinued when the hybrid-system manufacturer Enova ran into financial difficulties. In 2011, Lion Bus (later renamed Lion Electric Company) of Saint-Jérôme, Quebec was founded, marking the first entry into the segment in over 20 years by a full-size bus manufacturer. Using a chassis supplied by Spartan Motors, Lion produces conventional-style school buses, and its design features several firsts for school bus production. Along with a 102-inch body width, Lion uses composite body panels in place of steel to resist corrosion. In 2015, Lion introduced the eLion, the first mass-produced school bus with a fully electric powertrain. Small school buses have undergone few fundamental changes to their designs during the 2000s, though the Type B configuration has largely been retired from production. Following the 1998 sale of the General Motors P-chassis to Navistar subsidiary Workhorse, the design began to be phased out in favor of higher-capacity Type A buses. In 2006, IC introduced the BE200 as its first small school bus; a fully cowled Type B, the BE200 shared much of its body with the CE (on a lower-profile chassis). In 2010, IC introduced the AE-series, a cutaway-cab school bus (derived from the International TerraStar). In 2015, the Ford Transit cutaway chassis was introduced (alongside the long-running E350/450); initially sold with a Micro Bird body, the Transit has since been offered through several manufacturers. In 2018, the first bus derived from the Ram ProMaster cutaway chassis was introduced; Collins Bus introduced the Collins Low Floor, the first low-floor school bus (of any configuration).
Manufacturing segment stability
Following the 2001 closure of Carpenter, the manufacturing segment has seen a much lower degree of contraction (with the exception of the 2005 failure of startup manufacturer Liberty Bus). Following the bankruptcy of Corbeil, the company was acquired at the end of 2007 by Collins, which reorganized it as a subsidiary (alongside Mid Bus) and shifted production to its Kansas facilities. The same year, U.S. Bus was reorganized as Trans Tech. In 2008, Starcraft Bus entered the segment, producing school buses on cutaway chassis (a 2011 prototype using a Hino chassis was never produced). In 2009, Blue Bird and Girardin entered into a joint venture named Micro Bird; Girardin develops and produces the Blue Bird small-bus product line in Canada. The 2011 founding of Lion Bus marked the return of bus production to Canada (with the first Canadian-brand full-size buses sold in the United States).
During the 2010s, Collins retired the Mid Bus and Corbeil brands (in 2013 and 2016, respectively).

Safety innovations

During the 2000s, school bus safety saw a number of evolutionary advances. To further improve visibility to other drivers, manufacturers began to replace incandescent lights with LEDs for running lights, turn signals, brake lights, and warning lamps. School bus crossing arms, first introduced in the late 1990s, came into wider use. Electronics took on a new role in school bus operation. To increase child safety and security, alarm systems were developed to prevent children from being left on unattended school buses overnight. To track drivers who illegally pass school buses loading and unloading students, in the 2010s some school buses began to adopt exterior cameras synchronized with the deployment of the exterior stop arms. Onboard GPS tracking devices have taken on a dual role of fleet management and location tracking, allowing for internal management of costs while also alerting waiting parents and students to the real-time location of their bus. Seatbelts in school buses underwent a redesign, with lap-type seatbelts phased out in favor of 3-point seatbelts.

Design overview

According to the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board (NTSB), school buses are the safest type of road vehicle. Between 2013 and 2022, there were 976 fatal school bus accidents, resulting in 1,082 deaths and approximately 132,000 injuries. On average, five fatalities involving school-age children occur on a school bus each year; statistically, a school bus is over 70 times safer than riding to school by car. Many fatalities related to school buses are passengers of other vehicles and pedestrians (only 5% are bus occupants). Since the initial development of consistent school bus standards in 1939, many of the ensuing changes to school buses over the past eight decades have been safety related, particularly in response to more stringent regulations adopted by state and federal governments. Ever since the adoption of yellow as a standard color in 1939, school buses have deliberately integrated the concept of conspicuity into their design. When making student dropoffs or pickups, traffic law gives school buses priority over other vehicles; in order to stop traffic, they are equipped with flashing lights and a stop sign. As a consequence of their size, school buses have a number of blind spots around the outside of the vehicle, which can endanger passengers disembarking a bus or pedestrians standing or walking nearby. To address this safety challenge, a key focus of school bus design is exterior visibility, with the design of bus windows, mirrors, and the windshield optimized for the driver's view. In the case of a collision, the body structure of a school bus is designed with an integral roll cage; as a school bus carries a large number of student passengers, it is designed with several emergency exits to facilitate fast egress. In the United States and Canada, numerous federal and state regulations require school buses to be manufactured as purpose-built vehicles distinct from other buses. In contrast to buses used for public transit, dedicated school buses used for student transport are all of single-deck, two-axle design (multi-axle designs are no longer in use).
Outside of North America, buses utilized for student transport are derived from vehicles used elsewhere in transit systems, including coaches, minibuses, and transit buses.

Types

There are four types of school buses produced by manufacturers in North America. The smallest school buses are designated Type A (the "short bus"); a larger format (bodied on a bare front-engine chassis) is designated Type B. Large school buses include Type C (bodied on a cowled medium-duty truck chassis) and Type D (bodied on a bare "forward control" or "pusher" chassis). Type C buses are the most common design, while Type D buses are the largest vehicles. All school buses are of single-deck design with step entry. In the United States and Canada, bus bodies are restricted to a maximum width of 102 inches, with maximum length governed by state and provincial specifications. Seating capacity is affected by both body length and operator specifications, with the largest designs seating up to 90 passengers.

Other formats

In both public and private education systems, other types of school buses are used for student transport outside of regular route service. Along with their usage, these buses are distinguished from regular yellow school buses by their exterior design. An "activity bus" is a school bus used primarily for transportation related to extracurricular activities rather than home-to-school route service. Depending on individual state/provincial regulations, the bus used for this purpose can be either a regular yellow school bus or a dedicated unit. Dedicated activity buses, while not painted yellow, are fitted with similar interiors as well as the same traffic-control devices for dropping off students (at other schools); however, they cannot be used in regular route service. A Multi-Function School Activity Bus (MFSAB) is a bus intended for use in both the private sector and the educational system. While sharing a body structure with a school bus, an MFSAB is not designed for route service, as it is not fitted with traffic-control devices (i.e., red warning lights and a stop arm), nor is it painted school bus yellow. Within the educational system, the design is primarily used for extracurricular activities requiring bus transportation; in the private sector, the MFSAB is intended as a replacement for 15-passenger vans (no longer legal for child transport in either the public or private sector). Many examples are derived from Type A buses (with derivatives of full-size school buses also offered).

Features

Livery

To specifically identify them as such, purpose-built school buses are painted a specific shade of yellow, designed to optimize their visibility to other drivers. In addition to "School Bus" signage in the front and rear above the window line, school buses are labeled with the name of the operator (school district or bus contractor) and an identification number.

Yellow color

Yellow was adopted as the standard color for North American school buses in 1939. In April of that year, rural education specialist Frank W. Cyr organized a national conference at Columbia University to establish production standards for school buses, including a standard exterior color. The color which became known as "school bus yellow" was selected because black lettering on that specific hue was easiest to see in the semi-darkness of early morning and late afternoon.
Officially, school bus yellow was designated "National School Bus Chrome"; following the removal of lead from the pigment, it was renamed "National School Bus Glossy Yellow". Outside the United States and Canada, the association of yellow with school buses has led to its use on school-use buses around the world (although not necessarily required by government specification). Some areas establishing school transport services have conducted evaluations of American-style yellow school buses; to better suit local climate conditions, other governments have established their own color requirements, favoring other high-visibility colors (such as white or orange).

Retroreflective markings

While its yellow exterior makes it more conspicuous than other vehicles, a school bus can remain hard to see in some low-visibility conditions, including sunrise or sunset, poor weather (in all seasons), and in rural areas. To further improve their visibility to other vehicles, many state and provincial governments (for example, Colorado) require the use of yellow reflective tape on school buses. Marking the length, width, and height of the vehicle, and in some cases identifying it as a school bus, reflective tape makes the vehicle easier to see in low light; it also marks all emergency exits (so rescue personnel can quickly find them in darkness). Other requirements include reflective "School Bus" lettering (or the use of a front-lighted sign). The equivalent requirement in Canada is almost identical; the only difference is that red cannot be used as a retroreflective color.

Safety devices

To comply with federal and state requirements, school buses are equipped with a number of safety devices to prevent accidents and injuries and for purposes of security.

Mirror systems

When driving and when loading/unloading students, a key priority for a school bus driver is maintaining proper sightlines around the vehicle; the blind spots formed by the school bus can be a significant risk to bus drivers and traffic as well as pedestrians. In the United States, a large share of the students killed outside of the school bus are struck not by other vehicles, but by their own bus. To combat this problem, school buses are specified with sophisticated and comprehensive mirror systems. In redesigns of school bus bodies, driver visibility and overall sightlines have become important considerations; in comparison to school buses from the 1980s, school buses from the 2000s have much larger windshields and fewer and/or smaller blind spots.

Emergency exits

For the purposes of evacuation, school buses are equipped with at least one emergency exit (in addition to the main entry door). The rear-mounted emergency exit door is a design feature adopted from horse-drawn wagons (the entrance was rear-mounted to avoid disturbing the horses); in rear-engine school buses, the door is replaced by an exit window mounted above the engine compartment (supplemented by a side-mounted exit door). Additional exits may be located in the roof (roof hatches), in windows, and/or as side emergency exit doors. All are opened by quick-release latches which activate an alarm. The number of emergency exits in a school bus depends on its seating capacity and also varies by individual state/provincial requirements; the most currently installed is eight, on school buses in Kentucky.
Buses that are owned or used by Kentucky school districts require, in addition to the main entry door, a rear exit door (or window, for rear-engine buses), a left-side exit door, four exit windows (two on each side), and two roof-mounted exit hatches. The current Kentucky standards were enacted after 27 people died in the Carrollton bus collision on May 14, 1988, in which a former school bus that had been converted into a church bus was hit head-on by a drunk driver.

Video surveillance

Since the 1990s, video cameras have become common equipment installed inside school buses. As recording technology has transitioned from VHS to digital cameras, school buses have adopted multiple-camera systems, providing surveillance from multiple vantage points. While primarily used to monitor and record passenger behavior, video cameras have also been used in the investigation of accidents involving school buses. On March 28, 2000, a Murray County, Georgia, school bus was hit by a CSX freight train at an unsignaled railroad crossing; three children were killed. The bus driver claimed to have stopped and looked for approaching trains before proceeding across the tracks, as required by law, but the onboard camera recorded that the bus had in fact not stopped and had the AM/FM radio playing. In the 2010s, exterior-mounted cameras synchronized with the stop arms came into use; these cameras photograph vehicles that illegally pass the bus when its stop arm and warning lights are in use (a moving violation).

Restraint systems

In contrast to cars and other light-duty passenger vehicles, school buses are not typically equipped with active restraint systems such as seat belts; whether seat belts should be required has been a topic of controversy. Since the 1970s, school buses have used the concept of compartmentalization as a passive restraint system. During the late 2000s and 2010s, seatbelt design transitioned, with 3-point restraints replacing lap belts. As of 2015, seatbelts are a requirement in at least five states: California, Florida, New Jersey, New York, and Texas; Canada does not require their installation (at the provincial level). Of the states that equip buses with two-point lap seat belts (Florida, Louisiana, New Jersey, and New York), only New Jersey requires seat belt usage by riders; in other states, it is up to the district or operator whether to require riders to use them.

Passive restraints (compartmentalization)

According to the National Highway Traffic Safety Administration (NHTSA), earlier studies of school buses concluded that, because of their size and weight, safety belts were not needed. A bus is larger and heavier than a normal-size passenger vehicle and distributes the force of a crash more evenly; combined with the spacing between the seats and the design and padding of the seats themselves, this was found to prevent serious injuries. This attribute does not carry over to small buses, owing to their lesser size; buses with a GVWR under 10,000 pounds are required to have safety belts. However, recent accidents involving school buses that have caused serious (and sometimes fatal) injuries have led the National Transportation Safety Board to conduct new tests to check the legitimacy of this continued practice.
After completing tests prompted by bus accidents in 2016, the NTSB recommended that newly built buses be equipped with both lap and shoulder harnesses. It also recommended that the 42 states without a seat belt requirement adopt one, and that states which already mandate lap belts add shoulder harnesses.

In 1967 and 1972, as part of an effort to improve crash protection in school buses, UCLA researchers conducted studies that shaped the future of school bus interior design. Using the metal-backed seats then in use as a baseline, several new seat designs were evaluated in crash testing. The researchers concluded that the safest design was a 28-inch-high padded seatback, with seats spaced a maximum of 24 inches apart, using the concept of compartmentalization as a passive restraint. While the UCLA researchers found the compartmentalized seats to be the safest design, they found active restraints (such as seatbelts) to be next in importance for passenger safety. In 1977, FMVSS 222 mandated a change to compartmentalized seats, though the height requirement was lowered to 24 inches. According to the NTSB, the main disadvantage of passive-restraint seats is their lack of protection in side-impact collisions (with larger vehicles) and rollover situations: though students are protected front to back by compartmentalization, the design allows the potential for ejection in other (however rare) crash situations.

Active restraints (seatbelts)

Federal Motor Vehicle Safety Standard (FMVSS) 222 was introduced in 1977, requiring passive restraints and more stringent structural-integrity standards; as part of the legislation, school buses with a gross vehicle weight rating (GVWR) exceeding 10,000 pounds were exempted from seatbelt requirements. In 1987, New York became the first state to require seatbelts on full-size school buses (raising the seat height to 28 inches); the requirement did not mandate their use. In 1992, New Jersey followed suit, becoming the first state to require their use, and remains the only state to do so. Outside of North America, Great Britain mandated seatbelts in 1995 for minibuses used in student transportation. In 2004, California became the first state to require 3-point seatbelts (on small buses; on large buses, in 2005), with Texas becoming the second in 2010. In 2011, FMVSS 222 was revised to improve occupant protection in small (Type A) school buses. Along with requiring 3-point restraints (in place of lap belts), the revision created design standards for their use in full-size school buses. While seatbelts had previously reduced seating capacity by up to one-third, NHTSA recognized new technology that allows seatbelt-equipped seats to carry either three small (elementary-age) children or two larger (high-school-age) children. In October 2013, the National Association of State Directors of Pupil Transportation Services (NASDPTS) stated at the annual NAPT transportation conference that it now fully supports three-point lap-shoulder seat belts on school buses.

CBC Television's The Fifth Estate has been critical of a 1984 Transport Canada study, a head-on crash test of a school bus which suggested that seat belts (at the time, two-point lap belts) would interfere with the compartmentalization passive safety system. This had become "the most widely cited study" in North America, according to U.S.
regulators, and was frequently quoted for decades by school boards and bus manufacturers across the continent as a reason not to install seat belts. Transport Canada has stuck to its stance against installing seat belts on school buses, despite numerous newer studies and actual accidents showing that compartmentalization cannot protect against side impacts, rollovers, and rear-end collisions; injuries in such crashes could have been prevented by three-point seat belts keeping occupants in their seats.

Manufacturing

In 2018, 44,381 school buses were sold in North America (compared to 31,194 in 2010). Approximately 70% of production is of the Type C configuration.

Production (North America)

In the United States and Canada, school buses are currently produced by nine different manufacturers. Four of them (Collins Industries, Starcraft Bus, Trans Tech, and Van Con) specialize exclusively in small buses. Thomas Built Buses and Blue Bird Corporation (the latter through its Micro Bird joint venture with Girardin) produce both small and large buses, while IC Bus and Lion Electric produce full-size buses exclusively. During the 20th century, Canada was home to satellite facilities of several U.S. firms (Blue Bird, Thomas, Wayne), exporting production across North America, with other production imported from the United States. Domestically, Corbeil manufactured full-size and small school buses (1985–2007) and Girardin produced small buses. In 2011, Lion Bus (today, Lion Electric Company/La Compagnie Électrique Lion) was founded as a Quebec-based manufacturer of full-size buses, later shifting development to fully electric vehicles.

Operations

Every year in the United States and Canada, school buses provide an estimated 8 billion student trips between home and school. Each school day in 2015, nearly 484,000 school buses transported 26.9 million children to and from school and school-related activities; over half of the United States K–12 student population is transported by school bus. Outside North America, purpose-built vehicles for student transport are less common; depending on location, students ride to school on transit buses (on school-only routes), coaches, or a variety of other buses. While school bus operations vary widely by location, in the United States and Canada school bus services operate independently of public transport, with their own bus stops and schedules coordinated with school class times.

Licensing

School bus drivers in the United States are required to hold a commercial driver's license (CDL). Full-size school buses are generally considered Class B vehicles; most van-based school buses are considered Class C vehicles. In addition to a standard P (passenger) endorsement, school bus drivers must acquire a separate S (school bus) endorsement; along with a written and driving test, the endorsement requires a background check and a sex-offender registry check.

Loading and unloading

Owing to their seating configuration, school buses have a higher seating capacity than other buses of similar length; a typical full-size school bus can carry from 66 to 90 passengers. In contrast to a transit bus, school buses are equipped with a single entry door at the front of the bus. Several configurations of entry doors are used, including center-hinged (jack-knife) and outward-opening designs. Prior to the 2000s, doors operated manually by the driver were the most common; air- or electric-assisted doors have become nearly universal in current vehicles.
School bus routes are designed with multiple bus stops, allowing for the loading (or unloading) of several students at a time; the stop at school is the only time that the bus loads or unloads all passengers at once. To keep pedestrians out of the blind spot created by the hood (or lower bodywork, on Type D buses), crossing arms are safety devices that extend outward from the front bumper when the bus door is open for loading or unloading. By design, these force passengers (and other pedestrians) to walk several feet forward of the bus (into the view of the driver) before they can cross the road in front of it. In the past, handrails in the entryway posed a potential risk to students: as passengers exited a bus, items such as drawstrings or other loose clothing could be caught and, if the driver was unaware, the bus could pull away with the student caught in the door. To minimize this risk, school bus manufacturers have redesigned handrails and equipment in the stepwell area. In its School Bus Handrail Handbook, the NHTSA described a simple test procedure for identifying unsafe stepwell handrails.

Traffic priority

When loading and unloading students, school buses have the ability to stop traffic, using a system of warning lights and stop arms (a stop sign deployed from the bus to stop traffic when the door is opened). By the mid-1940s, most US states had introduced traffic laws requiring motorists to stop for school buses while children were loading or unloading. The justifications for this protocol were: Children (especially younger ones) have normally not yet developed the mental capacity to fully comprehend the hazards and consequences of street-crossing, and under US tort law a child cannot legally be held accountable for negligence (for the same reason, adult crossing guards are often deployed in walking zones between homes and schools). It is impractical in many cases to avoid children crossing the traveled portions of roadways after leaving a school bus, or to have an adult accompany them. The size of a school bus generally limits visibility for both the children and motorists during loading and unloading.

Since at least the mid-1970s, all US states and Canadian provinces and territories have had some form of school bus traffic stop law; although every jurisdiction requires traffic to stop for a school bus loading and unloading passengers, jurisdictions differ in their requirements of when to stop. Outside North America, a school bus stopping traffic to unload and load children is generally not provided for; instead of giving buses traffic priority, jurisdictions encourage fellow drivers to drive with extra caution around school buses.

Warning lights and stop arms

Around 1946, the first system of traffic warning signal lights on school buses was used in Virginia, consisting of a pair of sealed-beam lights. Instead of colorless glass lenses (as in car headlamps), the warning lamps utilized red lenses. A motorized rotary switch applied power alternately to the red lights mounted at the left and right of the front and rear of the bus, creating a wig-wag effect. Activation was typically through a mechanical switch attached to the door control. However, on some buses (such as Gillig's Transit Coach models and the Kenworth-Pacific School Coach) activation of the roof warning lamp system was through a pressure-sensitive switch on a manually controlled stop paddle lever located to the left of the driver's seat, below the window.
Whenever the pressure was relieved by extending the stop paddle, electric current was applied to the relay. In the 1950s, plastic lenses were developed for the warning lamps, though sealed-beam lamps remained in use into the mid-2000s, when light-emitting diodes (LEDs) came into use. The warning lamps initially used for school buses consisted of four red warning lights. With the adoption of FMVSS 108 in January 1968, four additional lights, termed advance warning lights, were gradually added to school buses; these were amber in color and mounted inboard of the red warning lights. Intended to signal an upcoming stop to drivers, they were wired so that, as the entry door was opened at the stop, they were overridden by the red lights and the stop sign. Although red-and-amber systems were adopted by many states and provinces during the 1970s and 1980s, all-red systems remain in use in some locales, such as Saskatchewan and Ontario in Canada, on older buses in California, and on buses built in Wisconsin before 2005.

The Ontario School Bus Association has challenged the effectiveness of Ontario's all-red eight-light warning system, citing that the use of red for both advance and stop warning signals is subject to driver misinterpretation. The Association claims that many motorists have only a vague understanding of Ontario's school bus stopping laws and that few drivers know it is legal to pass a school bus with its inner (advance) warning lights actuated. Transport Canada's Transportation Development Centre compared the effectiveness of the all-red system to the amber-red system and found that drivers are 21% more likely to safely pass a school bus when presented with amber advance signals instead of red signals. Transport Canada states that amber advance signals are proven to be slightly superior to red signals and recommends that all-red warning signals be replaced by the eight-lamp system in the shortest period possible. After the issue received media attention, a petition was signed to make the switch from all-red to amber advance lights on Ontario school buses; the Ontario Ministry of Transportation (MTO) has not yet provided a plan or timeline for the change.

To aid the visibility of the bus in inclement weather, school districts and school bus operators add flashing strobe lights to the roof of the bus. Some states (for example, Illinois) require strobe lights as part of their local specifications.

During the early 1950s, states began to specify a mechanical stop signal arm which the driver would deploy from the left side of the bus to warn traffic of a stop in progress. The portion of the stop arm protruding in front of traffic was initially a trapezoidal shape with "STOP" painted on it. The U.S. National Highway Traffic Safety Administration's Federal Motor Vehicle Safety Standard No. 131 regulates the specifications of the stop arm as a double-faced, regulation octagonal red stop sign of a specified minimum size, with a white border and an uppercase "STOP" legend. It must be retroreflective and/or equipped with alternately flashing red lights; as an alternative, the legend itself may flash, which is commonly achieved with red LEDs.
FMVSS 131 stipulates that the stop signal arm be installed on the left side of the bus and placed so that, when extended, the arm is perpendicular to the side of the bus, with the top edge of the sign parallel to (and within a specified distance of) a horizontal plane tangent to the bottom edge of the first passenger window frame behind the driver's window, and with the vertical center of the stop signal arm no more than a specified distance from the side of the bus. One stop signal arm is required; a second may also be installed. The second stop arm, when present, is usually mounted near the rear of the bus and is not permitted to bear a "STOP" or any other legend on the side facing forward when deployed. The Canadian standard, defined in Canada Motor Vehicle Safety Standard No. 131, is substantially identical to the U.S. standard. In Alberta and Saskatchewan, the use of stop signal arms is banned under traffic bylaws in multiple cities, on the grounds that they provide a false sense of safety to students by encouraging jaywalking in front of the bus rather than crossing safely at an intersection. These bans have been the subject of public debate in cities such as Regina and Prince Albert.

Environmental impact

As school buses transport students on a much larger scale than cars (one bus carries, on average, the equivalent of 36 separate automobiles), their use reduces pollution in the same manner as carpooling. Through their use of internal-combustion engines, however, school buses are not an emissions-free form of transportation (in comparison to biking or walking). As of 2017, over 95% of school buses in North America are powered by diesel engines. While diesel offers fuel-efficiency and safety advantages over gasoline, diesel exhaust fumes have become a concern because of associated health problems. Since the early-to-mid 2000s, emissions standards for diesel engines have been tightened considerably; a school bus meeting 2017 emissions standards is 60 times cleaner than a school bus from 2002 (and approximately 3,600 times cleaner than a counterpart from 1990). To comply with the upgraded standards and regulations, diesel engines have been redesigned to use ultra-low-sulfur diesel fuel, with selective catalytic reduction becoming a primary emissions-control strategy.

Alternative fuels

Although diesel fuel is most commonly used in large school buses (and even in many smaller ones), alternative fuel systems such as LPG/propane and CNG have been developed to counter the emissions drawbacks that diesel- and gasoline-fueled school buses pose to public health and the environment. The use of propane as a fuel for school buses began in the 1970s, largely as a response to the 1970s energy crisis. Initially produced as conversions of gasoline engines (as both require spark ignition), propane fell out of favor in the 1980s as fuel prices stabilized, coupled with the expanded use of diesel engines. In the late 2000s, propane-fueled powertrains reentered production as emissions regulations began to negatively affect the performance of diesel engines. In 2009, Blue Bird Corporation introduced a version of the Blue Bird Vision powered by an LPG-fueled engine. As of 2018, three manufacturers offer a propane-fueled full-size school bus (Blue Bird, IC, and Thomas), along with Ford and General Motors Type A chassis. Compressed natural gas was first introduced for school buses in the early 1990s (with Blue Bird building its first CNG bus in 1991 and Thomas building its first in 1993).
As of 2018, CNG is offered by two full-size bus manufacturers (Blue Bird and Thomas), along with Ford and General Motors Type A chassis. In a reversal from the 1990s, gasoline-fueled engines made a return to full-size school buses during the 2010s, with Blue Bird introducing a gasoline-fueled Vision for 2016. As of current production, Blue Bird and IC offer gasoline-fueled full-size buses, and gasoline engines are standard equipment in Ford and General Motors Type A chassis. As an alternative, gasoline engines offer simpler emissions equipment (compared to diesel engines) and a widely available fuel infrastructure (a drawback of LPG/CNG vehicles).

Electric school buses

In theory, urban and suburban routes are advantageous for the use of an electric bus, since charging can be done before and after the bus transports students (while the bus is parked). In the early 1990s, several prototype battery-powered buses were developed as conversions of existing school buses, built primarily for research purposes. During the 2000s, school bus electrification shifted towards the development of diesel-electric hybrid school buses. Intended to minimize engine idling while loading and unloading passengers and to increase diesel fuel economy, hybrid school buses failed to gain widespread acceptance; key factors in their market failure were their high price (nearly twice that of a standard diesel school bus) and the complexity of the hybrid system. In the 2010s, school bus electrification shifted from hybrids to fully electric vehicles, with several vehicles entering production. Trans Tech introduced the eTrans prototype in 2011 (based on the Smith Electric Newton cabover truck), later producing the SSTe in 2014, a derivative of the Ford E-450. The first full-size electric school bus was the Lion Bus eLion, introduced in 2015; as of 2018, over 150 examples have been produced. During 2017 and 2018, several body manufacturers introduced prototypes of electric school buses, with electric versions of the Blue Bird All American, Blue Bird Vision, Micro Bird G5 (on the Ford E-450 chassis), IC CE-Series, and Thomas Saf-T-Liner C2 previewing production vehicles. During 2018, Blue Bird, Thomas, and IC introduced prototypes of full-size school buses intended for production; Blue Bird intends to offer electric-powered versions of its entire product line.

Walking and cycling "buses"

Walking buses and bike buses take their names, and some of their principles, from public transport: students travel to school in a group under adult supervision.

Other uses

Outside of student transport itself, the design of the school bus is adapted for a variety of applications; along with newly produced vehicles, conversions of retired school buses see a large range of uses. Qualities desired from school buses include sturdy construction (school buses have an all-steel body and frame), a large seating capacity, and wheelchair-lift capability, among others.

School bus derivatives

Church use

Churches throughout the United States use buses to transport their congregants, both to church services and to events. A wide variety of buses are owned by churches, depending on needs and affordability. Larger buses may often be derived from school buses (newly purchased or second-hand); other churches own minibuses, often equipped with wheelchair lifts.
When school bus derivatives are used, church bus livery is dictated by federal regulations, which require the removal of "School Bus" lettering and the disabling or removal of stop arms and warning lights; in some states, school bus yellow must be painted over entirely. Whether transporting adults or children, church buses are not given traffic priority in most states (Alabama, Arkansas, Kentucky, Tennessee, and Virginia are the only states where a church bus can stop traffic with flashing red lights).

Community outreach

Among vehicles used for community outreach, school bus bodyshells (both new and second-hand) see use as bookmobiles and mobile blood-donation centers (bloodmobiles), among other uses. Both types of vehicles spend long periods of time parked in the same place; to reduce fuel consumption, they often power interior equipment and climate control with an on-board generator in place of the chassis engine. Bookmobiles feature interior shelving for books and library equipment; bloodmobiles feature mobile phlebotomy stations and blood storage.

Inmate transport buses

Larger police agencies may own police buses derived from school bus bodies for a number of purposes. Along with high-capacity buses serving as officer transports (in large-scale deployments), other vehicles derived from buses may have little seating, serving as temporary mobile command centers; these vehicles are built from school bus bodyshells and fitted with agency-specified equipment. Prisoner transport vehicles are high-security vehicles used to transport prisoners: a school bus bodyshell is fitted with a specially designed interior and exterior with secure windows and doors.

Uses of retired school buses

As of 2016, the average age of a school bus in the United States is 9.3 years. School buses can be retired from service due to a number of factors, including vehicle age or mileage, mechanical condition, emissions compliance, or any combination of these. In some states and provinces, school bus retirement is called for at specific age or mileage intervals, regardless of mechanical condition. In recent years, budget concerns in many publicly funded school districts have necessitated keeping school buses in service longer. When a school bus is retired from school use, it can see a wide variety of uses. While the majority are scrapped for parts and recycling (a requirement in some states), better-running examples are put up for sale as surplus vehicles. Second-hand school buses are sold to such entities as churches, resorts, and summer camps; others are exported to Central America, South America, or elsewhere. Other retired school buses are preserved and restored by collectors and bus enthusiasts; collectors and museums have an interest in older and rarer models. Additionally, restored school buses appear alongside other period vehicles in television and film. When a school bus is sold for use outside of student transport, NHTSA regulations require that its identification as a school bus be removed: all school bus lettering must be removed or covered, the exterior must be painted a color other than school bus yellow, and the stop arm(s) and warning lamps must be removed or disabled.

School bus conversions

In retirement, not all school buses live on as transport vehicles; some purchasers instead use the large body and chassis either as a working vehicle or as the basis for a rolling home.
To build a utility vehicle for farms, owners often remove much of the roof and sides, creating a large flatbed or open-bed truck for hauling hay; other farms use unconverted, repainted school buses to transport their workforce. Skoolies are retired school buses converted into recreational vehicles (the term also applies to their owners and enthusiasts). Constructed and customized by their owners, some examples have primitive accommodations, while others rival the features of production RVs. Exteriors vary widely, ranging from the simple removal of school bus lettering, to conservative designs, to the bus equivalent of an art car. An example of a skoolie is Further, a 1939 (and later, 1947) school bus converted by Ken Kesey and the Merry Pranksters, intended for use on cross-country counterculture road trips; both versions of Further were painted with a variety of psychedelic colors and designs.

School bus export

Retired school buses from Canada and the United States are sometimes exported to Africa, Central America, South America, or elsewhere. Used as public transportation between communities, these buses are nicknamed "chicken buses" for both their crowded accommodation and the (occasional) transportation of livestock alongside passengers. To attract passengers (and fares), the yellow buses are often repainted with flamboyant exterior color schemes and modified with chrome exterior trim.

Around the world

Outside the United States and Canada, the usage and design of buses for student transport varies worldwide. In Europe, Asia, and Australia, buses utilized for student transport may be derived from standard transit buses. Alongside differences in body, chassis, engine, and seating design, school buses outside North America differ primarily in their signage, livery, and traffic priority.
Neutrino astronomy
Neutrino astronomy is the branch of astronomy that gathers information about astronomical objects by observing and studying the neutrinos they emit, with the help of neutrino detectors in special Earth-based observatories. It is an emerging field in astroparticle physics, providing insights into the high-energy and non-thermal processes of the universe.

Neutrinos are nearly massless, electrically neutral elementary particles. They are created in certain types of radioactive decay, in nuclear reactions such as those that take place in the Sun or in high-energy astrophysical phenomena, in nuclear reactors, and when cosmic rays hit atoms in the atmosphere. Neutrinos rarely interact with matter (only via the weak nuclear force); they travel at nearly the speed of light in straight lines and pass through large amounts of matter without notable absorption and without being deflected by magnetic fields. Unlike photons, neutrinos rarely scatter along their trajectory; like photons, they are among the most common particles in the universe. Because of this, neutrinos offer a unique opportunity to observe processes that are inaccessible to optical telescopes, such as reactions in the Sun's core: neutrinos created in the Sun's core are barely absorbed, so a large fraction of them escape the Sun and reach the Earth. Neutrinos can also offer much stronger pointing information than charged cosmic-ray particles.

Neutrinos are very hard to detect because of how rarely they interact. To detect neutrinos, scientists have to shield the detectors from cosmic rays, which can penetrate hundreds of meters of rock; neutrinos, on the other hand, can pass through the entire planet without being absorbed, like "ghost particles". For this reason, neutrino detectors are placed many hundreds of meters underground, usually at the bottom of mines. There, a neutrino-detecting liquid such as a chlorine-rich solution is placed; neutrinos react with a chlorine isotope and can create radioactive argon. Gallium-to-germanium conversion has also been used. (A rough estimate of the event rate in such a detector is sketched below.)

The IceCube Neutrino Observatory, completed in 2010 at the South Pole, is the biggest neutrino detector; its thousands of optical sensors, buried throughout a cubic kilometer of deep, ultra-transparent ice, detect the light emitted by charged particles produced when a single neutrino collides with a proton or neutron inside an atom. The resulting nuclear reaction produces secondary particles traveling at high speeds that give off blue light called Cherenkov radiation. Super-Kamiokande in Japan and ANTARES and KM3NeT in the Mediterranean are some other important neutrino detectors. Since neutrinos interact weakly, neutrino detectors must have large target masses (often thousands of tons), and they must use shielding and effective software to remove background signals.

Since neutrinos are very difficult to detect, the only individual bodies that have been studied in this way are the Sun and the supernova SN 1987A, which exploded in 1987. Scientists had predicted that supernova explosions would produce bursts of neutrinos, and such a burst was indeed detected from SN 1987A. In the future, neutrino astronomy promises to probe other aspects of the universe, including coincident gravitational waves, gamma-ray bursts, the cosmic neutrino background, the origins of ultra-high-energy neutrinos, neutrino properties (such as the neutrino mass hierarchy), and dark matter properties.
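The scale of target mass needed can be illustrated with a back-of-envelope rate estimate for a radiochemical chlorine detector of the kind described above. This is only a rough sketch: the flux, cross-section, tank size, and isotopic abundance below are approximate textbook values (roughly those of the Homestake experiment discussed later), quoted for illustration rather than taken from this article.

    # Order-of-magnitude sketch: capture rate in a Homestake-style chlorine detector.
    # All numbers are approximate, illustrative values, not measurements.

    N_A = 6.022e23            # Avogadro's number, mol^-1

    phi_b8 = 5e6              # solar boron-8 neutrino flux at Earth, cm^-2 s^-1 (approx.)
    sigma = 1e-42             # spectrum-averaged capture cross-section on 37Cl, cm^2 (approx.)

    tank_mass_g = 615e6       # ~615 tons of perchloroethylene (C2Cl4)
    molar_mass_g = 165.8      # g/mol for C2Cl4
    cl37_per_molecule = 4 * 0.242   # 4 Cl atoms per molecule, ~24.2% natural 37Cl abundance

    n_targets = tank_mass_g / molar_mass_g * N_A * cl37_per_molecule
    rate_per_day = phi_b8 * sigma * n_targets * 86400
    print(f"expected 37Ar atoms produced per day: {rate_per_day:.1f}")  # roughly one

Even with a tank of several hundred tons, the expected yield is on the order of a single argon atom per day, which is why the produced atoms must be extracted and counted radiochemically.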
Neutrino astronomy is expected to become an integral part of multi-messenger astronomy, complementing gravitational-wave astronomy and traditional telescopic astronomy.

History

Neutrinos were first recorded in 1956 by Clyde Cowan and Frederick Reines in an experiment employing a nearby nuclear reactor as a neutrino source. Their discovery was acknowledged with a Nobel Prize in Physics in 1995. This was followed by the first atmospheric neutrino detections in 1965, made by two groups almost simultaneously. One was led by Frederick Reines, who operated a liquid scintillator (the Case-Witwatersrand-Irvine, or CWI, detector) in the East Rand gold mine in South Africa at a water-depth equivalent of 8.8 km. The other was a Bombay-Osaka-Durham collaboration that operated in the Indian Kolar Gold Fields mine at a water-depth equivalent of 7.5 km. Although the KGF group detected neutrino candidates two months later than Reines' CWI, it was given formal priority for publishing its findings two weeks earlier.

In 1968, Raymond Davis, Jr. and John N. Bahcall successfully detected the first solar neutrinos in the Homestake experiment. Davis and Japanese physicist Masatoshi Koshiba were jointly awarded half of the 2002 Nobel Prize in Physics "for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos"; the other half went to Riccardo Giacconi for corresponding pioneering contributions which led to the discovery of cosmic X-ray sources.

The first generation of undersea neutrino telescope projects began with a 1960 proposal by Moisey Markov "...to install detectors deep in a lake or a sea and to determine the location of charged particles with the help of Cherenkov radiation." The first underwater neutrino telescope began as the DUMAND (Deep Underwater Muon and Neutrino Detector) project. The project began in 1976 and, although it was eventually cancelled in 1995, it acted as a precursor to many of the telescopes of the following decades.

The Baikal Neutrino Telescope is installed in the southern part of Lake Baikal in Russia. The detector is located at a depth of 1.1 km and began surveys in 1980. In 1993, it was the first to deploy three strings to reconstruct muon trajectories, as well as the first to record atmospheric neutrinos underwater.

AMANDA (Antarctic Muon And Neutrino Detector Array) used the 3-km-thick ice layer at the South Pole and was located several hundred meters from the Amundsen-Scott station. Holes 60 cm in diameter were drilled with pressurized hot water, and strings with optical modules were deployed before the water refroze. The initial depth proved insufficient for reconstructing trajectories, owing to the scattering of light on air bubbles. A second group of four strings was added in 1995/96 at a depth of about 2000 m, which was sufficient for track reconstruction. The AMANDA array was subsequently upgraded until January 2000, when it consisted of 19 strings with a total of 667 optical modules at depths between 1500 m and 2000 m. AMANDA would eventually become the predecessor of IceCube in 2005.

An example of an early neutrino detector is the Artyomovsk Scintillation Detector (ASD), located in the Soledar salt mine in Ukraine at a depth of more than 100 m.
It was created in the Department of High-Energy Leptons and Neutrino Astrophysics of the Institute for Nuclear Research of the USSR Academy of Sciences in 1969 to study antineutrino fluxes from collapsing stars in the Galaxy, as well as the spectrum and interactions of cosmic-ray muons with energies up to 10^13 eV. A feature of the detector is a 100-ton scintillation tank with dimensions on the order of the length of an electromagnetic shower with an initial energy of 100 GeV.

21st century

After the decline of DUMAND, the participating groups split into three branches to explore deep-sea options in the Mediterranean Sea. ANTARES was anchored to the sea floor off Toulon on the French Mediterranean coast. It consists of 12 strings, each carrying 25 "storeys" equipped with three optical modules, an electronics container, and calibration devices, down to a maximum depth of 2475 m.

NEMO (NEutrino Mediterranean Observatory) was pursued by Italian groups to investigate the feasibility of a cubic-kilometer-scale deep-sea detector. A suitable site at a depth of 3.5 km, about 100 km off Capo Passero on the southeastern coast of Sicily, was identified. From 2007 to 2011, the first prototyping phase tested a "mini-tower" with four bars, deployed for several weeks near Catania at a depth of 2 km. The second phase, as well as plans to deploy the full-size prototype tower, will be pursued in the KM3NeT framework.

The NESTOR Project was installed in 2004 at a depth of 4 km and operated for one month, until a failure of the cable to shore forced its termination. The data taken nonetheless demonstrated the detector's functionality and provided a measurement of the atmospheric muon flux. The proof of concept will be carried forward in the KM3NeT framework.

The second generation of deep-sea neutrino telescope projects reaches or even exceeds the size originally conceived by the DUMAND pioneers. IceCube, located at the South Pole and incorporating its predecessor AMANDA, was completed in December 2010. It currently consists of 5160 digital optical modules installed on 86 strings at depths of 1450 to 2550 m in the Antarctic ice. KM3NeT in the Mediterranean Sea and the Baikal GVD are in their preparatory/prototyping phases. IceCube instruments 1 km³ of ice. GVD is also planned to cover 1 km³, but at a much higher energy threshold. KM3NeT is planned to cover several km³ and has two components: ARCA (Astroparticle Research with Cosmics in the Abyss) and ORCA (Oscillations Research with Cosmics in the Abyss). Both KM3NeT and GVD have completed at least part of their construction, and it is expected that these two, along with IceCube, will form a global neutrino observatory.

In July 2018, the IceCube Neutrino Observatory announced that it had traced an extremely-high-energy neutrino that hit its Antarctica-based research station in September 2017 back to its point of origin in the blazar TXS 0506+056, located 3.7 billion light-years away in the direction of the constellation Orion. This was the first time a neutrino detector had been used to locate an object in space, and the first time a source of cosmic rays had been identified. In November 2022, the IceCube collaboration reported further significant progress towards identifying the origin of cosmic rays: the observation of 79 neutrinos with energies over 1 TeV originating from the nearby galaxy M77.
These findings in a well-known object are expected to help in studying the galaxy's active nucleus, as well as to serve as a baseline for future observations. In June 2023, astronomers reported using a new technique to detect, for the first time, the release of neutrinos from the galactic plane of the Milky Way galaxy.

Detection methods

Neutrinos interact extremely rarely with matter, so the vast majority of neutrinos pass through a detector without interacting; if a neutrino does interact, it will only do so once. Therefore, to perform neutrino astronomy, large detectors must be used to obtain sufficient statistics. The method of neutrino detection depends on the energy and type of the neutrino. A famous example is that electron antineutrinos can interact with a proton in the detector by inverse beta decay, producing a positron and a neutron. The positron immediately annihilates with an electron, producing two 511 keV photons; the neutron attaches to another nucleus and gives off a gamma ray with an energy of a few MeV (a toy selection based on this signature is sketched below). In general, neutrinos can interact through neutral-current and charged-current interactions. In neutral-current interactions, the neutrino interacts with a nucleus or electron and retains its original flavor. In charged-current interactions, the neutrino is absorbed by the nucleus and produces a lepton corresponding to the neutrino's flavor (\nu_e \rightarrow e^-, \nu_\mu \rightarrow \mu^-, etc.). If the charged products are moving fast enough, they can create Cherenkov light.

To observe neutrino interactions, detectors use photomultiplier tubes (PMTs) to detect individual photons. From the timing of the photons, it is possible to determine the time and place of the neutrino interaction. If the neutrino creates a muon during its interaction, the muon travels in a line, creating a "track" of Cherenkov photons. The data from this track can be used to reconstruct the direction of the muon. For high-energy interactions, the neutrino and muon directions are nearly the same, so it is possible to tell where the neutrino came from; this pointing ability is important in neutrino astronomy beyond the Solar System. Along with time, position, and possibly direction, it is possible to infer the energy of the neutrino from the interaction: the number of photons emitted is related to the neutrino energy, and the neutrino energy is important for measuring the fluxes of solar and geo-neutrinos.

Because neutrino interactions are so rare, it is important to maintain a low background. For this reason, most neutrino detectors are constructed under a rock or water overburden. This overburden shields against most cosmic rays in the atmosphere; only some of the highest-energy muons are able to penetrate to the depths of the detectors. Detectors must include ways of rejecting muon events so as not to confuse them with neutrinos. Along with more sophisticated measures, if a muon track is first detected outside the desired "fiducial" volume, the event is treated as a muon and discarded; ignoring events outside the fiducial volume also reduces the signal from radiation outside the detector. Despite shielding efforts, it is inevitable that some background will make it into the detector, often in the form of radioactive impurities within the detector itself. When it is impossible to differentiate between background and true signal, a Monte Carlo simulation must be used to model the background.
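The delayed-coincidence signature of inverse beta decay described above (a prompt positron signal followed, some hundreds of microseconds later, by a delayed neutron-capture gamma ray near 2.2 MeV) lends itself to a simple selection algorithm. The following Python sketch is a toy illustration only: real analyses apply many additional cuts (energy calibration, position reconstruction, fiducial-volume selection), and the thresholds, energy window, and time window below are illustrative assumptions rather than values from any specific experiment.

    # Toy delayed-coincidence filter for inverse-beta-decay candidates.
    # Assumes the hit list is sorted by time; all cuts are illustrative.

    from dataclasses import dataclass

    @dataclass
    class Hit:
        t_us: float     # hit time, microseconds
        e_mev: float    # reconstructed energy, MeV

    def find_ibd_pairs(hits, window_us=500.0):
        """Pair a prompt positron-like signal with a delayed ~2.2 MeV
        neutron-capture gamma occurring within window_us."""
        pairs = []
        for i, prompt in enumerate(hits):
            # prompt signal: positron kinetic energy plus 2 x 0.511 MeV annihilation gammas
            if prompt.e_mev < 1.0:
                continue
            for delayed in hits[i + 1:]:
                dt = delayed.t_us - prompt.t_us
                if dt > window_us:
                    break               # hits are time-ordered, so stop searching
                if 1.9 <= delayed.e_mev <= 2.5:   # ~2.22 MeV capture gamma on hydrogen
                    pairs.append((prompt, delayed))
        return pairs

    events = [Hit(0.0, 2.8), Hit(210.0, 2.2),   # an IBD-like pair (capture after ~200 us)
              Hit(9000.0, 1.4)]                  # an uncorrelated single, rejected
    print(find_ibd_pairs(events))

The power of the method is that uncorrelated background hits only rarely mimic both the prompt and delayed signals in the right order, at the right energies, and within the short coincidence window.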
Even when it is unknown whether an individual event is background or signal, it is possible to detect an excess above the expected background, signifying the existence of the desired signal.

Applications

When astronomical bodies such as the Sun are studied using light, only the surface of the object can be directly observed. Any light produced in the core of a star will interact with gas particles in the star's outer layers, taking hundreds of thousands of years to reach the surface, making it impossible to observe the core directly. Since neutrinos are also created in the cores of stars (as a result of stellar fusion), the core can be observed using neutrino astronomy. Other sources of neutrinos, such as those released by supernovae, have also been detected. Several neutrino experiments have formed the Supernova Early Warning System (SNEWS), in which they search for an increase in neutrino flux that could signal a supernova event. There are currently goals to detect neutrinos from other sources, such as active galactic nuclei (AGN), as well as gamma-ray bursts and starburst galaxies. Neutrino astronomy may also indirectly detect dark matter.

Supernova warning

Seven neutrino experiments (Super-K, LVD, IceCube, KamLAND, Borexino, Daya Bay, and HALO) work together as the Supernova Early Warning System (SNEWS). In a core-collapse supernova, ninety-nine percent of the energy released will be in neutrinos. While photons can be trapped in the dense supernova for hours, neutrinos are able to escape on the order of seconds. Since neutrinos travel at roughly the speed of light, they can reach Earth before the photons do. If two or more SNEWS detectors observe a coincident increase in neutrino flux, an alert is sent to professional and amateur astronomers to be on the lookout for supernova light. Using the distances between detectors and the time differences between detections, the alert can also include directional information about the supernova's location in the sky.

Stellar processes

The Sun, like other stars, is powered by nuclear fusion in its core. The solar interior is so large and opaque that photons produced in the core take a very long time to diffuse outward; neutrinos are therefore the only way to obtain real-time data about the nuclear processes in the Sun. There are two main processes for stellar nuclear fusion. The first is the proton-proton (PP) chain, in which protons are fused together into helium, sometimes temporarily creating the heavier elements lithium, beryllium, and boron along the way. The second is the CNO cycle, in which carbon, nitrogen, and oxygen are fused with protons and then undergo alpha decay (helium-nucleus emission) to begin the cycle again. The PP chain is the primary process in the Sun, while the CNO cycle is more dominant in stars more massive than the Sun. Each step in these processes has an allowed energy spectrum for the neutrino (or a discrete energy, for electron-capture processes), and the relative rates of the Sun's nuclear processes can be determined by observing its flux at different energies (the overall energy bookkeeping of the PP chain is sketched below). This sheds light on the Sun's properties, such as its metallicity, the abundance of heavier elements in its composition.

Borexino is one of the detectors studying solar neutrinos. In 2018, the collaboration found 5σ significance for the existence of neutrinos from the fusion of two protons with an electron (pep neutrinos). In 2020, it found the first evidence of CNO neutrinos in the Sun.
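The energy bookkeeping of the PP chain referenced above can be written compactly. The figures below are standard textbook values, quoted here for illustration rather than taken from this article:

    % Net result of the proton-proton chain (standard textbook values):
    4\,p \;\longrightarrow\; {}^{4}\mathrm{He} + 2\,e^{+} + 2\,\nu_e,
    \qquad Q \simeq 26.7\ \mathrm{MeV}.
    % Only about 2% of Q is carried away by the neutrinos on average; the rest
    % diffuses outward as heat and light over the hundreds of thousands of
    % years noted above.
    %
    % The pep reaction detected by Borexino is a rarer branch of the chain:
    p + e^{-} + p \;\longrightarrow\; {}^{2}\mathrm{H} + \nu_e,
    \qquad E_\nu \simeq 1.44\ \mathrm{MeV}\ \text{(monoenergetic line)}.

Because the pep neutrino is emitted in a two-body final state, its energy is a sharp line rather than a continuous spectrum, which is what makes it identifiable in the measured flux.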
Improvements on the CNO measurement will be especially helpful in determining the Sun's metallicity. Composition and structure of Earth The interior of Earth contains radioactive elements such as ^{40}K and the decay chains of ^{238}U and ^{232}Th. These elements decay via beta decay, which emits an antineutrino. The energies of these antineutrinos depend on the parent nucleus. Therefore, by detecting the antineutrino flux as a function of energy, the relative abundances of these elements can be obtained and a limit set on the total power output of Earth's georeactor. Most current data about the core and mantle of Earth comes from seismic measurements, which provide no information about the nuclear composition of these layers. Borexino has detected these geoneutrinos through the process \bar{\nu}_e + p \longrightarrow e^+ + n. The resulting positron immediately annihilates with an electron and produces two gamma rays, each with an energy of 511 keV (the rest energy of an electron). The neutron will later be captured by another nucleus, which leads to a 2.22 MeV gamma ray as the nucleus de-excites. This capture takes, on average, on the order of 256 microseconds. By searching for time and spatial coincidence of these gamma rays, the experimenters can be confident there was an event. Using over 3,200 days of data, Borexino used geoneutrinos to place constraints on the composition and power output of the mantle. They found that the ratio of ^{238}U to ^{232}Th is consistent with that of chondritic meteorites. The power output from uranium and thorium in Earth's mantle was found to be 14.2–35.7 TW at a 68% confidence interval. Neutrino tomography also provides insight into the interior of Earth. For neutrinos with energies of a few TeV, the interaction probability becomes non-negligible when passing through Earth. The interaction probability depends on the number of nucleons the neutrino passes along its path, which is directly related to density. If the initial flux is known (as it is in the case of atmospheric neutrinos), then detecting the final flux provides information about the interactions that occurred. The density can then be extrapolated from knowledge of these interactions. This can provide an independent check on the information obtained from seismic data. In 2018, one year's worth of IceCube data was evaluated to perform neutrino tomography. The analysis studied upward-going muons, which provide both the energy and directionality of the neutrinos after passing through the Earth. A model of Earth with five layers of constant density was fit to the data, and the resulting density agreed with seismic data. The values determined for the total mass of Earth, the mass of the core, and the moment of inertia all agree with seismic and gravitational data. With the current data, the uncertainties on these values are still large, but future data from IceCube and KM3NeT will place tighter constraints on them. High-energy astrophysical events Neutrinos can either be primary cosmic rays (astrophysical neutrinos) or be produced from cosmic ray interactions. In the latter case, the primary cosmic ray produces pions and kaons in the atmosphere. As these hadrons decay, they produce neutrinos (called atmospheric neutrinos). At low energies, the flux of atmospheric neutrinos is many times greater than that of astrophysical neutrinos. At high energies, the pions and kaons have a longer lifetime (due to relativistic time dilation).
The hadrons are then more likely to interact before they decay. Because of this, the astrophysical neutrino flux will dominate at high energies (~100 TeV). To perform neutrino astronomy of high-energy objects, experiments rely on the highest-energy neutrinos. To perform astronomy of distant objects, good angular resolution is required. Neutrinos are electrically neutral and interact weakly, so they travel mostly unperturbed in straight lines. If a neutrino interacts within a detector and produces a muon, the muon leaves an observable track. At high energies, the neutrino direction and muon direction are closely correlated, so it is possible to trace back the direction of the incoming neutrino. These high-energy neutrinos are either primary or secondary cosmic rays produced by energetic astrophysical processes. Observing neutrinos could provide insights into these processes beyond what is observable with electromagnetic radiation. In the case of the neutrino detected from a distant blazar, multi-wavelength astronomy was used to show spatial coincidence, confirming the blazar as the source. In the future, neutrinos could be used to supplement electromagnetic and gravitational observations, leading to multi-messenger astronomy.
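The statement that astrophysical neutrinos overtake the steeply falling atmospheric flux at high energies can be illustrated with a toy power-law comparison. The spectral indices below are typical textbook values, and the normalizations are invented purely for illustration:

    import numpy as np

    # Toy spectra: atmospheric neutrinos fall roughly as E^-3.7, while
    # astrophysical neutrinos follow a harder spectrum close to E^-2.
    E = np.logspace(3, 7, 400)                 # energy grid in GeV (1 TeV to 10 PeV)
    atmospheric = 1.0 * (E / 1e3) ** -3.7      # hypothetical normalization
    astrophysical = 1e-4 * (E / 1e3) ** -2.0   # hypothetical normalization
    crossover = E[np.argmin(np.abs(np.log(atmospheric / astrophysical)))]
    print(f"hard spectrum dominates above roughly {crossover / 1e3:.0f} TeV")

With these assumed normalizations the crossover lands at a few hundred TeV; the real crossover energy depends on the measured fluxes.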
Physical sciences
Basics
Astronomy
633593
https://en.wikipedia.org/wiki/Carbon%20steel
Carbon steel
Carbon steel is a steel with carbon content from about 0.05 up to 2.1 percent by weight. The definition of carbon steel from the American Iron and Steel Institute (AISI) states: no minimum content is specified or required for chromium, cobalt, molybdenum, nickel, niobium, titanium, tungsten, vanadium, zirconium, or any other element to be added to obtain a desired alloying effect; the specified minimum for copper does not exceed 0.40%; and the specified maximum for any of the following elements does not exceed: manganese 1.65%, silicon 0.60%, and copper 0.60%. As the carbon content rises, steel can be made harder and stronger through heat treating; however, it becomes less ductile. Regardless of the heat treatment, a higher carbon content reduces weldability. In carbon steels, a higher carbon content also lowers the melting point. The term may also be used for steel that is not stainless steel; in this use carbon steel may include alloy steels. High-carbon steel has many uses, such as milling machine components, cutting tools (such as chisels) and high-strength wires. These applications require a much finer microstructure, which improves toughness. Properties Carbon steel is often divided into two main categories: low-carbon steel and high-carbon steel. It may also contain other elements, such as manganese, phosphorus, sulfur, and silicon, which can affect its properties. Carbon steel can be easily machined and welded, making it versatile for various applications. It can also be heat treated to improve its strength, hardness, and durability. Carbon steel is susceptible to rust and corrosion, especially in environments with high moisture levels and/or salt. It can be shielded from corrosion by coating it with paint, varnish, or other protective material. Alternatively, a stainless steel alloy containing chromium can be used instead, which provides excellent corrosion resistance. Carbon steel can be alloyed with other elements to improve its properties, such as by adding chromium and/or nickel to improve its resistance to corrosion and oxidation, or adding molybdenum to improve its strength and toughness at high temperatures. It is an environmentally friendly material, as it is easily recyclable and can be reused in various applications. It is energy-efficient to produce, requiring less energy than other metals such as aluminium and copper. Type Mild or low-carbon steel Mild steel (iron containing a small percentage of carbon, strong and tough but not readily tempered), also known as plain-carbon steel and low-carbon steel, is now the most common form of steel because its price is relatively low while it provides material properties that are acceptable for many applications. Mild steel contains approximately 0.05–0.30% carbon, making it malleable and ductile. Mild steel has a relatively low tensile strength, but it is cheap and easy to form. Surface hardness can be increased with carburization. The density of mild steel is approximately 7.85 g/cm3 and its Young's modulus is about 200 GPa. Low-carbon steels display yield-point runout, where the material has two yield points. The first yield point (or upper yield point) is higher than the second, and the yield drops dramatically after the upper yield point. If a low-carbon steel is stressed only to some point between the upper and lower yield points, the surface develops Lüders bands. Low-carbon steels contain less carbon than other steels and are easier to cold-form, making them easier to handle.
Typical applications of low-carbon steel are car parts, pipes, construction, and food cans. High-tensile steel High-tensile steels are low-carbon steels, or steels at the lower end of the medium-carbon range, which have additional alloying ingredients in order to increase their strength, wear properties or specifically tensile strength. These alloying ingredients include chromium, molybdenum, silicon, manganese, nickel, and vanadium. Impurities such as phosphorus and sulfur have their maximum allowable content restricted. 41xx steel 4140 steel 4145 steel 4340 steel 300M steel EN25 steel – 2.5% nickel-chromium-molybdenum steel EN26 steel Higher-carbon steels Carbon steels which can successfully undergo heat treatment have a carbon content in the range of 0.30–1.70% by weight. Trace impurities of various other elements can significantly affect the quality of the resulting steel. Trace amounts of sulfur in particular make the steel red-short, that is, brittle and crumbly at high working temperatures. Low-alloy carbon steel, such as A36 grade, contains about 0.05% sulfur and melts around 1,426–1,538 °C. Manganese is often added to improve the hardenability of low-carbon steels. These additions turn the material into a low-alloy steel by some definitions, but AISI's definition of carbon steel allows up to 1.65% manganese by weight. There are two types of higher-carbon steels: high-carbon steel and ultra-high-carbon steel. The use of high-carbon steel is limited because it has extremely poor ductility and weldability and a higher cost of production. High-carbon steels are best suited to use in the spring and farm industries and in the production of a wide range of high-strength wires. AISI classification The following classification method is based on the American AISI/SAE standard. Other international standards include DIN (Germany), GB (China), BS/EN (UK), AFNOR (France), UNI (Italy), SS (Sweden), UNE (Spain), JIS (Japan), ASTM standards, and others. Carbon steel is broken down into four classes based on carbon content: Low-carbon steel Low-carbon steel (plain carbon steel) has 0.05 to 0.15% carbon content. Medium-carbon steel Medium-carbon steel has approximately 0.3–0.5% carbon content. It balances ductility and strength and has good wear resistance. It is used for large parts, forging and automotive components. High-carbon steel High-carbon steel has approximately 0.6 to 1.0% carbon content. It is very strong, and is used for springs, edged tools, and high-strength wires. Ultra-high-carbon steel Ultra-high-carbon steel has approximately 1.25–2.0% carbon content. Such steels can be tempered to great hardness and are used for special purposes such as (non-industrial-purpose) knives, axles, and punches. Most steels with more than 2.5% carbon content are made using powder metallurgy. Heat treatment The purpose of heat treating carbon steel is to change the mechanical properties of the steel, usually ductility, hardness, yield strength, or impact resistance. Note that the electrical and thermal conductivity are only slightly altered. As with most strengthening techniques for steel, Young's modulus (elasticity) is unaffected. All treatments of steel trade ductility for increased strength and vice versa. Iron has a higher solubility for carbon in the austenite phase; therefore all heat treatments, except spheroidizing and process annealing, start by heating the steel to a temperature at which the austenitic phase can exist.
The steel is then quenched (heat drawn out) at a moderate to low rate, allowing carbon to diffuse out of the austenite and form iron carbide (cementite) while leaving ferrite, or at a high rate, trapping the carbon within the iron and thus forming martensite. The rate at which the steel is cooled through the eutectoid temperature (about 727 °C) affects the rate at which carbon diffuses out of austenite and forms cementite. Generally speaking, cooling swiftly will leave the iron carbide finely dispersed and produce a fine-grained pearlite, while cooling slowly will give a coarser pearlite. Cooling a hypoeutectoid steel (less than 0.77 wt% C) results in a lamellar-pearlitic structure of iron carbide layers with α-ferrite (nearly pure iron) between them. If it is a hypereutectoid steel (more than 0.77 wt% C), then the structure is full pearlite with small grains (larger than the pearlite lamellae) of cementite formed on the grain boundaries. A eutectoid steel (0.77% carbon) will have a pearlite structure throughout the grains with no cementite at the boundaries. The relative amounts of constituents are found using the lever rule. The following is a list of the types of heat treatments possible: Spheroidizing Spheroidite forms when carbon steel is heated to approximately 700 °C, just below the eutectoid temperature, for over 30 hours. Spheroidite can form at lower temperatures, but the time needed drastically increases, as this is a diffusion-controlled process. The result is a structure of rods or spheres of cementite within a primary structure (ferrite or pearlite, depending on which side of the eutectoid composition the steel lies). The purpose is to soften higher-carbon steels and allow more formability. This is the softest and most ductile form of steel. Full annealing A hypoeutectoid carbon steel (carbon composition smaller than the eutectoid one) is heated to approximately 40 °C above the austenitic temperature (A3), whereas a hypereutectoid steel is heated to a temperature above the eutectoid one (A1), for a certain number of hours; this ensures all the ferrite transforms into austenite (although cementite might still exist in hypereutectoid steels). The steel must then be cooled slowly, in the realm of 20 °C (36 °F) per hour. Usually it is just furnace cooled, where the furnace is turned off with the steel still inside. This results in a coarse pearlitic structure, which means the "bands" of pearlite are thick. Fully annealed steel is soft and ductile, with no internal stresses, which is often necessary for cost-effective forming. Only spheroidized steel is softer and more ductile. Process annealing A process used to relieve stress in a cold-worked carbon steel with less than 0.3% C. The steel is usually heated to 550–650 °C for 1 hour, but sometimes temperatures as high as 700 °C are used. Isothermal annealing A process in which hypoeutectoid steel is heated above the upper critical temperature. This temperature is maintained for a time and then reduced to below the lower critical temperature and is again maintained. The steel is then cooled to room temperature. This method eliminates any temperature gradient. Normalizing Carbon steel is heated to just above its upper critical temperature for 1 hour; this ensures the steel completely transforms to austenite. The steel is then air-cooled, giving a cooling rate on the order of tens of degrees Celsius per minute. This results in a fine, more uniform pearlitic structure. Normalized steel has a higher strength than annealed steel, with relatively high hardness.
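As a concrete illustration of the lever rule mentioned above, the phase fractions of a slowly cooled hypoeutectoid steel follow from the eutectoid composition (0.77 wt% C) and the carbon solubility limit of ferrite (about 0.022 wt% C); the 0.40 wt% C alloy below is an arbitrary example composition:

    # Lever-rule estimate of constituent fractions in a slowly cooled
    # hypoeutectoid steel (illustrative composition of 0.40 wt% C).
    C_ALLOY = 0.40       # overall carbon content, wt%
    C_FERRITE = 0.022    # max carbon solubility in alpha-ferrite, wt%
    C_EUTECTOID = 0.77   # eutectoid composition, wt%

    # Fractions of proeutectoid ferrite and pearlite just below 727 C:
    f_ferrite = (C_EUTECTOID - C_ALLOY) / (C_EUTECTOID - C_FERRITE)
    f_pearlite = 1.0 - f_ferrite
    print(f"proeutectoid ferrite: {f_ferrite:.0%}, pearlite: {f_pearlite:.0%}")

For this example the lever rule gives roughly half proeutectoid ferrite and half pearlite.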
Quenching Carbon steel with at least 0.4 wt% C is heated to normalizing temperatures and then rapidly cooled (quenched) in water, brine, or oil to the critical temperature. The critical temperature depends on the carbon content; as a general rule, it is lower for higher carbon contents. This results in a martensitic structure, a form of steel that possesses a supersaturated carbon content in a deformed body-centered cubic (BCC) crystalline structure, properly termed body-centered tetragonal (BCT), with much internal stress. Thus quenched steel is extremely hard but brittle, usually too brittle for practical purposes. These internal stresses may cause stress cracks on the surface. Quenched steel is approximately three times harder (four times with more carbon) than normalized steel. Martempering (marquenching) Martempering is not actually a tempering procedure, hence the term marquenching. It is a form of isothermal heat treatment applied after an initial quench, typically in a molten salt bath, at a temperature just above the "martensite start temperature". At this temperature, residual stresses within the material are relieved and some bainite may be formed from the retained austenite which did not have time to transform into anything else. In industry, this is a process used to control the ductility and hardness of a material. With longer marquenching, the ductility increases with a minimal loss in strength; the steel is held in this solution until the inner and outer temperatures of the part equalize. Then the steel is cooled at a moderate speed to keep the temperature gradient minimal. Not only does this process reduce internal stresses and stress cracks, but it also increases impact resistance. Tempering This is the most common heat treatment encountered, because the final properties can be precisely determined by the temperature and time of the tempering. Tempering involves reheating quenched steel to a temperature below the eutectoid temperature and then cooling. The elevated temperature allows very small amounts of spheroidite to form, which restores ductility but reduces hardness. Actual temperatures and times are carefully chosen for each composition. Austempering The austempering process is the same as martempering, except the quench is interrupted and the steel is held in the molten salt bath at temperatures in the bainite-forming range, and then cooled at a moderate rate. The resulting microstructure, called bainite, is acicular and gives the steel great strength (though less than martensite), greater ductility, higher impact resistance, and less distortion than martensitic steel. The disadvantage of austempering is that it can be used on only a few steels, and it requires a special salt bath. Case hardening Case hardening processes harden only the exterior of the steel part, creating a hard, wear-resistant skin (the "case") while preserving a tough and ductile interior. Carbon steels are not very hardenable, meaning they cannot be hardened throughout thick sections. Alloy steels have better hardenability, so they can be through-hardened and do not require case hardening. This property of carbon steel can be beneficial, because it gives the surface good wear characteristics while leaving the core flexible and shock-absorbing.
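The AISI carbon ranges quoted earlier in this article lend themselves to a simple lookup. A sketch (the thresholds are the approximate boundaries stated above, and the function name is illustrative):

    def classify_carbon_steel(wt_pct_carbon: float) -> str:
        """Rough AISI-style class from carbon content in wt%, using the
        approximate ranges quoted in this article (boundaries indicative)."""
        if wt_pct_carbon < 0.3:
            return "low-carbon steel"
        elif wt_pct_carbon < 0.6:
            return "medium-carbon steel"
        elif wt_pct_carbon < 1.25:
            return "high-carbon steel"
        elif wt_pct_carbon <= 2.0:
            return "ultra-high-carbon steel"
        return "beyond typical carbon-steel range (powder-metallurgy steels)"

    print(classify_carbon_steel(0.45))  # -> medium-carbon steel

Note that the published ranges leave small gaps (e.g. between 1.0% and 1.25%); the sketch simply treats each stated upper bound as the class boundary.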
Physical sciences
Iron alloys
Chemistry
633601
https://en.wikipedia.org/wiki/Tool%20steel
Tool steel
Tool steel is any of various carbon steels and alloy steels that are particularly well-suited to be made into tools and tooling, including cutting tools, dies, hand tools, knives, and others. Their suitability comes from their distinctive hardness, resistance to abrasion and deformation, and their ability to hold a cutting edge at elevated temperatures. As a result, tool steels are suited for use in the shaping of other materials, as for example in cutting, machining, stamping, or forging. Tool steels have a carbon content between 0.4% and 1.5%. The presence of carbides in their matrix plays the dominant role in the qualities of tool steel. The four major alloying elements that form carbides in tool steel are tungsten, chromium, vanadium and molybdenum. The rate of dissolution of the different carbides into the austenite form of the iron determines the high-temperature performance of the steel (slower is better, making for a heat-resistant steel). Proper heat treatment of these steels is important for adequate performance. The manganese content is often kept low to minimize the possibility of cracking during water quenching. There are six groups of tool steels: water-hardening, cold-work, shock-resistant, high-speed, hot-work, and special purpose. The choice of group depends on cost, working temperature, required surface hardness, strength, shock resistance, and toughness requirements. The more severe the service condition (higher temperature, abrasiveness, corrosiveness, loading), the higher the alloy content and consequent amount of carbides required for the tool steel. Tool steels are used for cutting, pressing, extruding, and coining of metals and other materials. Their use in tooling is essential; injection molds, for example, require tool steels for their resistance to abrasion, an important criterion for mold durability that enables hundreds of thousands of molding operations over a mold's lifetime. The AISI-SAE grades of tool steel are the most common scale used to identify various grades of tool steel. Individual alloys within a grade are given a number; for example: A2, O1, etc. Water-hardening group W-group tool steel gets its name from its defining property of having to be water quenched. W-grade steel is essentially high-carbon plain-carbon steel. This group is the most commonly used tool steel because of its low cost compared to the others. W-group steels work well for parts and applications where high temperatures are not encountered; above about 150 °C the steel begins to soften to a noticeable degree. Their hardenability is low, so W-group tool steels must be subjected to a rapid quench, requiring the use of water. These steels can attain high hardness (above 66 Rockwell C) and are rather brittle compared to other tool steels. W-steels are still sold, especially for springs, but are much less widely used than they were in the 19th and early 20th centuries. This is partly because W-steels warp and crack much more during quenching than oil-quenched or air-hardening steels. The toughness of W-group tool steels is increased by alloying with manganese, silicon and molybdenum. Up to 0.20% of vanadium is used to retain fine grain sizes during heat treating. Typical applications of W-steels at various carbon compositions are: 0.60–0.75% carbon: machine parts, chisels, setscrews; properties include medium hardness with good toughness and shock resistance. 0.76–0.90% carbon: forging dies, hammers, and sledges.
0.91–1.10% carbon: general purpose tooling applications that require a good balance of wear resistance and toughness, such as rasps, drills, cutters, and shear blades. 1.11–1.30% carbon: files, small drills, lathe tools, razor blades, and other light-duty applications where more wear resistance is required without great toughness. Steel of about 0.8% C gets as hard as steel with more carbon, but the free iron carbide particles in 1% or 1.25% carbon steel make it hold an edge better. However, the fine edge probably rusts off faster than it wears off if it is used to cut acidic or salty materials. Cold-work group The cold-work tool steels include the O series (oil-hardening), the A series (air-hardening), and the D series (high carbon-chromium). These are steels used to cut or form materials at low temperatures. This group possesses high hardenability and wear resistance, together with average toughness and heat-softening resistance. They are used in the production of larger parts or parts that require minimal distortion during hardening. The use of oil quenching and air-hardening helps reduce distortion, avoiding the higher stresses caused by quicker water quenching. More alloying elements are used in these steels than in the water-hardening class. These alloys increase the steels' hardenability and thus require a less severe quenching process; as a result, the steels are less likely to crack. They have high surface hardness and are often used to make knife blades. The machinability of the oil-hardening grades is high, but for the high carbon-chromium types it is low. Oil-hardening: the O series This series includes an O1 type, an O2 type, an O6 type and an O7 type. All steels in this group are hardened by heating, oil quenching, and then tempering at relatively low temperatures. Air-hardening: the A series The first air-hardening-grade tool steel was Mushet steel, which was known as air-hardening steel at the time. Modern air-hardening steels are characterized by low distortion during heat treatment because of their high chromium content. Their machinability is good and they have a balance of wear resistance and toughness (i.e. between the D and shock-resistant grades). High carbon-chromium: the D series The D series of the cold-work class of tool steels, which originally included types D2, D3, D6, and D7, contains between 10% and 13% chromium (which is unusually high). These steels retain their hardness up to moderately elevated temperatures (around 425 °C). Common applications for these tool steels include forging dies, die-casting die blocks, and drawing dies. Due to their high chromium content, certain D-type tool steels are often considered stainless or semi-stainless; however, their corrosion resistance is very limited due to the precipitation of the majority of their chromium and carbon constituents as carbides. Shock-resisting group The high shock resistance and good hardenability of this group are provided by chromium-tungsten, silicon-molybdenum, or silicon-manganese alloying. Shock-resisting group tool steels (S) are designed to resist shock at both low and high temperatures. A low carbon content is required for the necessary toughness (approximately 0.5% carbon). Carbide-forming alloys provide the necessary abrasion resistance, hardenability, and hot-work characteristics. This family of steels displays very high impact toughness and relatively low abrasion resistance, and can attain a relatively high hardness of 58 to 60 HRC. In the US, toughness usually derives from 1 to 2% silicon and 0.5–1% molybdenum content.
In Europe, shock steels often combine a low carbon content with around 3% nickel. A range of 1.75% to 2.75% nickel is still used in some shock-resisting and high-strength low-alloy steels (HSLA), such as L6, 4340, and Swedish saw steel, but it is relatively expensive. An example of its use is in the production of jackhammer bits. High-speed group High-speed steels (the tungsten-based T series and the molybdenum-based M series) are used for cutting tools that must retain a hard cutting edge at the high temperatures generated by high-speed machining. Hot-working group Hot-working steels are a group of steels used to cut or shape material at high temperatures. H-group tool steels were developed for strength and hardness during prolonged exposure to elevated temperatures. These tool steels have low carbon and moderate to high alloy content, which provides good hot hardness and toughness and fair wear resistance due to a substantial amount of carbide. H1 to H19 are based on a chromium content of 5%; H20 to H39 are based on a tungsten content of 9–18% and a chromium content of 3–4%; H40 to H59 are molybdenum based. Examples include DIN 1.2344 tool steel (H13). Special-purpose group P-type tool steel is short for plastic mold steels. These are designed to meet the requirements of zinc die casting and plastic injection molding dies. L-type tool steel is short for low-alloy special purpose tool steel. L6 is extremely tough. F-type tool steel is water hardened and substantially more wear resistant than W-type tool steel.
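The lettered grade families described above can be summarized as a small lookup table; a sketch whose one-line descriptions paraphrase this article:

    # AISI tool-steel letter groups as described in this article (paraphrased).
    TOOL_STEEL_GROUPS = {
        "W": "water-hardening, high-carbon plain steel",
        "O": "cold-work, oil-hardening",
        "A": "cold-work, air-hardening",
        "D": "cold-work, high carbon-chromium",
        "S": "shock-resisting",
        "T": "high-speed, tungsten-based",
        "M": "high-speed, molybdenum-based",
        "H": "hot-work",
        "P": "plastic mold steels",
        "L": "low-alloy special purpose",
        "F": "water-hardened, high wear resistance",
    }

    print(TOOL_STEEL_GROUPS["D"])  # -> cold-work, high carbon-chromium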
Physical sciences
Iron alloys
Chemistry
633964
https://en.wikipedia.org/wiki/Umbrella%20octopus
Umbrella octopus
Umbrella octopuses (family Opisthoteuthidae) are a group of pelagic octopuses. Umbrella octopuses are characterized by a web of skin between the arms, causing them to somewhat resemble an opened umbrella when the arms are spread. Description Opisthoteuthidae are a group of octopuses characterized by a web of skin in between their arms. They have a broad U-shaped shell that supports muscles for a pair of small fins on the mantle; these fins are far less developed than in other families of Cirrina and essentially act only as stabilizers when the animal swims (using a medusoid motion of the arms and webbing). This structure makes the umbrella octopus resemble an umbrella when it spreads its arms and web. In this posture the oral surface lies below the mantle, and the web and arms surround the bottom of the mantle. Their outer skin has a very delicate consistency; white spots appear on the skin when it is damaged. Although Opisthoteuthidae are categorized as cirrates, unlike the other cirrates they do not have an intermediate web; rather, they use the web in between their arms to mimic the intermediate web that other cirrates have. Lacking an intermediate web is what causes the indentations in the outer edge of their arms that make them look like an umbrella. Behavior Defense mechanisms Opisthoteuthidae lack an intermediate web, but they mimic the defensive mechanism of ballooning by extending the web between their arms as much as possible and curving the outer edges of their arms inwards so that the edges touch the ground. They also extend their fins parallel to the floor to help keep their balance, or they curve them around the mantle. Opisthoteuthidae have been observed to hold this position for five and a half minutes. Another defensive mechanism that Opisthoteuthidae have been observed using is web inversion, in which the arms are turned upwards so that the web's oral surface faces outwards. The oral surface can face the floor, or the octopus may lie laterally so that its side is in contact with the floor. These defensive positions have also been observed during feeding, but it is possible that this is because of the stress of being captured and placed in an aquarium for observation. Resting behavior When resting at the floor, the octopus's behavior falls into one of two tactics: bottom-resting or flat-spreading. In bottom-resting, the octopus rests near the floor, erecting its mantle and curving the outer edges of its arms inwards so that they are the only parts making contact with the floor. The fins are extended out parallel to the bottom to maintain balance. When flat-spreading, the octopus spreads its arms and web parallel to the bottom, keeping the edges of its arms curved inwards. Its head points backwards at a small angle and its fins are used for stabilization. Dispersion Opisthoteuthidae are deep-sea creatures that have been found in the Clarion-Clipperton Fracture Zone in the Pacific Ocean at a depth of about 4,800 m. They have also been found in the South China Sea. They stay within 3,000–4,000 meters below sea level and try to stay hovering over the ocean floor. Taxonomy Family Opisthoteuthidae has classically contained a single genus, Opisthoteuthis, which has recently been split into three genera on the basis of differences in enlarged suckers on male specimens.
Genera Grimpoteuthis, Luteuthis, and Cryptoteuthis are now included in the family Grimpoteuthidae. Genus Opisthoteuthis Verrill, 1883 Opisthoteuthis agassizii Verrill, 1883 Opisthoteuthis albatrossi (Sasaki, 1920) Opisthoteuthis borealis Collins, 2005 Opisthoteuthis bruuni (Voss, 1982) Opisthoteuthis californiana Berry, 1949 Opisthoteuthis calypso Villanueva, Collins, Sánchez & Voss, 2002 [Possibly attributable to genus Insigniteuthis] Opisthoteuthis chathamensis O'Shea, 1999 Opisthoteuthis dongshaensis C. C. Lu, 2010 [Possibly attributable to genus Insigniteuthis] Opisthoteuthis extensa Thiele, 1915 Opisthoteuthis grimaldii (Joubin, 1903) Opisthoteuthis hardyi Villanueva, Collins, Sánchez & Voss, 2002 Opisthoteuthis kerberos Verhoeff, 2024 Opisthoteuthis massyae (Grimpe, 1920) (Synonym Opisthoteuthis vossi Sánchez & Guerra, 1989) Opisthoteuthis medusoides Thiele, 1915 Opisthoteuthis mero O'Shea, 1999 Opisthoteuthis philipii Oommen, 1976 Opisthoteuthis pluto Berry, 1918 Opisthoteuthis robsoni O'Shea, 1999 Genus Exsuperoteuthis Verhoeff, 2024 Exsuperoteuthis depressa Ijima & Ikeda, 1895 (Synonym Opisthoteuthis japonica Taki, 1962) Exsuperoteuthis persephone Berry, 1918 Genus Insigniteuthis Verhoeff, 2024 Insigniteuthis obscura Verhoeff, 2024
Biology and health sciences
Cephalopods
Animals
634032
https://en.wikipedia.org/wiki/Argonaut%20%28animal%29
Argonaut (animal)
The argonauts (genus Argonauta, the only extant genus in the family Argonautidae) are a group of pelagic octopuses. They are also called paper nautili, referring to the paper-thin eggcase that females secrete; however, as octopuses, they are only distant relatives of true nautili. The eggcase lacks the gas-filled chambers present in chambered nautilus shells and is not a true cephalopod shell, but rather an evolutionary innovation unique to the genus. It is used as a brood chamber and to trap surface air to maintain buoyancy. It was once speculated that argonauts did not manufacture their eggcases but utilized shells abandoned by other organisms, in the manner of hermit crabs. Experiments by pioneering marine biologist Jeanne Villepreux-Power in the early 19th century disproved this hypothesis, as Villepreux-Power successfully reared argonaut young and observed the development of their shells. Argonauts are found in tropical and subtropical waters worldwide. They live in the open ocean, i.e. they are pelagic. Like most octopuses, they have a rounded body, eight limbs (arms) and no fins. However, unlike most octopuses, argonauts live close to the surface rather than on the seabed. Argonauta species are characterised by very large eyes and small webs between the arms. The funnel–mantle locking apparatus is a major diagnostic feature of this taxon. It consists of knob-like cartilages in the mantle and corresponding depressions in the funnel. Unlike the closely allied genera Ocythoe and Tremoctopus, Argonauta species lack water pores. Of its names, "argonaut" means "sailor of the Argo". "Paper nautilus" is derived from the Greek ναυτίλος nautílos, which literally means "sailor", as paper nautili were thought to use two of their arms as sails. This is not the case, as argonauts swim by expelling water through their funnels. The chambered nautilus was later named after the argonaut, but belongs to a different cephalopod order, Nautilida. Description Sexual dimorphism and reproduction Argonauts exhibit extreme sexual dimorphism in size and lifespan. Females grow up to 10 cm and make shells up to 30 cm, while males rarely surpass 2 cm. The males mate only once in their short lifetime, whereas the females are iteroparous, capable of having offspring many times over the course of their lives. In addition, the females have been known since ancient times, while the males were described only in the late 19th century. The males lack the dorsal tentacles used by the females to create their eggcases. The males use a modified arm, the hectocotylus, to transfer sperm to the female. For fertilization, the arm is inserted into the female's pallial cavity and then becomes detached from the male. The hectocotylus, when found in females, was originally described as a parasitic worm. Eggcase Female argonauts produce a laterally compressed calcareous eggcase in which they reside. This "shell" has a double keel fringed by two rows of alternating tubercles. The sides are ribbed, with the centre either flat or having winged protrusions. The eggcase curiously resembles the shells of extinct ammonites. It is secreted by the tips of the female's two greatly expanded dorsal tentacles (third left arms) before egg laying. After she deposits her eggs in the floating eggcase, the female takes shelter in it, often retaining the male's detached hectocotylus. She is usually found with her head and tentacles protruding from the opening, but she retreats deeper inside if disturbed.
These ornate curved white eggcases are occasionally found floating on the sea, sometimes with the female argonaut clinging to them. The eggcase is not made of aragonite as most other shells are, but of calcite, with a three-layered structure and a higher proportion of magnesium carbonate (7%) than other cephalopod shells. The eggcase contains a bubble of air that the animal captures at the surface of the water and uses for buoyancy, similarly to other shelled cephalopods, although it does not have a chambered phragmocone. Once thought to contribute to occasional mass strandings on beaches, the air bubble is under sophisticated control, evident from the behaviour of animals from which air has been removed under experimental diving conditions. This system of attaining neutral buoyancy is effective only at the relatively shallow depths of the upper 10 meters of the water column. Young females with mantle lengths less than 9 millimeters are shell-less like the males, with both having been found in waters between 50 and 200 meters. Most other octopuses lay eggs in caves; Neale Monks and C. Phil Palmer speculate that, before ammonites died out during the Cretaceous–Paleogene extinction event, the argonauts may have evolved to use discarded ammonite shells for their egg laying, eventually becoming able to mend the shells and perhaps make their own. However, this is uncertain, and it is unknown whether this is the result of convergent evolution. Argonauta argo is the largest species in the genus and also produces the largest eggcase, which may reach a length of 300 mm. The smallest species is Argonauta boettgeri, with a maximum recorded size of 67 mm. Beak The beaks of Argonauta species are distinctive, being characterised by a very small rostrum and a fold that runs to the lower edge or near the free corner. The rostrum is "pinched in" at the sides, making it much narrower than in other octopuses, with the exception of the closely allied monotypic genera Ocythoe and Vitreledonella. The jaw angle is curved and indistinct. Beaks have a sharp shoulder, which may or may not have posterior and anterior parts at different slopes. The hood lacks a notch and is very broad, flat, and low. The hood-to-crest ratio (f/g) is approximately 2–2.4. The lateral wall of the beak has no notch near the wide crest. Argonaut beaks are most similar to those of Ocythoe tuberculata and Vitreledonella richardi, but differ in "leaning back" to a greater degree than the former and having a more curved jaw angle than the latter. Feeding and defense Feeding mostly occurs during the day. Argonauts use their tentacles to grab prey and drag it toward the mouth. The argonaut then bites the prey to inject it with venom from the salivary gland. They feed on small crustaceans, molluscs, jellyfish and salps. If the prey is shelled, the argonaut uses its radula to drill into the organism, then injects the toxin. Argonauts are capable of altering their color. They can blend in with their surroundings to avoid predators. They also produce ink, which is ejected when the animal is being attacked. This ink paralyzes the olfaction of the attacker, providing time for the argonaut to escape. The female is also able to pull back the web covering of her shell, making a silvery flash, which may deter a predator from attacking. Argonauts are preyed upon by tunas, billfishes, and dolphins. Shells and remains of argonauts have been recorded from the stomachs of Alepisaurus ferox and Coryphaena hippurus.
Male argonauts have been observed residing inside aggregate salps (Pegea socia), although little is known about this relationship. Classification The genus Argonauta contains up to seven extant species. Several extinct species are also known. Four extant species are widely considered valid: Argonauta argo Linnaeus, 1758 Argonauta hians Lightfoot, 1786 Argonauta nodosus Lightfoot, 1786 Argonauta nouryi Lorois, 1852 Several additional taxa are either treated as valid species or regarded as nomina dubia: Argonauta boettgeri Maltzan, 1881 Argonauta cornutus Conrad, 1854 Argonauta pacificus Dall, 1871 A number of extinct species have also been described: †Argonauta absyrtus Martill & Barker, 2006 †Argonauta biarmata Ponzi, 1876 †Argonauta itoigawai Tomida, 1983 †Argonauta joanneus Hilber, 1915 †Argonauta oweri Fleming, 1945 †Argonauta sismondai Bellardi, 1872 †Argonauta tokunagai Yokoyama, 1913 The extinct species Obinautilus awaensis was originally assigned to Argonauta, but has since been transferred to the genus Obinautilus. Dubious or uncertain taxa Several further taxa associated with the family Argonautidae are of uncertain taxonomic status. In design The argonaut was the inspiration for a number of classical and modern art and decorative forms, including use on pottery and architectural elements. Some early examples are found in Bronze Age Minoan art from Crete. A variation known as the double argonaut design was also found in Minoan jewelry. This design was also transposed and adapted in both gold and glass in contemporary Mycenaean contexts, as seen both at Mycenae and the Tholos at Volo. In literature and etymology Argonauts are featured in Twenty Thousand Leagues Under the Seas, noted for their supposed ability to use their tentacles as sails, though this is a widespread myth. A female argonaut is also described in Marianne Moore's poem "The Paper Nautilus". "Argonauta" is the name of a chapter in Anne Morrow Lindbergh's Gift from the Sea. Paper nautiluses were caught in The Swiss Family Robinson novel. Argonauts gave their name to an Arabidopsis thaliana mutation and by extension to Argonaute proteins.
Biology and health sciences
Cephalopods
Animals
634213
https://en.wikipedia.org/wiki/Celiac%20plexus
Celiac plexus
The celiac plexus, also known as the solar plexus because of its radiating nerve fibers, is a complex network of nerves located in the abdomen, near where the celiac trunk, superior mesenteric artery, and renal arteries branch from the abdominal aorta. It is behind the stomach and the omental bursa, and in front of the crura of the diaphragm, at the level of the first lumbar vertebra. The plexus is formed in part by the greater and lesser splanchnic nerves of both sides, and by fibers from the anterior and posterior vagal trunks. The celiac plexus proper consists of the celiac ganglia with a network of interconnecting fibers. The aorticorenal ganglia are often considered to be part of the celiac ganglia, and thus part of the plexus. Structure The celiac plexus includes a number of smaller plexuses, and several other plexuses are derived from it. Terminology The celiac plexus is often popularly referred to as the solar plexus. In the context of sparring or injury, a strike to the region of the stomach around the celiac plexus is commonly called a blow "to the solar plexus". In this case it is not the celiac plexus itself being referred to, but rather the region around it. A blow to this region may cause the diaphragm to spasm, resulting in difficulty in breathing, a sensation commonly known as "getting the wind knocked out of you". It may also affect the celiac plexus itself, which can cause great pain and interfere with the functioning of the viscera. Clinical significance A blunt injury to the celiac plexus normally resolves with rest and deep breathing. A celiac plexus block by means of fluoroscopically guided injection is sometimes used to treat intractable pain from cancers such as pancreatic cancer. Such a block may be performed by pain management specialists and radiologists, with CT scans for guidance. Intractable pain related to chronic pancreatitis may be an indication for celiac plexus ablation.
Biology and health sciences
Human anatomy
Health
634526
https://en.wikipedia.org/wiki/Fodder
Fodder
Fodder, also called provender, is any agricultural foodstuff used specifically to feed domesticated livestock, such as cattle, rabbits, sheep, horses, chickens and pigs. "Fodder" refers particularly to food given to the animals (including plants cut and carried to them), rather than that which they forage for themselves (called forage). Fodder includes hay, straw, silage, compressed and pelleted feeds, oils and mixed rations, and sprouted grains and legumes (such as bean sprouts, fresh malt, or spent malt). Most animal feed is from plants, but some manufacturers add ingredients to processed feeds that are of animal origin. The worldwide animal feed trade produced 1.245 billion tons of compound feed in 2022, according to an estimate by the International Feed Industry Federation, with an annual growth rate of about 2%. The use of agricultural land to grow feed rather than human food can be controversial (see food vs. feed); some types of feed, such as corn (maize), can also serve as human food, while those that cannot, such as grassland grass, may be grown on land that can be used for crops consumed by humans. In many cases the production of grass for cattle fodder is a valuable intercrop between crops for human consumption, because it builds the organic matter in the soil. When evaluating whether this increase in soil organic matter mitigates climate change, both the permanence of the added organic matter and the emissions produced during use of the fodder product have to be taken into account. Some agricultural byproducts fed to animals may be considered unsavory by humans. Common plants specifically grown for fodder Alfalfa (lucerne) Barley Common duckweed Birdsfoot trefoil Brassica spp. Kale Rapeseed (canola) Rutabaga (swede) Turnip Clover Alsike clover Red clover Subterranean clover White clover Grass Bermuda grass Brome False oat grass Fescue Heath grass Meadow grasses (from naturally mixed grassland swards) Orchard grass Ryegrass Timothy-grass Corn (maize) Millet Oats Sorghum Soybeans Trees (pollard tree shoots for "tree-hay") Wheat Types Biochar for cattle Bran Conserved forage plants: hay and silage Compound feed and premixes, often called pellets, nuts or (cattle) cake Crop residues: stover, copra, straw, chaff, sugar beet waste Fish meal Freshly cut grass and other forage plants Grass or lawn clipping waste Green maize Green sorghum Horse gram Leaves from certain species of trees Meat and bone meal (now illegal in cattle and sheep feeds in many areas due to risk of BSE) Molasses Native green grass Oilseed press cake (cottonseed, safflower, sunflower, soybean, peanut or groundnut) Oligosaccharides Processed insects (e.g. processed maggots) Seaweed (including Asparagopsis taxiformis, which is used mainly as a supplement to reduce methane emissions by up to 90%) Seeds and grains, either whole or prepared by crushing, milling, etc. Single-cell protein (can also be made from atmospheric CO2) Sprouted grains and legumes Yeast extract (brewer's yeast residue) Health concerns In the past, bovine spongiform encephalopathy (BSE, or "mad cow disease") spread through the inclusion of ruminant meat and bone meal in cattle feed due to prion contamination. This practice is now banned in most countries where it has occurred. Some animals have a lower tolerance for spoiled or moldy fodder than others, and certain types of molds, toxins, or poisonous weeds inadvertently mixed into a feed source may cause economic losses due to sickness or death of the animals.
The US Department of Health and Human Services regulates drugs of the Veterinary Feed Directive type that can be present within commercial livestock feed. Droughts Increasing intensities and frequencies of drought events put rangeland agriculture under pressure in semi-arid and arid geographic areas. Innovative emergency fodder production concepts have been reported, such as bush-based animal fodder production in Namibia. During extended dry periods, some farmers have used woody biomass fibre from encroacher bush as their primary source of cattle feed, adding locally available supplements for nutrients as well as to improve palatability. Production of sprouted grains as fodder Fodder in the form of sprouted cereal grains, such as barley, and legumes can be grown in small and large quantities. Systems have been developed recently that allow many tons of sprouts to be produced each day, year-round. Sprouting can significantly increase the nutritional value of the grain compared with feeding the ungerminated grain to stock. Sprouts also use less water than traditional forage, making them ideal for drought conditions. Sprouted barley and other cereal grains can be grown hydroponically in a carefully controlled environment. Hydroponically grown sprouted fodder that has developed a root mat is at its peak for animal feed. Although products such as barley are grain, when sprouted they are approved by the American Grassfed Association to be used as livestock feed.
Technology
Animal husbandry
null
2286239
https://en.wikipedia.org/wiki/Lithium%20nitride
Lithium nitride
Lithium nitride is an inorganic compound with the chemical formula Li3N. It is the only stable alkali metal nitride. It is a reddish-pink solid with a high melting point. Preparation and handling Lithium nitride is prepared by direct reaction of elemental lithium with nitrogen gas: 6 Li + N2 → 2 Li3N. Instead of burning lithium metal in an atmosphere of nitrogen, a solution of lithium in liquid sodium metal can be treated with N2. Lithium nitride must be protected from moisture, as it reacts violently with water to produce ammonia: Li3N + 3 H2O → 3 LiOH + NH3. Structure and properties alpha-Li3N (stable at room temperature and pressure) has an unusual crystal structure that consists of two types of layers: one layer, with composition Li2N, contains 6-coordinate N centers, while the other layer consists only of lithium cations. Two other forms are known: beta-Li3N, formed from the alpha phase at 0.42 GPa, has the sodium arsenide (Na3As) structure; gamma-Li3N (same structure as lithium bismuthide, Li3Bi) forms from the beta form at 35 to 45 GPa. Lithium nitride shows ionic conductivity for Li+, with a value of c. 2×10−4 Ω−1cm−1, and an (intracrystal) activation energy of c. 0.26 eV (c. 24 kJ/mol). Hydrogen doping increases conductivity, whilst doping with metal ions (Al, Cu, Mg) reduces it. The activation energy for lithium transfer across lithium nitride crystals (intercrystalline) has been determined to be higher, at c. 68.5 kJ/mol. The alpha form is a semiconductor with a band gap of c. 2.1 eV. Reactions Reacting lithium nitride with carbon dioxide gives amorphous carbon nitride (C3N4), a semiconductor, and lithium cyanamide (Li2CN2), a precursor to fertilizers, in an exothermic reaction. Under hydrogen at around 200 °C, Li3N reacts to form lithium amide. At higher temperatures it reacts further to form ammonia and lithium hydride. Lithium imide can also be formed under certain conditions. Some research has explored this as a possible industrial process to produce ammonia, since lithium hydride can be thermally decomposed back to lithium metal. Lithium nitride has been investigated as a storage medium for hydrogen gas, as the reaction is reversible at 270 °C. Up to 11.5% by weight absorption of hydrogen has been achieved.
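A sketch of the hydrogen-uptake chemistry behind this storage application, written as the stepwise scheme commonly proposed in the hydrogen-storage literature (reaction conditions omitted; the overall line is simply the sum of the two steps):

    \mathrm{Li_3N + H_2 \longrightarrow Li_2NH + LiH}
    \mathrm{Li_2NH + H_2 \longrightarrow LiNH_2 + LiH}
    \text{overall:}\quad \mathrm{Li_3N + 2\,H_2 \rightleftharpoons LiNH_2 + 2\,LiH}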
Physical sciences
Nitride salts
Chemistry
2286395
https://en.wikipedia.org/wiki/Superbase
Superbase
A superbase is a compound that has a particularly high affinity for protons. Superbases are of theoretical interest and potentially valuable in organic synthesis. Superbases have been described and used since the 1850s. Definitions Generically, IUPAC defines a superbase as a "compound having a very high basicity, such as lithium diisopropylamide." Superbases are often defined in two broad categories, organic and organometallic. Organic superbases are charge-neutral compounds with basicities greater than that of proton sponge (pKBH+ = 18.6 in MeCN). In a related definition, a superbase is any species with a higher absolute proton affinity (APA = 245.3 kcal/mol) and intrinsic gas-phase basicity (GB = 239 kcal/mol) than proton sponge. Common superbases of this variety feature amidine, guanidine, and phosphazene functional groups. Strong superbases can be designed by utilizing various approaches to stabilize the conjugate acid, up to the theoretical limits of basicity. Organometallic superbases, sometimes called Lochmann–Schlosser superbases, result from the combination of alkali metal alkoxides and organolithium reagents. Caubère defines superbases as "bases resulting from a mixing of two (or more) bases leading to new basic species possessing inherent new properties. The term superbase does not mean a base is thermodynamically and/or kinetically stronger than another; instead, it means that a basic reagent is created by combining the characteristics of several different bases." Organic superbases Organic superbases are mostly charge-neutral, nitrogen-containing species in which nitrogen acts as the proton acceptor. These include the phosphazenes, phosphanes, amidines, and guanidines. Other organic compounds that meet the physicochemical or structural definitions of "superbase" include proton chelators like the aromatic proton sponges and the bispidines. Multicyclic polyamines, like DABCO, might also be loosely included in this category. Phosphanes and carbodiphosphoranes are also strong organosuperbases. Despite their enormous proton affinities, many organosuperbases exhibit low nucleophilicity. Superbases are used in organocatalysis. Organometallic Organometallic compounds of electropositive metals are superbases, but they are generally strong nucleophiles. Examples include organolithium and organomagnesium (Grignard reagent) compounds. Another type of organometallic superbase has a reactive metal exchanged for a hydrogen on a heteroatom, such as oxygen (unstabilized alkoxides) or nitrogen (metal amides such as lithium diisopropylamide). The Schlosser base (or Lochmann–Schlosser base), the combination of n-butyllithium and potassium tert-butoxide, is commonly cited as a superbase. n-Butyllithium and potassium tert-butoxide form a mixed aggregate of greater reactivity than either component reagent. Inorganic Inorganic superbases are typically salt-like compounds with small, highly charged anions, e.g. lithium hydride, potassium hydride, and sodium hydride. Such species are insoluble, but the surfaces of these materials are highly reactive, and slurries are useful in synthesis. Caesium oxide is probably the strongest base according to quantum-chemical calculations.
Physical sciences
Concepts
Chemistry
2286508
https://en.wikipedia.org/wiki/Salt%20bridge
Salt bridge
In electrochemistry, a salt bridge or ion bridge is a laboratory device that has been in use for over 100 years. It contains an electrolyte solution, typically an inert one, used to connect the oxidation and reduction half-cells of a galvanic cell (voltaic cell), a type of electrochemical cell. In short, it functions as a link connecting the anode and cathode half-cells within an electrochemical cell. It also maintains electrical neutrality within the internal circuit and stabilizes the junction potential between the solutions in the half-cells. Additionally, it serves to minimize cross-contamination between the two half-cells. A salt bridge typically consists of tubes filled with an electrolyte solution. These tubes often have diaphragms such as glass frits at their ends to help contain the solution within the tubes and prevent excessive mixing with the surrounding environment. When setting up a salt bridge between half-cells that use different solvents, it is crucial to ensure that the electrolyte used in the bridge is soluble in both solutions and does not interact with any species present in either solution. There are several types of salt bridges: glass tube bridges (the traditional KCl-type salt bridge and the ionic liquid salt bridge), filter paper bridges, porous frit salt bridges, fumed-silica bridges, and agar gel salt bridges. The following sections explore in greater detail the characteristics and applications of glass tube bridges, filter paper bridges, and charcoal salt bridges. Glass tube bridges (KCl-type and ionic liquid salt bridge) Glass tube salt bridges commonly consist of U-shaped Vycor tubes filled with a relatively inert electrolyte. The electrolyte solution usually comprises a combination of cations, such as ammonium and potassium, and anions, such as chloride and nitrate, which have similar mobility. A combination is chosen that does not react with any of the chemicals used in the cell. KCl-type salt bridges Concentrated aqueous potassium chloride (KCl) solution has traditionally been used for decades to neutralize the liquid-junction potential. When comparing other salt solutions, such as potassium bromide and potassium iodide, to potassium chloride, potassium chloride is the most efficient in nullifying the junction potential. Yet the effectiveness of this salt bridge decreases as the ionic strength of the sample solution increases. Ionic liquid salt bridges Due to the drawbacks of KCl-type salt bridges, ionic liquid salt bridges (ILSBs) have been used to address the potentiometry issues arising from KCl-type salt bridges in electrochemical cells. ILSBs perform efficiently in aqueous solutions of hydrophilic electrolytes. This is because ionic liquids do not mix with water (they are immiscible), rendering them suitable as salt bridges for aqueous solutions. Additionally, they are chemically inert and highly stable in water. To set up a glass tube salt bridge, a U-shaped Vycor tube is fashioned to contain a suitable electrolyte solution. Normally, glass frits (a porous material) cover the ends of the tube, or the electrolyte is gelified with agar-agar, to help prevent the intermixing of fluids that might otherwise occur. The conductivity of a glass tube bridge primarily depends on the concentration of the electrolyte solution. At concentrations below saturation, an increase in concentration enhances conductivity.
However, electrolyte content beyond saturation and a narrow tube diameter may both reduce conductivity. Filter paper bridges Porous paper such as filter paper may be used as a salt bridge if soaked in an appropriate electrolyte, such as those used in glass tube bridges. No gelling agent is required, as the filter paper provides a solid medium for conduction. The conductivity of this kind of salt bridge depends on several factors: the concentration of the electrolyte solution, the texture of the paper, and the absorbency of the paper. Generally, smoother texture and higher absorbency give higher conductivity. To set up this type of salt bridge, laboratory filter paper is rolled, typically into a cylindrical shape, to form a connector between the two half-cells, and the rolled paper is then soaked in an appropriate inert salt solution. A straw can be used to shape the rolled filter paper into a U-shaped tube, giving mechanical strength to the soaked paper, which can then act as a salt bridge connecting the two half-cells. While filter paper salt bridges are inexpensive and easily accessible, one disadvantage of not using a straw for mechanical support is that a new rolled and soaked filter paper must be prepared for each experiment; filter paper also has limited longevity. Charcoal salt bridges A recent development is the charcoal salt bridge, considered an excellent option for a porous junction for the reference electrode in an alkaline solution. A porous junction serves as a salt bridge between the two half-cells of reference and electrolyte solutions. Other materials used for porous junctions, such as glass, Teflon, and agar gel, have their own benefits but also significant drawbacks, such as high cost and high risk of contamination. The advantages of using charcoal as a frit therefore include its low cost and easy accessibility, as charcoal can be sourced from porous carbon materials. Despite being fragile, charcoal facilitates efficient ion transfer due to its highly porous structure.
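The geometry and concentration effects described above can be made concrete with a rough calculation: the ionic resistance of a uniform electrolyte column follows R = L / (κA). The sketch below uses assumed, order-of-magnitude values (the conductivity figure for concentrated aqueous KCl, the tube length, and the diameter are illustrative, not taken from this article).

```python
# Minimal sketch with assumed illustrative values: the ionic resistance of a
# uniform salt-bridge column is R = L / (kappa * A), so conductance rises with
# electrolyte conductivity (i.e. concentration, below saturation) and falls as
# the tube narrows, consistent with the behaviour described above.
import math

kappa = 20.0        # S/m, order of magnitude for concentrated aqueous KCl (assumed)
length = 0.10       # m, electrolyte path length through the U-tube (assumed)
diameter = 0.005    # m, inner tube diameter (assumed)

area = math.pi * (diameter / 2.0) ** 2   # cross-sectional area, m^2
resistance = length / (kappa * area)     # ohms
print(f"Estimated bridge resistance: {resistance:.0f} ohm")  # ~250 ohm
```

Halving the assumed diameter quadruples the resistance, which is why narrow tubes conduct poorly even when filled with a concentrated electrolyte.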
Physical sciences
Electrochemistry
Chemistry
2288189
https://en.wikipedia.org/wiki/Workplace%20Hazardous%20Materials%20Information%20System
Workplace Hazardous Materials Information System
The Workplace Hazardous Materials Information System (WHMIS; SIMDUT in French) is Canada's national workplace hazard communication standard. The key elements of the system, which came into effect on October 31, 1988, are cautionary labelling of containers of WHMIS controlled products, the provision of material safety data sheets (MSDSs), and worker education and site-specific training programs. WHMIS is an example of synchronization and cooperation among Canada's federal, provincial and territorial governments. The coordinated approach avoided duplication, inefficiency through loss of economies of scale, and the interprovincial trade barriers that would have been created had each province and territory established its own hazard communication system. Legislative framework The federal Hazardous Products Act and associated Controlled Products Regulations, administered by the Workplace Hazardous Materials Bureau within the federal Department of Health Canada, established the national standard for chemical classification and hazard communication in Canada and are the foundation for the workers' "right-to-know" legislation enacted in each of Canada's provinces and territories. Under the Constitution of Canada, labour legislation falls primarily under the jurisdiction of Canada's provinces and territories. The Labour Program, of the federal government Department of Human Resources and Skills Development Canada, is the occupational health and safety (OHS) regulatory authority for the approximately 10% of workplaces designated to be under federal jurisdiction. Each of the thirteen federal, provincial and territorial (FPT) agencies responsible for OHS has accordingly established employer WHMIS requirements within its respective jurisdiction. These requirements place an onus on employers to ensure that controlled products used, stored or handled in the workplace are properly labelled, that material safety data sheets are made available to workers, and that workers receive education and site-specific training to ensure the safe storage, handling and use of controlled products in the workplace. The development of WHMIS 1988 was a collaborative effort, involving industry, organized labour and governments as part of an advisory body that developed the system. The system divides responsibility among those who supply, manage and use hazardous products, with suppliers, employers and workers each having a role in WHMIS. Suppliers distributing 'hazardous products' must ensure that containers are properly marked and that SDS documentation is accurate and provided to customers. Employers are responsible for training their workers on the safe use of hazardous products and the risks they pose, for providing safe storage and labelled containers, and for ensuring that SDSs are available to workers. Workers are expected to take part in WHMIS training, follow training and instructions on the safe use of hazardous materials, and report issues such as damaged or missing container labels. Even after the move from WHMIS 1988 to WHMIS 2015, this structure of shared responsibility was retained largely unchanged. WHMIS 2015 On February 11, 2015, the Government of Canada published in the Canada Gazette a modified version of the WHMIS system called WHMIS 2015. WHMIS 2015 was created "to incorporate the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) for workplace chemicals." A notable difference in the WHMIS adoption of GHS was the inclusion of a 'biohazard' hazard pictogram, retained from the original WHMIS 1988 pictograms. 
The standard GHS pictograms do not include a specific symbol for biohazards. Transition period WHMIS 2015 was phased into use over a three-year period from February 2015 to December 2018. By 1 December 2018, use of WHMIS 2015 was mandatory for manufacturers/importers, distributors and employers. Materials covered WHMIS 2015 regulations cover materials used in a workplace, referred to as 'hazardous products', that meet the criteria for hazard classification, such as toxicity or flash point; materials meeting these criteria are classified as 'hazardous products'. Certain materials were exempt from WHMIS and are in most cases subject to legislation specific to the material. WHMIS 1988 The original WHMIS went into effect 31 October 1988, as part of a series of legislation at the federal, provincial and territorial levels. WHMIS 1988 was phased out from 2015 to 2018, with the old system being completely retired on 1 December 2018. All substances to which WHMIS applied, termed 'controlled products', fell into one of six general WHMIS classes. The 1988 system included eight symbols, one per classification, except for Class D, which had three symbols, one for each division of the class.
Physical sciences
Basics: General
Chemistry
2288549
https://en.wikipedia.org/wiki/Momentum%20operator
Momentum operator
In quantum mechanics, the momentum operator is the operator associated with the linear momentum. The momentum operator is, in the position representation, an example of a differential operator. For the case of one particle in one spatial dimension, the definition is $\hat{p} = -i\hbar \frac{\partial}{\partial x}$, where $\hbar$ is the reduced Planck constant, $i$ the imaginary unit, $x$ is the spatial coordinate, and a partial derivative (denoted by $\partial/\partial x$) is used instead of a total derivative ($d/dx$) since the wave function is also a function of time. The "hat" indicates an operator. The "application" of the operator on a differentiable wave function $\psi$ is as follows: $\hat{p}\psi = -i\hbar \frac{\partial \psi}{\partial x}$. In a basis of Hilbert space consisting of momentum eigenstates expressed in the momentum representation, the action of the operator is simply multiplication by $p$, i.e. it is a multiplication operator, just as the position operator is a multiplication operator in the position representation. Note that the definition above is the canonical momentum, which is not gauge invariant and not a measurable physical quantity for charged particles in an electromagnetic field. In that case, the canonical momentum is not equal to the kinetic momentum. At the time quantum mechanics was developed in the 1920s, the momentum operator was found by many theoretical physicists, including Niels Bohr, Arnold Sommerfeld, Erwin Schrödinger, and Eugene Wigner. Its existence and form is sometimes taken as one of the foundational postulates of quantum mechanics. Origin from de Broglie plane waves The momentum and energy operators can be constructed in the following way. One dimension Starting in one dimension, using the plane wave solution to Schrödinger's equation of a single free particle, $\psi(x,t) = e^{i(px - Et)/\hbar}$, where $p$ is interpreted as momentum in the $x$-direction and $E$ is the particle energy. The first-order partial derivative with respect to space is $\frac{\partial \psi(x,t)}{\partial x} = \frac{ip}{\hbar} e^{i(px - Et)/\hbar} = \frac{ip}{\hbar}\psi$. This suggests the operator equivalence $\hat{p} = -i\hbar \frac{\partial}{\partial x}$, so the momentum of the particle and the value that is measured when a particle is in a plane wave state is the (generalized) eigenvalue of the above operator. Since the partial derivative is a linear operator, the momentum operator is also linear, and because any wave function can be expressed as a superposition of other states, when this momentum operator acts on the entire superimposed wave, it yields the momentum eigenvalues for each plane wave component. These new components then superimpose to form the new state, in general not a multiple of the old wave function. Three dimensions The derivation in three dimensions is the same, except the gradient operator del is used instead of one partial derivative. In three dimensions, the plane wave solution to Schrödinger's equation is $\psi = e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}$, and the gradient is $\nabla\psi = i\mathbf{k}\,\psi = \frac{i}{\hbar}\left(\mathbf{e}_x p_x + \mathbf{e}_y p_y + \mathbf{e}_z p_z\right)\psi$, where $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$ are the unit vectors for the three spatial dimensions, hence $\hat{\mathbf{p}} = -i\hbar\nabla$. This momentum operator is in position space because the partial derivatives were taken with respect to the spatial variables. Definition (position space) For a single particle with no electric charge and no spin, the momentum operator can be written in the position basis as $\hat{\mathbf{p}} = -i\hbar\nabla$, where $\nabla$ is the gradient operator, $\hbar$ is the reduced Planck constant, and $i$ is the imaginary unit. In one spatial dimension, this becomes $\hat{p} = \hat{p}_x = -i\hbar\frac{\partial}{\partial x}$. This is the expression for the canonical momentum. For a charged particle in an electromagnetic field, during a gauge transformation, the position space wave function undergoes a local U(1) group transformation, and will change its value. Therefore, the canonical momentum is not gauge invariant, and hence not a measurable physical quantity. 
The kinetic momentum, a gauge invariant physical quantity, can be expressed in terms of the canonical momentum, the scalar potential $\phi$ and vector potential $\mathbf{A}$: $\hat{\mathbf{P}} = \hat{\mathbf{p}} - q\mathbf{A}$. The expression above is called minimal coupling. For electrically neutral particles, the canonical momentum is equal to the kinetic momentum. Properties Hermiticity The momentum operator can be described as a symmetric (i.e. Hermitian), unbounded operator acting on a dense subspace of the quantum state space. If the operator acts on a (normalizable) quantum state then the operator is self-adjoint. In physics the term Hermitian often refers to both symmetric and self-adjoint operators. (In certain artificial situations, such as the quantum states on the semi-infinite interval $[0, \infty)$, there is no way to make the momentum operator Hermitian. This is closely related to the fact that a semi-infinite interval cannot have translational symmetry—more specifically, it does not have unitary translation operators. See below.) Canonical commutation relation By applying the commutator to an arbitrary state in either the position or momentum basis, one can easily show that $[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar\,\hat{\mathbb{I}}$, where $\hat{\mathbb{I}}$ is the unit operator. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables. Fourier transform The following discussion uses the bra–ket notation. One may write $\tilde{\psi}(p) \equiv \langle p|\psi\rangle = \frac{1}{\sqrt{2\pi\hbar}}\int e^{-ipx/\hbar}\,\psi(x)\,\mathrm{d}x$, so the tilde represents the Fourier transform, in converting from coordinate space to momentum space. It then holds that $\langle x|\hat{p}|\psi\rangle = -i\hbar\,\frac{\partial}{\partial x}\psi(x)$, that is, the momentum acting in coordinate space corresponds to spatial frequency. An analogous result applies for the position operator in the momentum basis, $\langle p|\hat{x}|\psi\rangle = i\hbar\,\frac{\partial}{\partial p}\tilde{\psi}(p)$, leading to further useful relations such as $\langle x|\hat{p}|x'\rangle = -i\hbar\,\frac{\partial}{\partial x}\,\delta(x - x')$, where $\delta$ stands for Dirac's delta function. Derivation from infinitesimal translations The translation operator is denoted $T(\varepsilon)$, where $\varepsilon$ represents the length of the translation. It satisfies the following identity: $T(\varepsilon)|\psi\rangle = \int \mathrm{d}x\,|x\rangle\,\psi(x - \varepsilon)$, so the wave function becomes $\psi(x - \varepsilon)$. Assuming the function $\psi$ to be analytic (i.e. differentiable in some domain of the complex plane), one may expand in a Taylor series about $x$: $\psi(x - \varepsilon) = \psi(x) - \varepsilon\,\frac{\mathrm{d}\psi}{\mathrm{d}x} + \cdots$, so for infinitesimal values of $\varepsilon$: $T(\varepsilon) = 1 - \varepsilon\,\frac{\mathrm{d}}{\mathrm{d}x} = 1 - \frac{i}{\hbar}\,\varepsilon\left(-i\hbar\,\frac{\mathrm{d}}{\mathrm{d}x}\right)$. As it is known from classical mechanics, the momentum is the generator of translation, so the relation between translation and momentum operators is $T(\varepsilon) = 1 - \frac{i}{\hbar}\,\varepsilon\,\hat{p}$, thus $\hat{p} = -i\hbar\,\frac{\mathrm{d}}{\mathrm{d}x}$. 4-momentum operator Inserting the 3d momentum operator above and the energy operator $\hat{E} = i\hbar\,\frac{\partial}{\partial t}$ into the 4-momentum (as a 1-form with $(+,-,-,-)$ metric signature), $P_\mu = \left(\frac{E}{c}, -\mathbf{p}\right)$, obtains the 4-momentum operator $\hat{P}_\mu = i\hbar\,\partial_\mu$, where $\partial_\mu$ is the 4-gradient, and the $-i\hbar$ becomes $+i\hbar$ preceding the 3-momentum operator. This operator occurs in relativistic quantum field theory, such as the Dirac equation and other relativistic wave equations, since energy and momentum combine into the 4-momentum vector above, momentum and energy operators correspond to space and time derivatives, and they need to be first-order partial derivatives for Lorentz covariance. The Dirac operator and Dirac slash of the 4-momentum is given by contracting with the gamma matrices: $\gamma^\mu \hat{P}_\mu = i\hbar\,\gamma^\mu\partial_\mu$. If the signature was $(-,+,+,+)$, the operator would be $\hat{P}_\mu = -i\hbar\,\partial_\mu$ instead.
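A quick symbolic check of the formulas above is possible; the following is a minimal sympy sketch (an illustration added here, not part of the original article) verifying that a de Broglie plane wave is an eigenfunction of $-i\hbar\,\partial/\partial x$ with eigenvalue $p$, and that $[\hat{x},\hat{p}] = i\hbar$ on an arbitrary differentiable test function.

```python
# Illustrative sympy sketch (not from the article): verify two results quoted
# above -- the plane wave exp(i(p*x - E*t)/hbar) is an eigenfunction of
# -i*hbar*d/dx with eigenvalue p, and [x, p] f = i*hbar*f for a test function f.
import sympy as sp

x, t, p, E, hbar = sp.symbols('x t p E hbar', real=True, positive=True)
f = sp.Function('f')(x)

def p_op(psi):
    """Momentum operator in the position representation: -i*hbar * d/dx."""
    return -sp.I * hbar * sp.diff(psi, x)

psi = sp.exp(sp.I * (p * x - E * t) / hbar)  # de Broglie plane wave
print(sp.simplify(p_op(psi) / psi))          # -> p, the momentum eigenvalue

commutator = x * p_op(f) - p_op(x * f)       # ([x, p])(f)
print(sp.simplify(commutator))               # -> I*hbar*f(x)
```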
Physical sciences
Quantum mechanics
Physics
2289967
https://en.wikipedia.org/wiki/Comedo
Comedo
A comedo (plural comedones) is a clogged hair follicle (pore) in the skin. Keratin (skin debris) combines with oil to block the follicle. A comedo can be open (blackhead) or closed by skin (whitehead), and can occur with or without acne. The word comedo comes from Latin comedere 'to eat up' and was historically used to describe parasitic worms; in modern medical terminology, it is used to suggest the worm-like appearance of the expressed material. The chronic inflammatory condition that usually includes comedones, inflamed papules, and pustules (pimples) is called acne. Infection causes inflammation and the development of pus. Whether a skin condition is classified as acne depends on the number of comedones and the degree of infection. Comedones should not be confused with sebaceous filaments. Comedo-type ductal carcinoma in situ (DCIS) is not related to the skin conditions discussed here. DCIS is a noninvasive form of breast cancer, but comedo-type DCIS may be more aggressive, and so may be more likely to become invasive. Causes Oil production in the sebaceous glands increases during puberty, causing comedones and acne to be common in adolescents. Acne is also found premenstrually and in women with polycystic ovarian syndrome. Smoking may worsen acne. Oxidation, rather than poor hygiene or dirt, causes blackheads to be black. Washing or scrubbing the skin too much could make it worse, by irritating the skin. Touching and picking at comedones might cause irritation and spread infection. What effect shaving has on the development of comedones or acne is unclear. Some skin products might increase comedones by blocking pores, and greasy hair products (such as pomades) can worsen acne. Skin products that claim to not clog pores may be labeled noncomedogenic or nonacnegenic. Make-up and skin products that are oil-free and water-based may be less likely to cause acne. Whether dietary factors or sun exposure make comedones better, worse, or neither is unknown. A hair that does not emerge normally, an ingrown hair, can also block the pore and cause a bulge or lead to infection (causing inflammation and pus). Genes may play a role in the chances of developing acne. Comedones may be more common in some ethnic groups. People of Latino and recent African descent may experience more inflammation in comedones, more comedonal acne, and earlier onset of inflammation. Pathophysiology Comedones are associated with the pilosebaceous unit, which includes a hair follicle and sebaceous gland. These units are mostly on the face, neck, upper chest, shoulders, and back. Excess keratin combined with sebum can plug the opening of the follicle. This small plug is called a microcomedo. Androgens increase sebum (oil) production. If sebum continues to build up behind the plug, it can enlarge and form a visible comedo. A comedo may be open to the air ("blackhead") or closed by skin ("whitehead"). Being open to the air causes oxidation of the melanin pigment, which turns it black. Cutibacterium acnes is the suspected infectious agent in acne. It can proliferate in sebum and cause the inflamed pustules (pimples) characteristic of acne. Nodules are inflamed, painful, deep bumps under the skin. Comedones that are 1 mm or larger are called macrocomedones. They are closed comedones and are more frequent on the face than the neck. Solar comedones (sometimes called senile comedones), usually found on the cheeks, are related to many years of exposure to the sun rather than to acne-related pathophysiology. 
Management Using nonoily cleansers and mild soap may not cause as much irritation to the skin as regular soap. Blackheads can be removed across an area with commercially available pore-cleansing strips (which can still damage the skin by leaving the pores wide open and ripping excess skin) or the more aggressive cyanoacrylate method used by dermatologists. Squeezing blackheads and whiteheads can remove them, but can also damage the skin. Doing so increases the risk of causing or transmitting infection and scarring, as well as potentially pushing any infection deeper into the skin. Comedo extractors are used with careful hygiene in beauty salons and by dermatologists, usually after using steam or warm water. Complementary medicine options for acne in general have not been shown to be effective in trials. These include aloe vera, pyridoxine (vitamin B6), fruit-derived acids, kampo (Japanese herbal medicine), ayurvedic herbal treatments, and acupuncture. Some acne treatments target infection specifically, but some treatments are aimed at the formation of comedones, as well. Others remove the dead layers of the skin and may help clear blocked pores. Dermatologists can often extract open comedones with minimal skin trauma, but closed comedones are more difficult. Laser treatment for acne might reduce comedones, but dermabrasion and laser therapy have also been known to cause scarring. Macrocomedones (1 mm or larger) can be removed by a dermatologist using surgical instruments or cauterized with a device that uses light. The acne drug isotretinoin can cause severe flare-ups of macrocomedones, so dermatologists recommend removal before starting the drug and during treatment. Some research suggests that the common acne medications retinoids and azelaic acid are beneficial and do not cause increased pigmentation of the skin. If using a retinoid, sunscreen is recommended. Rare conditions Favre–Racouchot syndrome occurs in sun-damaged skin and includes open and closed comedones. Nevus comedonicus or comedo nevus is a benign hamartoma (birthmark) of the pilosebaceous unit around the oil-producing gland in the skin. It has widened open hair follicles with dark keratin plugs that resemble comedones, but they are not actually comedones. Dowling–Degos disease is a genetic pigment disorder that includes comedo-like lesions and scars. Familial dyskeratotic comedones are a rare autosomal-dominant genetic condition, with keratotic (tough) papules and comedo-like lesions.
Biology and health sciences
Symptoms and signs
Health
2290687
https://en.wikipedia.org/wiki/Red-bellied%20black%20snake
Red-bellied black snake
The red-bellied black snake (Pseudechis porphyriacus) is a species of venomous snake in the family Elapidae, indigenous to Australia. Originally described by George Shaw in 1794 as a species new to science, it is one of eastern Australia's most commonly encountered snakes. Averaging around in length, it has glossy black upperparts, bright red or orange flanks, and a pink or dull red belly. It is not aggressive and generally retreats from human encounters, but will defend itself if provoked. Although its venom can cause significant illness, no deaths have been recorded from its bite; it is less venomous than many other Australian elapid snakes. The venom contains neurotoxins, myotoxins, and coagulants and has haemolytic properties. Victims can also lose their sense of smell. Common in woodlands, forests, swamplands, and along river banks and waterways, the red-bellied black snake often ventures into nearby urban areas. It forages in bodies of shallow water, commonly with tangles of water plants and logs, where it hunts its main prey item, frogs, as well as fish, reptiles, and small mammals. The snake is a least-concern species according to the IUCN, but its numbers are thought to be declining due to habitat fragmentation and the decline of frog populations. Taxonomy The red-bellied black snake was first described and named by English naturalist George Shaw in Zoology of New Holland (1794) as Coluber porphyriacus. Incorrectly assuming it was harmless and not venomous, he wrote, "This beautiful snake, which appears to be unprovided with tubular teeth or fangs, and consequently not of a venomous nature, is three, sometimes four, feet in length." The species name is derived from the Greek porphyrous, which can mean "dark purple", "red-purple" or "beauteous". It was the first Australian elapid snake described. The syntype is presumed lost. French naturalist Bernard Germain de Lacépède described it under the name Trimeresurus leptocephalus in 1804. His countryman René Lesson described it as Acanthophis tortor in 1826. German biologist Hermann Schlegel felt it was allied with cobras and called it Naja porphyrica in 1837. The genus Pseudechis was created for this species by German biologist Johann Georg Wagler in 1830; several more species have been added to the genus subsequently. The name is derived from the Greek words pseudēs "false" and echis "viper". Snake expert Eric Worrell, in 1961, analysed the skulls of the genus and found that of the red-bellied black snake to be the most divergent. Its position as an early offshoot from the rest of the genus was confirmed genetically in 2017. In addition to red-bellied black snake, the species has been called common black snake, redbelly, and RBBS. It was known as djirrabidi to the Eora and Darug inhabitants of the Sydney basin. Description The red-bellied black snake has a glossy black top body with a light-grey snout and brown mouth, and a completely black tail. It lacks a well-defined neck; its head merges seamlessly into the body. Its flanks are bright red or orange, fading to pink or dull red on the belly. All these scales have black margins. Snakes from northern populations tend to have lighter, more cream or pink bellies. The red-bellied black snake is on average around long, the largest individual recorded at . Males are generally slightly larger than females. A large specimen caught in Newcastle has been estimated to weigh around . 
The red-bellied black snake can have a strong smell, which some field experts have used to find the snakes in the wild. Like all elapid snakes, it is proteroglyphous (front-fanged). Juveniles are similar to the eastern small-eyed snake (Cryptophis nigrescens), with which they can be easily confused, although the latter species lacks the red flanks. Other similar species include the blue-bellied black snake (Pseudechis guttatus) and copperheads of the genus Austrelaps. An early misconception was that the red-bellied black snake was sexually dimorphic, and that the eastern brown snake (Pseudonaja textilis) was the female form. This error was recognised as such by Australian zoologist Gerard Krefft in his 1869 work Snakes of Australia. Scalation The number and arrangement of scales on a snake's body are a key element of identification to species level. The red-bellied black snake has 17 rows of dorsal scales at midbody, 180 to 215 ventral scales, 48 to 60 subcaudal scales (the anterior—and sometimes all—subcaudals are undivided), and a divided anal scale. There are two anterior and two posterior temporal scales, and the rostral shield is roughly square-shaped. Distribution and habitat The red-bellied black snake is native to the east coast of Australia, where it is one of the most commonly encountered snakes. It can be found in urban forest, woodland, plains, and bushland areas of the Blue Mountains, Canberra, Sydney, Brisbane, Melbourne, Cairns, and Adelaide. The Macquarie Marshes mark a western border to its distribution in New South Wales, and Gladstone in central Queensland marks the northern limit of the main population. To the south, it occurs across eastern and central Victoria, and extends along the Murray River into South Australia. Disjunct populations occur in the southern Mount Lofty Ranges in South Australia and in North Queensland. The red-bellied black snake is most commonly seen close to dams, streams, billabongs, and other bodies of water, although they can venture up to away, including into nearby backyards. In particular, the red-bellied black snake prefers areas of shallow water with tangles of water plants, logs, or debris. Behaviour Red-bellied black snakes can hide in many places in their habitat, including logs, old mammal burrows, and grass tussocks. They can flee into water and hide there; one was reported as staying submerged for 23 minutes. When swimming, they may hold their full head or just the nostrils above the water's surface. At times, they may float motionless on the water surface, resembling a stick. Within their habitat, red-bellied black snakes appear to have ranges or territories with which they are familiar and within which they generally remain. A 1987 field study in three New South Wales localities found that these areas vary widely, from in size. Within their territory, they may have some preferred places to reside. The red-bellied black snake is generally not an aggressive species, typically withdrawing when approached. If provoked, it recoils into a striking stance as a threat, holding its head and the front part of its body horizontally above the ground and widening and flattening its neck. It may bite as a last resort. It is generally active by day, though nighttime activity has occasionally been recorded. When not hunting or basking, it may be found beneath timber, rocks, and rubbish or down holes and burrows. Snakes are active when their body temperatures are between . 
They also thermoregulate by basking in warm, sunny spots in the cool early morning and resting in shade in the middle of hot days, and they may reduce their activity in hot, dry weather in late summer and autumn. Rather than entering true hibernation, red-bellied black snakes become relatively inactive over winter, retreating to cover and at times emerging on warm, sunny days. Their dark colour allows them to absorb heat from sunshine more quickly. In July 1949, six large individuals were found hibernating under a concrete slab in marshland in Woy Woy, New South Wales. Groups of up to six hibernating red-bellied black snakes have been recorded from under concrete slabs around Mount Druitt and Rooty Hill in western Sydney. Males are more active in the Southern Hemisphere spring (early October to November) as they roam looking for mates; one reportedly travelled in a day. In summer, both sexes are generally less active. Reproduction In spring, male red-bellied black snakes often engage in ritualised combat for 2 to 30 minutes, even attacking other males already mating with females. They wrestle vigorously, but rarely bite, and engage in head-pushing contests, where each snake tries to push his opponent's head downward with his chin. The male seeks out a female and rubs his chin on her body, and may twitch, hiss, and rarely bite as he becomes aroused. The female indicates readiness to mate by straightening out and allowing the pair's bodies to align. Pregnancy takes place any time from early spring to late summer. Females become much less active and band together in small groups in late pregnancy; they share the same retreat and bask in the sun together. The red-bellied black snake is ovoviviparous; that is, it gives birth to live young in individual membranous sacs, after 14 weeks' gestation, usually in February or March. The young, numbering between eight and 40, emerge from their sacs very shortly after birth, and have an average length around . Young snakes almost triple their length and increase their weight 18-fold in their first year of life, and are sexually mature when they reach a snout–vent length (SVL) of for males or for females. Females can breed at around 31 months of age, while males can breed slightly earlier. Red-bellied black snakes can live up to 25 years. Feeding The diet of red-bellied black snakes primarily consists of frogs, but they also prey on reptiles and small mammals. They also eat other snakes, commonly eastern brown snakes and even their own species. Fish are hunted in water. Red-bellied black snakes may hunt on or under the water surface, and prey can be eaten underwater or brought to the surface. They have been recorded stirring up substrate, possibly to disturb prey. As red-bellied black snakes grow and mature, they continue to eat prey of the same sizes but add larger animals as well. Although they prefer live food, red-bellied black snakes have been reported eating frogs squashed by cars. They are susceptible to cane toad (Rhinella marina) toxins. The introduction of cane toads in Australia dates to 1935, when they were introduced in an attempt at biological control of native beetles, which were damaging sugarcane fields (a non-native plant). The intervention failed, mostly because the toads live on the ground, while the beetles feed on leaves at the top of the plant. 
One research study concluded that in less than 75 years, the red-bellied black snake had evolved in toad-inhabited regions of Australia to have increased resistance to toad toxin and decreased preference for toads as prey. Venom Early settlers feared the red-bellied black snake, though it turned out to be much less dangerous than many other species. The murine median lethal dose (LD50) is 2.52 mg/kg when administered subcutaneously. A red-bellied black snake yields an average of 37 mg of venom when milked, with the maximum recorded being 94 mg. It accounted for 16% of identified snakebite victims in Australia between 2005 and 2015, with no deaths recorded. Its venom contains neurotoxins, myotoxins, and coagulants and also has haemolytic properties. Bites from red-bellied black snakes can be very painful—needing analgesia—and result in local swelling, prolonged bleeding, and even local necrosis, particularly if the bite is on a finger. Severe local reactions may require surgical debridement or even amputation. Symptoms of systemic envenomation—including nausea, vomiting, headache, abdominal pain, diarrhoea, or excessive sweating—were thought to be rare, but a 2010 review found they occurred in most bite victims. Most people also go on to develop an anticoagulant coagulopathy in a few hours. This is characterised by a raised activated partial thromboplastin time (aPTT), and subsides over 24 hours. It resolves quickly with antivenom. A few people go on to develop a myotoxicity and associated generalised muscle pain and occasionally weakness, which may last up to 7 days. Patients may suffer a loss of sense of smell (anosmia); this is unrelated to the severity of the envenoming and can be temporary or permanent. Although the venom contains the three-finger toxin α-elapitoxin-Ppr1, which acts as a neurotoxin in laboratory experiments, neurotoxic symptoms are generally absent in clinical cases. A biologically active agent—pseudexin—was isolated from red-bellied black snake venom in 1981. Making up 25% of the venom, it is a single polypeptide chain with a molecular weight around 16.5 kilodaltons. In 1989, it was found to be composed of three phospholipase A2 isoenzymes. If antivenom is indicated, red-bellied black snake bites are generally treated with tiger snake antivenom. While black snake antivenom can be used, tiger snake antivenom can be used at a lower volume and is a cheaper treatment. It is the most commonly reported species responsible for envenomed dogs in New South Wales. In 2006, a 12-year-old golden retriever suffered rhabdomyolysis and acute kidney injury secondary to a red-bellied black snake bite. Laboratory testing has found that cats are relatively resistant to the venom, with a lethal dose as high as 7 mg/kg. Conservation and threats The red-bellied black snake is considered to be a least-concern species according to the International Union for Conservation of Nature. Its preferred habitat has been particularly vulnerable to urban development and is highly fragmented, and a widespread decline in frogs, which are its preferred prey, has occurred. Snake numbers appear to have declined. Feral cats are known to prey on red-bellied black snakes, while young snakes presumably are taken by laughing kookaburras (Dacelo novaeguineae), brown falcons (Falco berigora), and other raptors. 
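To put the venom figures quoted above in perspective, a back-of-the-envelope calculation shows how many murine median lethal doses an average milking contains; the 20 g mouse mass is an assumed illustrative value, while the LD50 and yield are the figures given above.

```python
# Back-of-the-envelope sketch using the figures quoted above; the laboratory
# mouse mass is an assumed illustrative value, not from the article.
ld50_mg_per_kg = 2.52    # murine subcutaneous LD50 quoted above
avg_yield_mg = 37.0      # average milked venom yield quoted above
mouse_mass_kg = 0.020    # assumed typical laboratory mouse mass

lethal_dose_mg = ld50_mg_per_kg * mouse_mass_kg   # ~0.05 mg per mouse
print(avg_yield_mg / lethal_dose_mg)              # ~734 median lethal doses
```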
Captivity One of the snakes commonly kept as pets in Australia, the red-bellied black snake adapts readily to captivity and lives on a supply of mice, though it can also survive on fish fillets, chicken, and dog food.
Biology and health sciences
Snakes
Animals
5713772
https://en.wikipedia.org/wiki/Proteaceae
Proteaceae
The Proteaceae form a family of flowering plants predominantly distributed in the Southern Hemisphere. The family comprises 83 genera with about 1,660 known species. Australia and South Africa have the greatest concentrations of diversity. Together with the Platanaceae (plane trees), Nelumbonaceae (the sacred lotus) and, in the recent APG IV system, the Sabiaceae, they make up the order Proteales. Well-known Proteaceae genera include Protea, Banksia, Embothrium, Grevillea, Hakea, and Macadamia. Species such as the New South Wales waratah (Telopea speciosissima), king protea (Protea cynaroides), and various species of Banksia, Grevillea, and Leucadendron are popular cut flowers. The nuts of Macadamia integrifolia are widely grown commercially and consumed, as are those of Gevuina avellana on a smaller scale. Etymology The name Proteaceae was adapted by Robert Brown from the name Proteae, coined for the family in 1789 by Antoine Laurent de Jussieu based on the genus Protea, which Carl Linnaeus derived in 1767 from the name of the Greek god Proteus, a deity who was able to change between many forms. This is an apt image, as the family is known for its astonishing variety and diversity of flowers and leaves. Description The genera of Proteaceae are highly varied, with Banksia in particular providing a striking example of adaptive radiation in plants. This variability makes it impossible to provide a simple, diagnostic identification key for the family, although individual genera may be easily identified. Proteaceae range from prostrate shrubs to tall forest trees of 40 m in height, and are usually medium-sized or low perennial shrubs, except for some Stirlingia species, which are herbs. Some species are facultatively deciduous (Embothrium coccineum), rarely acaulescent; the cauline portion of the collar is often thickened (lignotuber). Indumentum of three-celled hairs, sometimes glandular, rarely absent; the apical cell is usually elongated, acute, sometimes equally or unequally bifid. Leaves rarely aromatic, usually alternate and in a spiral, rarely opposite or verticillate; coriaceous, rarely fleshy or spinescent, simple or compound (imparipinnate, imparibipinnate or rarely palmate or digitate with pinnatisect segments), entire edge to (3-)pinnatisect (giving a fern-like aspect); rarely divided dichotomously, often remotely toothed, crenate or serrated, sessile or stalked; the petiole frequently with a swollen base but rarely sheathed (sometimes in Synaphea), without stipules; venation pinnate, sometimes palmate or parallel, brochidodromous or reduced to a single prominent vein, vernation normally conduplicate; anisophylly often occurs during the different growth periods; leaf blade dorsiventral, isobilateral or centred; mesophyll tissue usually with sclerenchymatous idioblasts, secretory cavities rare. Brachy-paracytic stomata (laterocytic in Bellendena). Plant stems with two types of radii, wide and multi-serrated or narrow and uni-serrated, phloem stratified or not, trilacunar nodes with three leaf traces (rarely unilacunar with one trace), sclereids frequent; bark with lenticels frequently horizontally enlarged, cork cambium present, usually superficial. Roots lateral and short, often grouped in bundles (proteoid roots) with very dense root hairs, rarely with mycorrhiza. Plants usually hermaphroditic, more rarely monoecious, dioecious or andromonoecious. 
Inflorescences very variable, simple or compound, axillary or terminal, lateral flowers solitary or in pairs, rarely with a terminal flower, racemiform, paniculate or condensed, usually with bracts, sometimes converted into leaves or squamiform, forming a type of cone, or with bright colours, forming an involucre or pseudanthium, the peduncles and pedicels sometimes contracted, compacted with the rachis; in some cases the congested inflorescences form super-inflorescences (some Alloxylon); very rarely the flowers are solitary and axillary near the end of branches; in species with lignotubers the flowers sometimes grow from these and pass through the soil (geophytes). Flowers usually perfect, actinomorphic or zygomorphic, hypogynous, frequently large and showy. Receptacle flat or oblique, sometimes forming a gynophore. Hypogynous disk present and extrastaminal, or absent. Perianth of (3-)4(−8) tepals (sometimes interpreted as a dimerous and dichlamydeous perianth), in 1(−2) valvate whorls, sometimes elongated in a basal sack, free or fused in different ways (all fused, or even one free and three basally to completely fused), or even connivent by marginally interdigitate papillae forming a tube or a bilabiate structure, zygomorphic, sometimes opening laterally in a variety of ways. Androecium haplostemonous, usually isostemonous, opposititepalous, of (3-)4(−5) stamens, all fertile or some converted into staminodes, usually filamentous, filaments partially or totally fused to the tepals, rarely free; anthers basifixed, adnate, dithecal, tetrasporangiate, sometimes unilocular and bisporangiate, introrse to (rarely) latrorse, with expanded connective, usually with apiculus, dehiscence along longitudinal tears. Hypogynous glands (0-)1–4, squamiform or elongated, fleshy, free or fused, forming a lunate or annular nectary over the receptacle. Gynoecium superior, of 1(−2) apocarpous carpels, sessile or stipitate (with a more or less elongated gynophore), sometimes not completely closed; style usually developed; stigma small or in the shape of a terminal or subterminal disk, or even lateral and oblique, often indented, papillose, moist or dry; ovules 1–100 or more per carpel, anatropous, hemianatropous, amphitropous or orthotropous, mostly hemitropous, bitegmic, crassinucellate, chalaza with a ring of vascular bundles; the funiculus is occasionally absent and the ovule is fused to the placenta; placentation marginal, with various dispositions, or apical. Fruit dehiscent or indehiscent: an achene or nucule, follicle, drupe (with lignified endocarp) or falsely drupal (with lignified internal mesocarp), sometimes similar to a caryopsis as it is fused to the wall of the ovary and the testa, often lignified and serotinous; the fruit from the same inflorescence are sometimes fused, forming a syncarp. Seeds 1-many, sometimes winged, flat to rounded, with endosperm absent (present in Bellendena), endotesta with an unusual layer containing crystals of calcium oxalate that is rarely absent, embryo well differentiated, straight, dicotyledonous, but often with 3 or more (up to 9) large cotyledons, often auriculate. Pollen in monads, triangular in polar view, (2-)3(−8)-aperturate, usually isopolar and triporate, biporate in Embothrium and the tribe Banksieae, colpoidate in Beauprea, spherical in Aulax and Franklandia or strongly anisopolar in some species of Persoonia; the openings of the former's tetrads follow Garside's Law. 
Chromosomal number: n=5, 7, 10–14, 26, 28; chromosome sizes range from very small (average of 1.0 μm) to very large (average of 14.4 μm) according to species; x=7, 12. Flowers Generally speaking, the diagnostic feature of Proteaceae is the compound pseudanthium. In many genera, the most obvious feature is the large and often very showy inflorescences, consisting of many small flowers densely packed into a compact head or spike. This character does not occur in all Proteaceae, however; Adenanthos species, for example, have solitary flowers. In most Proteaceae species, the pollination mechanism is highly specialised. It usually involves the use of a "pollen-presenter", an area on the style-end that presents the pollen to the pollinator. Proteaceae flower parts occur in fours. The four tepals are fused into a long, narrow tube with a closed cup at the top, and the filaments of the four stamens are fused to the tepals in such a way that the anthers are enclosed within the cup. The pistil initially passes along the inside of the perianth tube, so the stigma, too, is enclosed within the cup. As the flower develops, the pistil grows rapidly. Since the stigma is trapped, the style must bend to elongate, and eventually it bends so far that the perianth is split along one seam. The style continues to grow until anthesis, when the nectaries begin to produce nectar. At this time, the perianth splits into its component tepals, the cup splits apart, and the pistil is released to spring more or less upright. Ecology Many of the Proteaceae have specialised proteoid roots, masses of lateral roots and hairs forming a radial absorptive surface, produced in the leaf litter layer during seasonal growth, and usually shrivelling at the end of the growth season. They are an adaptation to growth in poor, phosphorus-deficient soils, greatly increasing the plants' access to scarce water and nutrients by exuding carboxylates that mobilise previously unavailable phosphorus. They also increase the root's absorption surface, but this is a minor feature, as it also increases competition for nutrients against the plant's own root clusters. However, this adaptation leaves them highly vulnerable to dieback caused by the Phytophthora cinnamomi water mould, and generally intolerant of fertilization. Due to these specialized proteoid roots, the Proteaceae are one of the few flowering plant families that do not form symbioses with arbuscular mycorrhizal fungi. They exude large amounts of organic acids (citric acid and malic acid) every 2–3 days in order to aid the mobilization and absorption of phosphate. Many species are fire-adapted (pyrophytes), meaning they have strategies for surviving fires that sweep through their habitat. Some are resprouters, and have a thick rootstock buried in the ground that shoots up new stems after a fire; others are reseeders, meaning the adult plants are killed by the fire, but disperse their seeds, which are stimulated by the smoke to take root and grow. The heat was previously thought to provide this stimulus, but chemicals in the smoke have since been shown to be the cause. There are four dioecious genera (Aulax, Dilobeia, Heliciopsis and Leucadendron) and 11 andromonoecious genera, and some other genera have species that are cryptically andromonoecious; two species are sterile and only reproduce vegetatively (Lomatia tasmanica, Hakea pulvinifera). Species vary between being self-compatible and self-incompatible, with intermediate situations; these situations sometimes occur within the same species. 
The flowers are usually protandrous. Just before anthesis, the anthers release their pollen, depositing it onto the stigma, which in many cases has an enlarged fleshy area specifically for the deposition of its own pollen. Nectar-feeders are unlikely to come into contact with the anthers themselves, but can hardly avoid contacting the stigma; thus, the stigma functions as a pollen-presenter, ensuring the nectar-feeders act as pollinators. The downside of this pollination strategy is that the probability of self-fertilisation is greatly increased; many Proteaceae counter this with strategies such as protandry, self-incompatibility, or preferential abortion of selfed seed. The systems for presenting pollen are highly diverse, corresponding to the diversification of the pollinators. Pollination is carried out by bees, beetles, flies, moths, birds (honeyeaters, sunbirds, sugarbirds and hummingbirds) and mammals (rodents, small marsupials, elephant shrews and bats). The latter two means were evolutionarily derived from entomophily in separate, independent events. Seed dispersal in some species exhibits serotiny, which is associated with their pyrophytic behaviour. These trees accumulate fruits on their branches whose outer layers or protective structures (bracts) are highly lignified and resistant to fire. The fruit only release their seeds when they have been burnt and when the ground has been fertilized with ashes from the fire and is free from competitors. Many species have seeds with elaiosomes that are dispersed by ants; seeds with wings or thistledown exhibit anemochory, while the drupes and other fleshy fruit exhibit endozoochory, as mammals and birds ingest them. Some African and Australian rodents are known to accumulate the fruit and seeds of these plants in their nests in order to feed on them, although some of the seeds manage to germinate. Distribution Proteaceae are mainly a Southern Hemisphere family, with its main centres of diversity in Australia and South Africa. It also occurs in Central Africa, South and Central America, India, eastern and south-eastern Asia, and Oceania. Only two species are known from New Zealand, although fossil pollen evidence suggests there were more previously. It is a good example of a Gondwanan family, with taxa occurring on virtually every land mass considered a remnant of the ancient supercontinent Gondwana, except Antarctica. The family and subfamilies are thought to have diversified well before the fragmentation of Gondwana, implying all of them are well over 90 million years old. Evidence for this includes an abundance of proteaceous pollen found in the Cretaceous coal deposits of the South Island of New Zealand. It is thought to have achieved its present distribution largely by continental drift rather than dispersal across ocean gaps. Phytochemistry No conclusive studies have been carried out on the chemical substances present in this broad family. The genera Protea and Faurea are unusual in that they use xylose as the main sugar in their nectar and have high concentrations of polygalactol, while sucrose is the main sugar present in Grevillea. Cyanogenic glycosides, derived from tyrosine, are often present, as are proanthocyanidins (delphinidin and cyanidin), flavonols (kaempferol, quercetin and myricetin) and arbutin. Alkaloids are usually absent. Iridoids and ellagic acid are also absent. Saponins and sapogenins can be either present or absent in different species. Many species accumulate aluminium. 
Uses and cultivation Many traditional cultures have used Proteaceae as food and medicine, for curing animal hides, and as a source of dyes, firewood and construction timber. Aboriginal Australians eat the fruit of Persoonia, and the seeds of species from other genera, including Gevuina and Macadamia, form part of the diet of indigenous peoples but are also sold throughout the world. The tender shoots of Helicia species are eaten in Java, and the nectar from the inflorescences of a number of species is drunk in Australia. Traditional medicines can be obtained from infusions of the roots, bark, leaves, or flowers of many species; these are used as topical applications for skin conditions or internally as tonics, aphrodisiacs, and galactagogues, and to treat headaches, cough, dysentery, diarrhea, indigestion, stomach ulcers, and kidney disease. The wood from the trees of this family is widely used in construction and for internal uses such as decoration; the wood from species of Protea, Leucadendron and Grevillea is especially popular. Many species are used in gardening, particularly the genera Banksia, Embothrium, Grevillea, and Telopea. This use has resulted in the introduction of exotic species that have become invasive; examples include the hakea willow (Hakea salicifolia) and the silky hakea (Hakea sericea) in Portugal. Two species of Macadamia are cultivated commercially for their edible nuts. Gevuina avellana (Chilean hazel) is also cultivated for its edible nuts, in Chile and New Zealand; the nuts are also used in the pharmaceutical industry for their humectant properties and as an ingredient in sunscreens. It is the most cold-resistant of nut-producing trees. It is also planted in the British Isles and on the Pacific coast of the United States for its tropical appearance and its ability to grow in cooler climates. Many Proteaceae species are cultivated by the nursery industry as barrier plants and for their prominent and distinctive flowers and foliage. Some species are of importance to the cut flower industry, especially some Banksia and Protea species. Sugarbushes (Protea), pincushions (Leucospermum) and conebushes (Leucadendron), as well as others like pagodas (Mimetes), Aulax and blushing brides (Serruria), comprise one of the three main plant groups of fynbos, which forms part of the Cape Floral Kingdom, the smallest but richest plant kingdom for its size and the only one contained within a single country. The other main groups of plants in fynbos are the Ericaceae and the Restionaceae. South African proteas are thus widely cultivated due to their many varied forms and unusual flowers. They are popular in South Africa for their beauty and their usefulness in wildlife gardens for attracting birds and useful insects. The species most valued as ornamentals are the trees that grow in southern latitudes, as they give landscapes in temperate climates a tropical appearance; Lomatia ferruginea (fuinque) and Lomatia hirsuta (radal) have been introduced to Western Europe and the western United States. Embothrium coccineum (Chilean firetree or notro) is highly valued in the British Isles for its dark red flowers and can be found as far north as the Faroe Islands, at a latitude of 62° north. Among the banksias, many of which grow in temperate and Mediterranean climates, the vast majority are shrubs; only a few are trees that are valued for their height. Among the tallest species are B. integrifolia with its subspecies B. integrifolia subsp. 
monticola, which is noteworthy as the tallest of the banksias and more frost-resistant than other banksias, B. seminuda, B. littoralis, and B. serrata; among those that can be considered small trees or large shrubs are B. grandis, B. prionotes, B. marginata, B. coccinea and B. speciosa; all of these are planted in parks and gardens, and even along roadsides, because of their size. The rest of the species of this genus, around 170 species, are shrubs, although some of them are valued for their flowers. Another species that is cultivated in some parts of the world, although it is smaller, is Telopea speciosissima (waratah), from the mountains of New South Wales, Australia. Some temperate-climate species are cultivated more locally in Australia for their attractive appearance: Persoonia pinifolia (pine-leaved geebung) is valued for its vivid yellow flowers and grape-like fruit. Adenanthos sericeus (woolly bush) is planted for its attractive soft leaves and its small red or orange flowers. Hicksbeachia pinnatifolia (beef nut, red bauple nut) is commonly planted for its foliage and edible nuts. Parasites The Proteaceae are particularly susceptible to certain parasites, in particular the oomycete Phytophthora cinnamomi, which causes severe root rot in plants growing in Mediterranean climates. Fusarium oxysporum causes a disease called fusariosis in the roots that leads to yellowing and wilting, with serious ecological damage to woodland plants and economic losses in plants of commercial interest. Other common infections are caused by species of Botryosphaeria, Rhizoctonia, Armillaria, Botrytis, Calonectria and other fungi. Conservation status The IUCN considers 47 Proteaceae species to be threatened, of which one species, Stenocarpus dumbeensis Guillaumin, 1935, from New Caledonia, is thought to be extinct. The species of this family are particularly susceptible to the destruction or fragmentation of their habitat, fire, parasitic diseases, competition from introduced plants, soil degradation and other damage caused by humans and their domesticated animals. The species are also affected by climate change. Fossils The Proteaceae have a rich fossil record, despite the inherent difficulties in identifying remains that do not show diagnostic characteristics. Identification usually comes from using a combination of brachy-paracytic stomata and the unusual trichome bases or, in other cases, the unusual structure of the pollen tetrads. Xylocaryon was identified as a member of the Proteaceae from the similarity of its fruit to the extant genus Eidothea. Fossils attributable to this family have been found in the majority of the areas that formed the Gondwana supercontinent. A wide variety of pollen belonging to this family, dating back to the Upper Cretaceous (Campanian–Maastrichtian), has been found in south-east Australia, and pollen from the Middle Cretaceous (Cenomanian–Turonian) of northern Africa and Peru has been described as Triorites africaensis. The first macrofossils appear twenty million years later, in the Palaeocene of South America and north-east Australia. The fossil record of some areas, such as New Zealand and Tasmania, shows a greater biodiversity of Proteaceae than currently exists, which indicates that the distribution of many taxa has changed drastically over time and that the family has suffered a general decline, including high levels of extinction during the Cenozoic. 
Taxonomy First described by French botanist Antoine Laurent de Jussieu, the family Proteaceae is a fairly large one, with around 80 genera but fewer than 2,000 species. It is recognised by virtually all taxonomists. Firmly established under classical Linnaean taxonomy, it is also recognised by the cladistics-based APG and APG II systems. It is placed in the order Proteales, whose placement has itself varied. The genera within Proteaceae were classified by Lawrie Johnson and Barbara Briggs in their influential 1975 monograph "On the Proteaceae: the evolution and classification of a southern family"; their arrangement stood until it was largely superseded by the molecular studies of Peter H. Weston and Nigel Barker in 2006. Proteaceae are now divided into five subfamilies: Bellendenoideae, Persoonioideae, Symphionematoideae, Proteoideae and Grevilleoideae. In 2008, Mast and colleagues updated Macadamia and related genera in tribe Macadamieae. Furthermore, Orites megacarpus was found to belong neither to the genus Orites nor to the tribe Roupaleae, but instead to the tribe Macadamieae, and was hence given the new species name Nothorites megacarpus. The full arrangement, according to Weston and Barker (2006) with the updates to genera from Mast et al. (2008), is as follows:
Family Proteaceae
  Subfamily Bellendenoideae: Bellendena
  Subfamily Persoonioideae
    Tribe Placospermeae: Placospermum
    Tribe Persoonieae: Persoonia
  Subfamily Symphionematoideae: Agastachys — Symphionema
  Subfamily Proteoideae
    incertae sedis: Eidothea — Beauprea — Beaupreopsis — Dilobeia — Cenarrhenes — Franklandia
    Tribe Conospermeae
      Subtribe Stirlingiinae: Stirlingia
      Subtribe Conosperminae: Conospermum — Synaphea
    Tribe Petrophileae: Petrophile — Aulax
    Tribe Proteeae: Protea — Faurea
    Tribe Leucadendreae
      Subtribe Isopogoninae: Isopogon
      Subtribe Adenanthinae: Adenanthos
      Subtribe Leucadendrinae: Leucadendron — Serruria — Paranomus — Vexatorella — Sorocephalus — Spatalla — Leucospermum — Mimetes — Diastella — Orothamnus
  Subfamily Grevilleoideae
    incertae sedis: Sphalmium — Carnarvonia
    Tribe Roupaleae
      incertae sedis: Megahertzia — Knightia — Eucarpha — Triunia
      Subtribe Roupalinae: Roupala — Neorites — Orites
      Subtribe Lambertiinae: Lambertia — Xylomelum
      Subtribe Heliciinae: Helicia — Hollandaea
      Subtribe Floydiinae: Darlingia — Floydia
    Tribe Banksieae
      Subtribe Musgraveinae: Musgravea — Austromuellera
      Subtribe Banksiinae: Banksia
    Tribe Embothrieae
      Subtribe Lomatiinae: Lomatia
      Subtribe Embothriinae: Embothrium — Oreocallis — Alloxylon — Telopea
      Subtribe Stenocarpinae: Stenocarpus — Strangea
      Subtribe Hakeinae: Opisthiolepis — Buckinghamia — Hakea — Grevillea — Finschia
    Tribe Macadamieae
      Subtribe Macadamiinae: Macadamia — Lasjia — Nothorites — Panopsis — Brabejum
      Subtribe Malagasiinae: Malagasia — Catalepidia
      Subtribe Virotiinae: Virotia — Athertonia — Heliciopsis
      Subtribe Gevuininae: Cardwellia — Euplassa — Gevuina — Bleasdalea — Hicksbeachia — Kermadecia
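The arrangement above is a strict hierarchy, so it maps naturally onto a nested data structure. The following partial sketch (only a few branches are shown; the names are taken from the listing above) illustrates one way to represent it.

```python
# Partial sketch of the Weston & Barker (2006) arrangement as a nested mapping;
# only a few branches are shown -- the full arrangement follows the same shape.
proteaceae = {
    "Bellendenoideae": ["Bellendena"],
    "Persoonioideae": {
        "Placospermeae": ["Placospermum"],
        "Persoonieae": ["Persoonia"],
    },
    "Grevilleoideae": {
        "Banksieae": {
            "Musgraveinae": ["Musgravea", "Austromuellera"],
            "Banksiinae": ["Banksia"],
        },
    },
}

# Example query: every genus in tribe Banksieae.
banksieae = proteaceae["Grevilleoideae"]["Banksieae"]
print([genus for subtribe in banksieae.values() for genus in subtribe])
# -> ['Musgravea', 'Austromuellera', 'Banksia']
```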
Biology and health sciences
Others
null
5715146
https://en.wikipedia.org/wiki/West%20Indian%20Ocean%20coelacanth
West Indian Ocean coelacanth
The West Indian Ocean coelacanth (Latimeria chalumnae) (sometimes known as gombessa, African coelacanth, or simply coelacanth) is a crossopterygian, one of two extant species of coelacanth, a rare order of vertebrates more closely related to lungfish and tetrapods than to the common ray-finned fishes. The other extant species is the Indonesian coelacanth (L. menadoensis). The West Indian Ocean coelacanth was historically known by fishermen around the Comoro Islands (where it is known as gombessa), Madagascar, and Mozambique in the western Indian Ocean, but was first scientifically recognised from a specimen collected in South Africa in 1938. This coelacanth was once thought to be evolutionarily conservative, but discoveries have shown initial morphological diversity. It has a vivid blue pigment, and is the better known of the two extant species. The species has been assessed as critically endangered on the IUCN Red List. Anatomy and physiology The average weight of Latimeria chalumnae is 80 kg (176 lb), and individuals can reach up to 2 m (6.5 ft) in length. Adult females are slightly larger than males. Latimeria chalumnae exhibits a deep royal blue color with spots that serve as camouflage when hunting prey. Related anatomical adaptations include an abundance of visual cells such as rods, which help the fish see when light is limited. This, combined with the West Indian Ocean coelacanth's large eyes, aids vision in dark water. Similar to cartilaginous fish, Latimeria chalumnae has a rectal gland, pituitary gland, pancreas, and spinal cord. To balance osmotic pressure, these fish employ an efficient mechanism of osmoregulation, retaining urea in their blood. Latimeria chalumnae is an ovoviviparous species, which means that the eggs are retained internally until they hatch. The species also has low fecundity due to a long gestation period of around 12 months, though not much is known about its age of sexual maturity. Habitat and behavior L. chalumnae is usually found within a narrow band of depths, though individuals have been recorded both deeper and shallower. L. chalumnae tends to reside in underwater caves, which are most common at these depths. This may limit its maximum depth range, along with lack of prey. The fish are known to spend the daytime within these lava caves, likely for protection from predators, and use the surrounding feeding grounds at night. Coelacanths are opportunistic in their feeding. Some of their known prey species are fish that include Amioides polyacanthus, Beryx splendens, Lucigadus ori and Brotula multibarbata. Their intracranial joint and associated basicranial muscle likely play an important but unresolved role in feeding. Some individuals have been seen performing "headstands" as feeding behavior, allowing the coelacanth to slurp prey from crevices within lava caves. This behavior is made possible by the coelacanth's ability to move both its upper and lower jaw, a trait unique among extant vertebrates with bony skeletons. Population and conservation L. chalumnae is widely but very sparsely distributed around the rim of the western Indian Ocean, from South Africa northward along the East African coast, especially the Tanga Region of Tanzania to Kenya, the Comoros, and Madagascar, seemingly occurring in small colonies. In 1991, it was estimated that 2–5 coelacanths were accidentally caught each year off Grand Comoro, making up about 1% of its population. Between 1991 and 1994, the total coelacanth population was estimated to have declined by 30%. 
In 1998, the total population of the West Indian Ocean coelacanth was estimated to have been 500 or fewer, a number that would threaten the survival of the species. Near Grand Comoro, an island northwest of Madagascar, a maximum of 370 individuals reside. L. chalumnae is listed as critically endangered by the IUCN. In accordance with the Convention on International Trade in Endangered Species (CITES), the coelacanth was added to Appendix I (threatened with extinction) in 1989. The treaty forbids international trade for commercial purposes and regulates all trade, including sending specimens to museums, through a system of permits. Discovery First discovery in South Africa On December 22, 1938, Hendrik Goosen, the captain of the trawler Nerine, returned to the harbour at East London, South Africa, after a trawl between the Chalumna and Ncera Rivers. As he frequently did, he telephoned his friend, Marjorie Courtenay-Latimer, curator at the East London Museum, to see if she wanted to look over the contents of the catch for anything interesting, and told her of the strange fish he had set aside for her. Correspondence in the archives of the South African Institute for Aquatic Biodiversity (SAIAB, formerly the JLB Smith Institute of Ichthyology) shows that Goosen went to great lengths to avoid any damage to this fish and ordered his crew to set it aside for the East London Museum. Goosen later told how the fish was steely blue when first seen, but by the time the Nerine entered East London harbour many hours later it had become dark grey. Failing to find a description of the creature in any of her books, Courtenay-Latimer attempted to contact her friend, Professor J. L. B. Smith, but he was away for Christmas. Unable to preserve the fish, she reluctantly sent it to a taxidermist. When Smith returned, he immediately recognized it as a coelacanth, known to science only from fossils. Smith named the fish Latimeria chalumnae in honor of Marjorie Courtenay-Latimer and the waters in which it was found. The two discoverers received immediate recognition, and the fish became known as a "living fossil". The 1938 coelacanth is still on display in the East London Museum. However, as the specimen had been stuffed, the gills and skeleton were not available for examination, and some doubt therefore remained as to whether it was truly the same species. Smith began a hunt for a second specimen that would take more than a decade. The West Indian Ocean coelacanth was later found to be well known to the fishermen of the Grande Comore and Anjouan Islands, on whose underwater slopes it lives at depth. The second specimen, Malania anjouanae A second specimen with a missing dorsal fin and deformed tail fin was captured in 1952 off the coast of Anjouan (Comoros). At the time it was believed to be a new species and was placed in a new genus as well, Malania, named in honour of the Prime Minister of South Africa at the time, Daniel François Malan, without whose help the specimen would not have been preserved with its muscles and internal organs more or less intact. It has since been accepted as Latimeria chalumnae. Taxonomy The West Indian Ocean coelacanth (Latimeria chalumnae) is allocated to the genus Latimeria, which it shares with one other species, the Indonesian coelacanth (Latimeria menadoensis). Between September 1997 and July 1998, two coelacanths were discovered off the coast of Manado Tua Island, Sulawesi, Indonesia, distinct from the Latimeria chalumnae found near the Comoros. 
The Indonesian coelacanth is identifiable by its brownish grey color. Genetics The genome of Latimeria chalumnae was sequenced in 2013 to provide insight into tetrapod evolution. Coelacanths were long believed to be the closest relatives of the first land tetrapods because of their body characteristics; however, genetic sequencing showed that the lungfishes are in fact more closely related to land tetrapods. The full sequence and annotation are available on the Ensembl genome browser.
Biology and health sciences
Fishes: General
Animals
361574
https://en.wikipedia.org/wiki/Parasitic%20disease
Parasitic disease
A parasitic disease, also known as parasitosis, is an infectious disease caused by parasites. Parasites are organisms which derive sustenance from their hosts while causing them harm. The study of parasites and parasitic diseases is known as parasitology. Medical parasitology is concerned with three major groups of parasites: parasitic protozoa, helminths, and parasitic arthropods. Parasitic diseases are thus considered those diseases that are caused by pathogens belonging taxonomically to either the animal kingdom or the protozoan kingdom. Terminology Although organisms such as bacteria function as parasites, the usage of the term "parasitic disease" is usually more restricted. The three main types of organisms causing these conditions are protozoa (causing protozoan infection), helminths (helminthiasis), and ectoparasites. Protozoa and helminths are usually endoparasites (usually living inside the body of the host), while ectoparasites usually live on the surface of the host. Protozoa are single-celled, microscopic organisms that belong to the kingdom Protista. Helminths, on the other hand, are macroscopic, multicellular organisms that belong to the kingdom Animalia. Protozoans obtain their required nutrients through pinocytosis and phagocytosis. Helminths of the classes Cestoidea and Trematoda absorb nutrients, whereas nematodes obtain needed nourishment through ingestion. Occasionally the definition of "parasitic disease" is restricted to diseases due to endoparasites. Transmission Mammals can acquire parasites from contaminated food or water, bug bites, sexual contact, or contact with animals. Some ways in which people may acquire parasitic infections are walking barefoot, inadequate disposal of feces, lack of hygiene, close contact with someone carrying specific parasites, and eating undercooked foods, unwashed fruits and vegetables, or foods from contaminated regions. Treatment Parasitic infections can usually be treated with antiparasitic drugs. The use of viruses to treat infections caused by protozoa has been proposed.
Biology and health sciences
Concepts
Health
361609
https://en.wikipedia.org/wiki/Moduli%20space
Moduli space
In mathematics, in particular algebraic geometry, a moduli space is a geometric space (usually a scheme or an algebraic stack) whose points represent algebro-geometric objects of some fixed kind, or isomorphism classes of such objects. Such spaces frequently arise as solutions to classification problems: If one can show that a collection of interesting objects (e.g., the smooth algebraic curves of a fixed genus) can be given the structure of a geometric space, then one can parametrize such objects by introducing coordinates on the resulting space. In this context, the term "modulus" is used synonymously with "parameter"; moduli spaces were first understood as spaces of parameters rather than as spaces of objects. A variant of moduli spaces is formal moduli. Bernhard Riemann first used the term "moduli" in 1857. Motivation Moduli spaces are spaces of solutions of geometric classification problems. That is, the points of a moduli space correspond to solutions of geometric problems. Here different solutions are identified if they are isomorphic (that is, geometrically the same). Moduli spaces can be thought of as giving a universal space of parameters for the problem. For example, consider the problem of finding all circles in the Euclidean plane up to congruence. Any circle can be described uniquely by giving three points, but many different sets of three points give the same circle: the correspondence is many-to-one. However, circles are uniquely parameterized by giving their center and radius: this is two real parameters and one positive real parameter. Since we are only interested in circles "up to congruence", we identify circles having different centers but the same radius, and so the radius alone suffices to parameterize the set of interest. The moduli space is, therefore, the positive real numbers. Moduli spaces often carry natural geometric and topological structures as well. In the example of circles, for instance, the moduli space is not just an abstract set, but the absolute value of the difference of the radii defines a metric for determining when two circles are "close". The geometric structure of moduli spaces locally tells us when two solutions of a geometric classification problem are "close", but generally moduli spaces also have a complicated global structure as well. For example, consider how to describe the collection of lines in R2 that intersect the origin. We want to assign to each line L of this family a quantity that can uniquely identify it—a modulus. An example of such a quantity is the positive angle θ(L) with 0 ≤ θ < π radians. The set of lines L so parametrized is known as P1(R) and is called the real projective line. We can also describe the collection of lines in R2 that intersect the origin by means of a topological construction. To wit: consider the unit circle S1 ⊂ R2 and notice that every point s ∈ S1 gives a line L(s) in the collection (which joins the origin and s). However, this map is two-to-one, so we want to identify s ~ −s to yield P1(R) ≅ S1/~ where the topology on this space is the quotient topology induced by the quotient map S1 → P1(R). Thus, when we consider P1(R) as a moduli space of lines that intersect the origin in R2, we capture the ways in which the members (lines in this case) of the family can modulate by continuously varying 0 ≤ θ < π. Basic examples Projective space and Grassmannians The real projective space Pn is a moduli space that parametrizes the space of lines in Rn+1 which pass through the origin. 
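To make the quotient description concrete, the following display (a standard reformulation of the preceding construction, not additional material from the source) writes real projective space as a set of equivalence classes of nonzero vectors:

\[
  \mathbb{P}^n(\mathbb{R}) = \bigl(\mathbb{R}^{n+1}\setminus\{0\}\bigr)/\sim,
  \qquad v \sim w \iff v = \lambda w \text{ for some } \lambda \in \mathbb{R}\setminus\{0\}.
\]

For n = 1, restricting representatives to unit vectors recovers the description above: every line through the origin meets the unit circle S1 in an antipodal pair {s, −s}, so P1(R) ≅ S1/(s ~ −s).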
Similarly, complex projective space is the space of all complex lines in Cn+1 passing through the origin. More generally, the Grassmannian G(k, V) of a vector space V over a field F is the moduli space of all k-dimensional linear subspaces of V. Projective space as moduli of very ample line bundles generated by global sections Whenever there is an embedding of a scheme $X$ into the universal projective space $\mathbb{P}^n$, the embedding is given by a line bundle $\mathcal{L} \to X$ and sections $s_0, \ldots, s_n \in \Gamma(X, \mathcal{L})$ which do not all vanish at the same time. This means, given a point $x \in X$, there is an associated point $[s_0(x) : \cdots : s_n(x)] \in \mathbb{P}^n$, given by the compositions $x \mapsto (s_0(x), \ldots, s_n(x)) \mapsto [s_0(x) : \cdots : s_n(x)]$. Then two line bundles with sections are equivalent, $(\mathcal{L}, (s_0, \ldots, s_n)) \sim (\mathcal{L}', (s_0', \ldots, s_n'))$, iff there is an isomorphism $\phi : \mathcal{L} \to \mathcal{L}'$ such that $\phi(s_i) = s_i'$. This means the associated moduli functor sends a scheme $X$ to the set of such pairs $(\mathcal{L}, (s_0, \ldots, s_n))$ up to this equivalence. Showing this is true can be done by running through a series of tautologies: any projective embedding gives the globally generated sheaf $\mathcal{O}_X(1)$ with sections $x_0, \ldots, x_n$. Conversely, an ample line bundle globally generated by sections gives an embedding as above. Chow variety The Chow variety Chow(d,P3) is a projective algebraic variety which parametrizes degree d curves in P3. It is constructed as follows. Let C be a curve of degree d in P3, then consider all the lines in P3 that intersect the curve C. This is a degree d divisor DC in G(2, 4), the Grassmannian of lines in P3. When C varies, by associating C to DC, we obtain a parameter space of degree d curves as a subset of the space of degree d divisors of the Grassmannian: Chow(d,P3). Hilbert scheme The Hilbert scheme Hilb(X) is a moduli scheme. Every closed point of Hilb(X) corresponds to a closed subscheme of a fixed scheme X, and every closed subscheme is represented by such a point. A simple example of a Hilbert scheme is the Hilbert scheme parameterizing degree $d$ hypersurfaces of projective space $\mathbb{P}^n$. This is given by the projective bundle $\mathbb{P}(\Gamma(\mathbb{P}^n, \mathcal{O}(d)))$, with universal family $\{(x, [f]) : x \in X_f\} \subset \mathbb{P}^n \times \mathbb{P}(\Gamma(\mathbb{P}^n, \mathcal{O}(d)))$, where $X_f$ is the associated projective scheme for the degree $d$ homogeneous polynomial $f$. Definitions There are several related notions of things we could call moduli spaces. Each of these definitions formalizes a different notion of what it means for the points of a space M to represent geometric objects. Fine moduli spaces This is the standard concept. Heuristically, if we have a space M for which each point m ∊ M corresponds to an algebro-geometric object Um, then we can assemble these objects into a tautological family U over M. (For example, the Grassmannian G(k, V) carries a rank k bundle whose fiber at any point [L] ∊ G(k, V) is simply the linear subspace L ⊂ V.) M is called a base space of the family U. We say that such a family is universal if any family of algebro-geometric objects T over any base space B is the pullback of U along a unique map B → M. A fine moduli space is a space M which is the base of a universal family. More precisely, suppose that we have a functor F from schemes to sets, which assigns to a scheme B the set of all suitable families of objects with base B. A space M is a fine moduli space for the functor F if M represents F, i.e., there is a natural isomorphism τ : F → Hom(−, M), where Hom(−, M) is the functor of points. This implies that M carries a universal family; this family is the family on M corresponding to the identity map 1M ∊ Hom(M, M). Coarse moduli spaces Fine moduli spaces are desirable, but they do not always exist and are frequently difficult to construct, so mathematicians sometimes use a weaker notion, the idea of a coarse moduli space. 
A space M is a coarse moduli space for the functor F if there exists a natural transformation τ : F → Hom(−, M) and τ is universal among such natural transformations. More concretely, M is a coarse moduli space for F if any family T over a base B gives rise to a map φT : B → M and any two objects V and W (regarded as families over a point) correspond to the same point of M if and only if V and W are isomorphic. Thus, M is a space which has a point for every object that could appear in a family, and whose geometry reflects the ways objects can vary in families. Note, however, that a coarse moduli space does not necessarily carry any family of appropriate objects, let alone a universal one. In other words, a fine moduli space includes both a base space M and a universal family U → M, while a coarse moduli space only has the base space M. Moduli stacks It is frequently the case that interesting geometric objects come equipped with many natural automorphisms. This in particular makes the existence of a fine moduli space impossible (intuitively, the idea is that if L is some geometric object, the trivial family L × [0,1] can be made into a twisted family on the circle S1 by identifying L × {0} with L × {1} via a nontrivial automorphism; if a fine moduli space X existed, the map S1 → X should not be constant, but would have to be constant on any proper open set by triviality). Even when a fine moduli space does not exist, one can still sometimes obtain a coarse moduli space. However, this approach is not ideal, as such spaces are not guaranteed to exist, they are frequently singular when they do exist, and they miss details about some non-trivial families of objects they classify. A more sophisticated approach is to enrich the classification by remembering the isomorphisms. More precisely, on any base B one can consider the category of families on B with only isomorphisms between families taken as morphisms. One then considers the fibred category which assigns to any space B the groupoid of families over B. The use of these categories fibred in groupoids to describe a moduli problem goes back to Grothendieck (1960/61). In general, they cannot be represented by schemes or even algebraic spaces, but in many cases, they have a natural structure of an algebraic stack. Algebraic stacks and their use to analyze moduli problems appeared in Deligne–Mumford (1969) as a tool to prove the irreducibility of the (coarse) moduli space of curves of a given genus. The language of algebraic stacks essentially provides a systematic way to view the fibred category that constitutes the moduli problem as a "space", and the moduli stack of many moduli problems is better-behaved (such as smooth) than the corresponding coarse moduli space. Further examples Moduli of curves The moduli stack $\mathcal{M}_g$ classifies families of smooth projective curves of genus g, together with their isomorphisms. When g > 1, this stack may be compactified by adding new "boundary" points which correspond to stable nodal curves (together with their isomorphisms). A curve is stable if it has only a finite group of automorphisms. The resulting stack is denoted $\overline{\mathcal{M}}_g$. Both moduli stacks carry universal families of curves. One can also define coarse moduli spaces representing isomorphism classes of smooth or stable curves. These coarse moduli spaces were actually studied before the notion of moduli stack was invented. In fact, the idea of a moduli stack was invented by Deligne and Mumford in an attempt to prove the projectivity of the coarse moduli spaces. 
In recent years, it has become apparent that the stack of curves is actually the more fundamental object. Both stacks above have dimension 3g−3; hence a stable nodal curve can be completely specified by choosing the values of 3g−3 parameters, when g > 1. In lower genus, one must account for the presence of smooth families of automorphisms, by subtracting their number. There is exactly one complex curve of genus zero, the Riemann sphere, and its group of isomorphisms is PGL(2). Hence, the dimension of $\mathcal{M}_0$ is dim(space of genus zero curves) − dim(group of automorphisms) = 0 − dim(PGL(2)) = −3. Likewise, in genus 1, there is a one-dimensional space of curves, but every such curve has a one-dimensional group of automorphisms. Hence, the stack $\mathcal{M}_1$ has dimension 0. The coarse moduli spaces have the same dimension 3g−3 as the stacks when g > 1, because curves of genus g > 1 have only a finite group of automorphisms, i.e. dim(group of automorphisms) = 0. Finally, in genus zero, the coarse moduli space has dimension zero, and in genus one, it has dimension one. One can also enrich the problem by considering the moduli stack of genus g nodal curves with n marked points. Such marked curves are said to be stable if the subgroup of curve automorphisms which fix the marked points is finite. The resulting moduli stacks of smooth (or stable) genus g curves with n marked points are denoted $\mathcal{M}_{g,n}$ (or $\overline{\mathcal{M}}_{g,n}$), and have dimension 3g − 3 + n. A case of particular interest is the moduli stack $\mathcal{M}_{1,1}$ of genus 1 curves with one marked point. This is the stack of elliptic curves, and is the natural home of the much-studied modular forms, which are meromorphic sections of bundles on this stack. Moduli of varieties In higher dimensions, moduli of algebraic varieties are more difficult to construct and study. For instance, the higher-dimensional analogue of the moduli space of elliptic curves discussed above is the moduli space of abelian varieties, such as the Siegel modular variety. This is the problem underlying Siegel modular form theory.
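As a worked check of the dimension counts discussed above (a routine computation spelled out here for convenience, not additional material from the source):

\[
  \dim \mathcal{M}_0 = 0 - \dim \mathrm{PGL}(2) = -3, \qquad
  \dim \mathcal{M}_1 = 1 - 1 = 0, \qquad
  \dim \mathcal{M}_{g,n} = 3g - 3 + n,
\]
\[
  \text{e.g.}\quad \dim \mathcal{M}_{0,3} = -3 + 3 = 0,
  \qquad \dim \mathcal{M}_{1,1} = 0 + 1 = 1.
\]

Each marked point adds one parameter; three marked points exactly rigidify a genus-zero curve, since PGL(2) acts simply transitively on triples of distinct points of the Riemann sphere.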
Mathematics
Other algebra topics
null
361814
https://en.wikipedia.org/wiki/Sauropsida
Sauropsida
Sauropsida (Greek for "lizard faces") is a clade of amniotes, broadly equivalent to the class Reptilia, though typically used in a broader sense to also include extinct stem-group relatives of modern reptiles and birds (which, as theropod dinosaurs, are nested within reptiles as more closely related to crocodilians than to lizards or turtles). The most popular definition states that Sauropsida is the sister taxon to Synapsida, the other clade of amniotes, which includes mammals as its only modern representatives. Although early synapsids have historically been referred to as "mammal-like reptiles", all synapsids are more closely related to mammals than to any modern reptile. Sauropsids, on the other hand, include all amniotes more closely related to modern reptiles than to mammals. This includes Aves (birds), which are recognized as a subgroup of archosaurian reptiles despite originally being named as a separate class in Linnaean taxonomy. The base of Sauropsida forks into two main groups of "reptiles": Eureptilia ("true reptiles") and Parareptilia ("next to reptiles"). Eureptilia encompasses all living reptiles (including birds), as well as various extinct groups. Parareptilia is typically considered to be an entirely extinct group, though a few hypotheses for the origin of turtles have suggested that they belong to the parareptiles. The clades Recumbirostra and Varanopidae, traditionally thought to be lepospondyls and synapsids respectively, may also be basal sauropsids. The term "Sauropsida" originated in 1864 with Thomas Henry Huxley, who grouped birds with reptiles based on fossil evidence. History of classification Huxley and the fossil gaps The term Sauropsida ("lizard faces") has a long history, and hails back to Thomas Henry Huxley and his opinion that birds had arisen from the dinosaurs. He based this chiefly on the fossils of Hesperornis and Archaeopteryx, which were starting to become known at the time. In the Hunterian lectures delivered at the Royal College of Surgeons in 1863, Huxley grouped the vertebrate classes informally into mammals, sauroids, and ichthyoids (the latter containing the anamniotes), based on the gaps in physiological traits and the lack of transitional fossils that seemed to exist between the three groups. Early in the following year he proposed the names Sauropsida and Ichthyopsida for the two latter. Huxley did, however, include groups on the mammalian line (synapsids), like Dicynodon, among the sauropsids. Thus, under the original definition, Sauropsida contained not only the groups usually associated with it today, but also several groups that today are known to be on the mammalian side of the tree. Sauropsids redefined (Goodrich, 1916) By the early 20th century, the fossils of Permian synapsids from South Africa had become well known, allowing palaeontologists to trace synapsid evolution in much greater detail. The term Sauropsida was taken up by E. S. Goodrich in 1916, in much the same sense as Huxley's, to include lizards, birds and their relatives. He distinguished them from mammals and their extinct relatives, which he included in the sister group Theropsida (now usually replaced with the name Synapsida). Goodrich's classification thus differs somewhat from Huxley's, in which the non-mammalian synapsids (or at least the dicynodontians) fell under the sauropsids. Goodrich supported this division by the nature of the hearts and blood vessels in each group, and other features such as the structure of the forebrain. 
According to Goodrich, both lineages evolved from an earlier stem group, the Protosauria ("first lizards"), which included some Paleozoic amphibians as well as early reptiles predating the sauropsid/synapsid split (and thus not true sauropsids). His concept differed from modern classifications in that he considered a modified fifth metatarsal to be an apomorphy of the group, leading him to place Sauropterygia, Mesosauria and possibly Ichthyosauria and Araeoscelida in the Theropsida. Detailing the reptile family tree In 1956, D. M. S. Watson observed that sauropsids and synapsids diverged very early in reptilian evolutionary history, and so he divided Goodrich's Protosauria between the two groups. He also reinterpreted the Sauropsida and Theropsida to exclude birds and mammals respectively, making them paraphyletic, unlike Goodrich's definition. Thus his Sauropsida included Procolophonia, Eosuchia, Protorosauria, Millerosauria, Chelonia (turtles), Squamata (lizards and snakes), Rhynchocephalia, Rhynchosauria, Choristodera, Thalattosauria, Crocodilia, "thecodonts" (paraphyletic basal Archosauria), non-avian dinosaurs, pterosaurs and sauropterygians. However, his concept differed from the modern one in that reptiles without an otic notch, such as araeoscelids and captorhinids, were believed to be theropsids. This classification supplemented, but was never as popular as, the classification of the reptiles (according to Romer's classic Vertebrate Paleontology) into four subclasses according to the positioning of temporal fenestrae, openings in the sides of the skull behind the eyes. Since the advent of phylogenetic nomenclature, the term Reptilia has fallen out of favor with many taxonomists, who have used Sauropsida in its place to include a monophyletic group containing the traditional reptiles and the birds. Cladistic definitions The class Reptilia has been known to be an evolutionary grade rather than a clade for as long as evolution has been recognised. Reclassifying reptiles has been among the key aims of phylogenetic nomenclature. From the mid-20th century the term Sauropsida was used to denote a branch-based clade containing all amniote species which are not on the synapsid side of the split between reptiles and mammals. This group encompasses all now-living reptiles as well as birds, and as such is comparable to Goodrich's classification. The main difference is that better resolution of the early amniote tree has split up most of Goodrich's "Protosauria", though definitions of Sauropsida essentially identical to Huxley's (i.e. including the mammal-like reptiles) have also been put forward. Some later cladistic work has used Sauropsida more restrictively, to signify the crown group, i.e. all descendants of the last common ancestor of extant reptiles and birds. A number of phylogenetic stem, node and crown definitions have been published, anchored in a variety of fossil and extant organisms, so there is currently no consensus on the actual definition (and thus content) of Sauropsida as a phylogenetic unit. Some taxonomists, such as Benton (2004), have co-opted the term to fit into traditional rank-based classifications, making Sauropsida and Synapsida class-level taxa to replace the traditional class Reptilia, while Modesto and Anderson (2004), using the PhyloCode standard, have suggested replacing the name Sauropsida with their redefinition of Reptilia, arguing that the latter is by far better known and should have priority. 
Cladistic definitions of Sauropsida include: Sauropsida as the total group of reptiles: "Reptiles plus all other amniotes more closely related to them than they are to mammals" (Gauthier, 1994). This is a branch-based total group definition. Gauthier (1994) considered turtles to be descended from parareptiles, thus defining Reptilia as a more restricted crown group encompassing diapsids and parareptiles (apart from mesosaurs, which he considered to be the most basal branch of sauropsids). Sauropsida as a total group, synonymous with Reptilia sensu lato: "The most inclusive clade containing Lacerta agilis and Crocodylus niloticus, but not Homo sapiens" (Modesto & Anderson, 2004). This total group definition leaves the question of turtle ancestry unresolved. Sauropsida as a broad node-based group: "The last common ancestor of mesosaurs, testudines and diapsids, and all its descendants" (Laurin & Reisz, 1995). Though formulated differently, this grouping was similar in scope and intention to the definition provided by Gauthier (1994). Evolutionary history Sauropsids evolved from basal amniotes approximately 320 million years ago, in the Carboniferous Period of the Paleozoic Era. In the Mesozoic Era (from about 250 million years ago to about 66 million years ago), sauropsids were the largest animals on land, in the water, and in the air. The Mesozoic is sometimes called the Age of Reptiles. In the Cretaceous–Paleogene extinction event at the end of the Mesozoic Era, the large-bodied sauropsids died out. With the exception of a few species of birds, the entire dinosaur lineage became extinct; in the following era, the Cenozoic, the remaining birds diversified so extensively that, today, nearly one out of every three species of land vertebrate is a bird species. Phylogeny The cladogram presented here illustrates the "family tree" of sauropsids, and follows a simplified version of the relationships found by M. S. Lee in 2013. All genetic studies have supported the hypothesis that turtles (formerly categorized together with ancient anapsids) are diapsid reptiles, despite lacking any skull openings behind their eye sockets; some studies have even placed turtles among the archosaurs, though a few have recovered turtles as lepidosauromorphs instead. The cladogram below used a combination of genetic (molecular) and fossil (morphological) data to obtain its results. Laurin & Piñeiro (2017) and Modesto (2019) proposed an alternative phylogeny of basal sauropsids. In this tree, parareptiles include turtles and are closely related to non-araeoscelidian diapsids. The family Varanopidae, otherwise included in Synapsida, is considered by Modesto a sauropsid group. In recent studies, the "microsaur" clade Recumbirostra, historically considered lepospondyl reptiliomorphs, has been recovered as a clade of early sauropsids. A 2024 study defines Captorhinidae and Araeoscelidia as sister groups that split off before the formation of crown Amniota (synapsids and sauropsids). The same study also considers parareptiles to be polyphyletic, with some groups being closer to the crown group of reptiles than others. Structural differences from synapsids The last common ancestor of synapsids and sauropsids lived around 320 million years ago, during the Carboniferous, and belonged to the grade known as Reptiliomorpha. Thermoregulation and secretion The early synapsids inherited abundant glands on their skin from their amphibian ancestors. 
Those glands evolved into the sweat glands of synapsids, which granted them the ability to maintain a constant body temperature but made them unable to prevent water loss through evaporation. Moreover, synapsids discharge nitrogenous waste as urea, which is toxic and must be dissolved in water to be excreted. Unfortunately, the following Permian and Triassic periods were arid. As a result, only a small percentage of early synapsids survived, in the land stretching from South Africa to Antarctica in today's geography. Unlike synapsids, sauropsids do not have these glands on the skin, and they excrete nitrogenous waste as uric acid, which does not require water and can be excreted with the feces. As a result, sauropsids were able to expand into all environments and reach their pinnacle. Even today, most vertebrates that live in arid environments are sauropsids, for example snakes and desert lizards. Brain structure Whereas synapsids have a cortex organized into six layers of neurons, called the neocortex, the cerebrum of sauropsids has a completely different structure. In the classic view of the corresponding cerebral structures, the neocortex of synapsids is homologous only with the archicortex of the avian brain. However, in the modern view that has emerged since the 1960s, behavioral studies suggest that the avian neostriatum and hyperstriatum can receive signals of vision, hearing, and body sensation, which means they act just like the neocortex. Comparing the avian brain to that of a mammal, the nuclear-to-layered hypothesis proposed by Karten (1969) suggests that the cells which form layers in the synapsid neocortex instead gather by type and form several nuclei in the sauropsid brain. In synapsids, when a new function is acquired in evolution, it is assigned to a separate area of cortex, so for each function a separate cortical area must develop, and damage to that specific area may cause disability. In sauropsids, however, functions are distributed across the nuclei. Brain function is therefore highly flexible in sauropsids: even with a small brain, many sauropsids can have relatively high intelligence compared to mammals, for example birds of the family Corvidae. It is thus possible that some non-avian dinosaurs, like Tyrannosaurus, which had tiny brains compared to their enormous body size, were more intelligent than previously thought.
Biology and health sciences
Reptiles: General
Animals
361897
https://en.wikipedia.org/wiki/Astrophysics
Astrophysics
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space—what they are, rather than where they are", which is studied in celestial mechanics. Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium, and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. In practice, modern astronomical research often involves substantial work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, and quantum and physical cosmology (the physical study of the largest-scale structures of the universe), including string cosmology and astroparticle physics. History Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle. During the 17th century, natural philosophers such as Galileo, Descartes, and Newton began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws. Their challenge was that the tools had not yet been invented with which to prove these assertions. For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum. By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements. 
Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth. Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected radiant as well as dark lines in solar spectra. Working with the chemist Edward Frankland to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known element. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified. In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of women computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme, which was accepted for worldwide use in 1922. In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States, established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics. It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories. Around 1920, following the discovery of the Hertzsprung–Russell diagram, still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc2. This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even the fact that stars are largely composed of hydrogen (see metallicity), had not yet been discovered. In 1925, Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied Saha's ionization theory to stellar atmospheres to relate the spectral classes to the temperatures of stars. Most significantly, she discovered that hydrogen and helium were the principal components of stars, rather than the Earth-like composition that had been assumed. Despite Eddington's suggestion, the discovery was so unexpected that her dissertation readers (including Russell) convinced her to modify the conclusion before publication. However, later research confirmed her discovery. By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, X-ray, and gamma wavelengths. 
In the 21st century, it further expanded to include observations based on gravitational waves. Observational astrophysics Observational astronomy is a division of astronomical science concerned with recording and interpreting data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus. Most astrophysical observations are made using the electromagnetic spectrum. Radio astronomy studies radiation with a wavelength greater than a few millimeters. Example areas of study are radio waves, usually emitted by cold objects such as interstellar gas and dust clouds; the cosmic microwave background radiation, which is the redshifted light from the Big Bang; and pulsars, which were first detected at microwave frequencies. The study of these waves requires very large radio telescopes. Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but is shorter than radio waves. Infrared observations are usually made with telescopes similar to the familiar optical telescopes. Objects colder than stars (such as planets) are normally studied at infrared frequencies. Optical astronomy was the earliest kind of astronomy. Telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae. Ultraviolet, X-ray and gamma-ray astronomy study very energetic processes such as binary pulsars, black holes, magnetars, and many others. These kinds of radiation do not penetrate the Earth's atmosphere well. There are two methods in use to observe this part of the electromagnetic spectrum—space-based telescopes and ground-based imaging air Cherenkov telescopes (IACTs). Examples of observatories of the first type are RXTE, the Chandra X-ray Observatory and the Compton Gamma Ray Observatory. Examples of IACTs are the High Energy Stereoscopic System (H.E.S.S.) and the MAGIC telescope. Other than electromagnetic radiation, few things that originate from great distances may be observed from the Earth. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study the Sun. Cosmic rays, consisting of very high-energy particles, can be observed hitting the Earth's atmosphere. Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects are available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different. The study of the Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Understanding the Sun serves as a guide to understanding other stars. 
The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object from birth to destruction. Theoretical astrophysics Theoretical astrophysicists use a wide variety of tools, which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen. Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This allows observers to look for data that can refute a model or help in choosing between several alternate or conflicting models. Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. Topics studied by theoretical astrophysicists include stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Relativistic astrophysics serves as a tool to gauge the properties of large-scale structures for which gravitation plays a significant role in the physical phenomena investigated, and as the basis for black hole (astro)physics and the study of gravitational waves. Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and the fundamental theories of physics. Popularization The roots of astrophysics can be found in the seventeenth-century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms. There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as the prominent professors Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Patrick Moore. The efforts of earlier and present-day scientists continue to attract young people to study the history and science of astrophysics. The television sitcom The Big Bang Theory popularized the field of astrophysics with the general public, and featured some well-known scientists like Stephen Hawking and Neil deGrasse Tyson.
Physical sciences
Basics_2
null
361924
https://en.wikipedia.org/wiki/Order%20theory
Order theory
Order theory is a branch of mathematics that investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions. A list of order-theoretic terms can be found in the order theory glossary. Background and motivation Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers e.g. "2 is less than 3", "10 is greater than 5", or "Does Tom have fewer cookies than Sally?". This intuitive concept can be extended to orders on other sets of numbers, such as the integers and the reals. The idea of being greater than or less than another number is one of the basic intuitions of number systems (compare with numeral systems) in general (although one usually is also interested in the actual difference of two numbers, which is not given by the order). Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. In other contexts orders may capture notions of containment or specialization. Abstractly, this type of order amounts to the subset relation, e.g., "Pediatricians are physicians," and "Circles are merely special-case ellipses." Some orders, like "less-than" on the natural numbers and alphabetical order on words, have a special property: each element can be compared to any other element, i.e. it is smaller (earlier) than, larger (later) than, or identical to it. However, many other orders do not. Consider for example the subset order on a collection of sets: though the set of birds and the set of dogs are both subsets of the set of animals, neither the birds nor the dogs constitutes a subset of the other. Those orders like the "subset-of" relation for which there exist incomparable elements are called partial orders; orders for which every pair of elements is comparable are total orders. Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes much sense, because one can derive numerous theorems in the general setting, without focusing on the details of any particular order. These insights can then be readily transferred to many less abstract applications. Driven by the wide practical usage of orders, numerous special kinds of ordered sets have been defined, some of which have grown into mathematical fields of their own. In addition, order theory does not restrict itself to the various classes of ordering relations, but also considers appropriate functions between them. A simple example of an order theoretic property for functions comes from analysis where monotone functions are frequently found. Basic definitions This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations. Partially ordered sets Orders are special binary relations. Suppose that P is a set and that ≤ is a relation on P ('relation on a set' is taken to mean 'relation amongst its inhabitants', i.e. ≤ is a subset of the cartesian product P × P). 
Then ≤ is a partial order if it is reflexive, antisymmetric, and transitive, that is, if for all a, b and c in P, we have that: a ≤ a (reflexivity) if a ≤ b and b ≤ a then a = b (antisymmetry) if a ≤ b and b ≤ c then a ≤ c (transitivity). A set with a partial order on it is called a partially ordered set, poset, or just ordered set if the intended meaning is clear. By checking these properties, one immediately sees that the well-known orders on natural numbers, integers, rational numbers and reals are all orders in the above sense. However, these examples have the additional property that any two elements are comparable, that is, for all a and b in P, we have that: a ≤ b or b ≤ a. A partial order with this property is called a total order. These orders can also be called linear orders or chains. While many familiar orders are linear, the subset order on sets provides an example where this is not the case. Another example is given by the divisibility (or "is-a-factor-of") relation |. For two natural numbers n and m, we write n|m if n divides m without remainder. One easily sees that this yields a partial order. For example, 3 does not divide 13, nor does 13 divide 3, so 3 and 13 are not comparable elements under the divisibility relation on the set of natural numbers. The identity relation = on any set is also a partial order in which every two distinct elements are incomparable. It is also the only relation that is both a partial order and an equivalence relation because it satisfies both the antisymmetry property of partial orders and the symmetry property of equivalence relations. Many advanced properties of posets are interesting mainly for non-linear orders. Visualizing a poset Hasse diagrams can visually represent the elements and relations of a partial ordering. These are graph drawings where the vertices are the elements of the poset and the ordering relation is indicated by both the edges and the relative positioning of the vertices. Orders are drawn bottom-up: if an element x is smaller than (precedes) y then there exists a path from x to y that is directed upwards. It is often necessary for the edges connecting elements to cross each other, but elements must never be located within an edge. An instructive exercise is to draw the Hasse diagram for the set of natural numbers that are smaller than or equal to 13, ordered by | (the divides relation). Even some infinite sets can be diagrammed by superimposing an ellipsis (...) on a finite sub-order. This works well for the natural numbers, but it fails for the reals, where there is no immediate successor above 0; however, quite often one can obtain an intuition related to diagrams of a similar kind. Special elements within an order In a partially ordered set there may be some elements that play a special role. The most basic example is given by the least element of a poset. For example, 1 is the least element of the positive integers and the empty set is the least set under the subset order. Formally, an element m is a least element if: m ≤ a, for all elements a of the order. The notation 0 is frequently found for the least element, even when no numbers are concerned. However, in orders on sets of numbers, this notation might be inappropriate or ambiguous, since the number 0 is not always least. An example is given by the above divisibility order |, where 1 is the least element since it divides all other numbers. In contrast, 0 is the number that is divided by all other numbers. Hence it is the greatest element of the order. 
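The divisibility example lends itself to direct computation. The following sketch (illustrative code, not part of the original article) builds the order | on {1, ..., 13}, lists the covering relation one would draw as edges of the Hasse diagram mentioned above, and finds the least element; it also computes the minimal and maximal elements discussed in the next paragraph:

elements = range(1, 14)

def divides(n, m):
    return m % n == 0

# Covering relation: n is covered by m if n | m strictly and no third
# element k of the set fits strictly between them (n | k and k | m).
covers = [(n, m) for n in elements for m in elements
          if n != m and divides(n, m)
          and not any(divides(n, k) and divides(k, m)
                      for k in elements if k not in (n, m))]

least = [m for m in elements if all(divides(m, a) for a in elements)]
minimal = [m for m in elements
           if all(not divides(a, m) for a in elements if a != m)]
maximal = [m for m in elements
           if all(not divides(m, a) for a in elements if a != m)]

print(covers)   # Hasse-diagram edges, e.g. (1, 2), (2, 4), (3, 9), ...
print(least)    # [1] -- 1 divides every element, so it is the least element
print(minimal)  # [1] -- a least element is the unique minimal element
print(maximal)  # [7, 8, 9, 10, 11, 12, 13] -- no proper multiples in range

Running it prints [1] for both the least and the minimal elements, matching the observation that 1 divides all other numbers, and lists 7 through 13 as maximal, since none of their proper multiples lie in the set.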
Other frequent terms for the least and greatest elements are bottom and top, or zero and unit. Least and greatest elements may fail to exist, as the example of the real numbers shows. But if they exist, they are always unique. In contrast, consider the divisibility relation | on the set {2,3,4,5,6}. Although this set has neither top nor bottom, the elements 2, 3, and 5 have no elements below them, while 4, 5 and 6 have none above. Such elements are called minimal and maximal, respectively. Formally, an element m is minimal if: a ≤ m implies a = m, for all elements a of the order. Exchanging ≤ with ≥ yields the definition of maximality. As the example shows, there can be many maximal elements and some elements may be both maximal and minimal (e.g. 5 above). However, if there is a least element, then it is the only minimal element of the order. Again, in infinite posets maximal elements do not always exist - the set of all finite subsets of a given infinite set, ordered by subset inclusion, provides one of many counterexamples. An important tool to ensure the existence of maximal elements under certain conditions is Zorn's Lemma. Subsets of partially ordered sets inherit the order. We already applied this by considering the subset {2,3,4,5,6} of the natural numbers with the induced divisibility ordering. Now there are also elements of a poset that are special with respect to some subset of the order. This leads to the definition of upper bounds. Given a subset S of some poset P, an upper bound of S is an element b of P that is above all elements of S. Formally, this means that s ≤ b, for all s in S. Lower bounds again are defined by inverting the order. For example, -5 is a lower bound of the natural numbers as a subset of the integers. Given a set of sets, an upper bound for these sets under the subset ordering is given by their union. In fact, this upper bound is quite special: it is the smallest set that contains all of the sets. Hence, we have found the least upper bound of a set of sets. This concept is also called supremum or join, and for a set S one writes sup(S) or ⋁S for its least upper bound. Conversely, the greatest lower bound is known as infimum or meet and denoted inf(S) or ⋀S. These concepts play an important role in many applications of order theory. For two elements x and y, one also writes x ∨ y and x ∧ y for sup({x,y}) and inf({x,y}), respectively. For example, 1 is the infimum of the positive integers as a subset of integers. For another example, consider again the relation | on natural numbers. The least upper bound of two numbers is the smallest number that is divided by both of them, i.e. the least common multiple of the numbers. Greatest lower bounds in turn are given by the greatest common divisor. Duality In the previous definitions, we often noted that a concept can be defined by just inverting the ordering in a former definition. This is the case for "least" and "greatest", for "minimal" and "maximal", for "upper bound" and "lower bound", and so on. This is a general situation in order theory: A given order can be inverted by just exchanging its direction, pictorially flipping the Hasse diagram top-down. This yields the so-called dual, inverse, or opposite order. Every order theoretic definition has its dual: it is the notion one obtains by applying the definition to the inverse order. Since all concepts are symmetric, this operation preserves the theorems of partial orders. 
For a given mathematical result, one can just invert the order and replace all definitions by their duals and one obtains another valid theorem. This is important and useful, since one obtains two theorems for the price of one. Some more details and examples can be found in the article on duality in order theory. Constructing new orders There are many ways to construct orders out of given orders. The dual order is one example. Another important construction is the cartesian product of two partially ordered sets, taken together with the product order on pairs of elements. The ordering is defined by (a, x) ≤ (b, y) if (and only if) a ≤ b and x ≤ y. (Notice carefully that there are three distinct meanings for the relation symbol ≤ in this definition.) The disjoint union of two posets is another typical example of order construction, where the order is just the (disjoint) union of the original orders. Every partial order ≤ gives rise to a so-called strict order <, by defining a < b if a ≤ b and not b ≤ a. This transformation can be inverted by setting a ≤ b if a < b or a = b. The two concepts are equivalent although in some circumstances one can be more convenient to work with than the other. Functions between orders It is reasonable to consider functions between partially ordered sets having certain additional properties that are related to the ordering relations of the two sets. The most fundamental condition that occurs in this context is monotonicity. A function f from a poset P to a poset Q is monotone, or order-preserving, if a ≤ b in P implies f(a) ≤ f(b) in Q (note that, strictly speaking, the two relations here are different since they apply to different sets). The converse of this implication leads to functions that are order-reflecting, i.e. functions f as above for which f(a) ≤ f(b) implies a ≤ b. On the other hand, a function may also be order-reversing or antitone, if a ≤ b implies f(a) ≥ f(b). An order-embedding is a function f between orders that is both order-preserving and order-reflecting. Examples of these definitions are easily found. For instance, the function that maps a natural number to its successor is clearly monotone with respect to the natural order. Any function from a discrete order, i.e. from a set ordered by the identity order "=", is also monotone. Mapping each natural number to the corresponding real number gives an example for an order embedding. The set complement on a powerset is an example of an antitone function. An important question is when two orders are "essentially equal", i.e. when they are the same up to renaming of elements. Order isomorphisms are functions that define such a renaming. An order-isomorphism is a monotone bijective function that has a monotone inverse. This is equivalent to being a surjective order-embedding. Hence, the image f(P) of an order-embedding is always isomorphic to P, which justifies the term "embedding". A more elaborate type of functions is given by so-called Galois connections. Monotone Galois connections can be viewed as a generalization of order-isomorphisms, since they consist of a pair of functions in converse directions, which are "not quite" inverse to each other, but that still have close relationships. Another special type of self-maps on a poset are closure operators, which are not only monotonic, but also idempotent, i.e. f(x) = f(f(x)), and extensive (or inflationary), i.e. x ≤ f(x). These have many applications in all kinds of "closures" that appear in mathematics.
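As a concrete instance of the last definition, rounding a natural number up to the next power of two is a closure operator on the naturals with their usual order. A small illustrative Python check (the function and the test range are assumptions chosen for the example):

```python
def up2(x):
    """Smallest power of two that is >= x (for x >= 1)."""
    p = 1
    while p < x:
        p *= 2
    return p

xs = range(1, 129)
assert all(up2(x) <= up2(y) for x in xs for y in xs if x <= y)  # monotone
assert all(up2(up2(x)) == up2(x) for x in xs)                   # idempotent
assert all(x <= up2(x) for x in xs)                             # extensive
```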
Besides being compatible with the mere order relations, functions between posets may also behave well with respect to special elements and constructions. For example, when talking about posets with least element, it may seem reasonable to consider only monotonic functions that preserve this element, i.e. which map least elements to least elements. If binary infima ∧ exist, then a reasonable property might be to require that f(x ∧ y) = f(x) ∧ f(y), for all x and y. All of these properties, and indeed many more, may be compiled under the label of limit-preserving functions. Finally, one can invert the view, switching from functions of orders to orders of functions. Indeed, the functions between two posets P and Q can be ordered via the pointwise order. For two functions f and g, we have f ≤ g if f(x) ≤ g(x) for all elements x of P. This occurs for example in domain theory, where function spaces play an important role. Special types of orders Many of the structures that are studied in order theory employ order relations with further properties. In fact, even some relations that are not partial orders are of special interest. Mainly the concept of a preorder has to be mentioned. A preorder is a relation that is reflexive and transitive, but not necessarily antisymmetric. Each preorder induces an equivalence relation between elements, where a is equivalent to b, if a ≤ b and b ≤ a. Preorders can be turned into orders by identifying all elements that are equivalent with respect to this relation. Several types of orders can be defined from numerical data on the items of the order: a total order results from attaching distinct real numbers to each item and using the numerical comparisons to order the items; instead, if distinct items are allowed to have equal numerical scores, one obtains a strict weak ordering. Requiring two scores to be separated by a fixed threshold before they may be compared leads to the concept of a semiorder, while allowing the threshold to vary on a per-item basis produces an interval order. An additional simple but useful property leads to so-called well-founded orders, for which all non-empty subsets have a minimal element. Generalizing well-orders from linear to partial orders, a set is well partially ordered if all its non-empty subsets have a finite number of minimal elements. Many other types of orders arise when the existence of infima and suprema of certain sets is guaranteed. Focusing on this aspect, usually referred to as completeness of orders, one obtains: Bounded posets, i.e. posets with a least and greatest element (which are just the supremum and infimum of the empty subset), Lattices, in which every non-empty finite set has a supremum and infimum, Complete lattices, where every set has a supremum and infimum, and Directed complete partial orders (dcpos), that guarantee the existence of suprema of all directed subsets and that are studied in domain theory. Partial orders with complements, or poc sets, are posets with a unique bottom element 0, as well as an order-reversing involution ∗ such that a ≤ a∗ implies a = 0. However, one can go even further: if all finite non-empty infima exist, then ∧ can be viewed as a total binary operation in the sense of universal algebra. Hence, in a lattice, two operations ∧ and ∨ are available, and one can define new properties by giving identities, such as x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z), for all x, y, and z. This condition is called distributivity and gives rise to distributive lattices.
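The divisibility order on the natural numbers is a standard example of a distributive lattice, with meet given by the greatest common divisor and join by the least common multiple. The identity above can be checked by brute force over a small illustrative range:

```python
from math import gcd
from itertools import product

def lcm(x, y):
    return x * y // gcd(x, y)

# Check x ∧ (y ∨ z) == (x ∧ y) ∨ (x ∧ z) with ∧ = gcd and ∨ = lcm.
for x, y, z in product(range(1, 25), repeat=3):
    assert gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))
```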
There are some other important distributivity laws which are discussed in the article on distributivity in order theory. Some additional order structures that are often specified via algebraic operations and defining identities are Heyting algebras and Boolean algebras, which both introduce a new operation ~ called negation. Both structures play a role in mathematical logic and especially Boolean algebras have major applications in computer science. Finally, various structures in mathematics combine orders with even more algebraic operations, as in the case of quantales, which allow for the definition of an addition operation. Many other important properties of posets exist. For example, a poset is locally finite if every closed interval [a, b] in it is finite. Locally finite posets give rise to incidence algebras which in turn can be used to define the Euler characteristic of finite bounded posets. Subsets of ordered sets In an ordered set, one can define many types of special subsets based on the given order. A simple example is given by upper sets, i.e. sets that contain all elements that are above them in the order. Formally, the upper closure of a set S in a poset P is given by the set {x in P | there is some y in S with y ≤ x}. A set that is equal to its upper closure is called an upper set. Lower sets are defined dually. More complicated lower subsets are ideals, which have the additional property that each two of their elements have an upper bound within the ideal. Their duals are given by filters. A related concept is that of a directed subset, which like an ideal contains upper bounds of finite subsets, but does not have to be a lower set. Furthermore, it is often generalized to preordered sets. A subset which is linearly ordered as a sub-poset is called a chain. The opposite notion, the antichain, is a subset that contains no two comparable elements; i.e. that is a discrete order. Related mathematical areas Although most mathematical areas use orders in one way or another, there are also a few theories that have relationships which go far beyond mere application. Together with their major points of contact with order theory, some of these are to be presented below. Universal algebra As already mentioned, the methods and formalisms of universal algebra are an important tool for many order theoretic considerations. Besides formalizing orders in terms of algebraic structures that satisfy certain identities, one can also establish other connections to algebra. An example is given by the correspondence between Boolean algebras and Boolean rings. Other issues are concerned with the existence of free constructions, such as free lattices based on a given set of generators. Furthermore, closure operators are important in the study of universal algebra. Topology In topology, orders play a very prominent role. In fact, the collection of open sets provides a classical example of a complete lattice, more precisely a complete Heyting algebra (or "frame" or "locale"). Filters and nets are notions closely related to order theory and the closure operator of sets can be used to define a topology. Beyond these relations, topology can be looked at solely in terms of the open set lattices, which leads to the study of pointless topology. Furthermore, a natural preorder of elements of the underlying set of a topology is given by the so-called specialization order, which is actually a partial order if the topology is T0. Conversely, in order theory, one often makes use of topological results.
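The definition of the upper closure given in the subsection above translates directly into a one-line computation. A minimal Python sketch for the divisibility order (names and ranges are illustrative):

```python
def upper_closure(S, P, leq):
    """{x in P | there is some y in S with y <= x}."""
    return {x for x in P if any(leq(y, x) for y in S)}

P = range(1, 13)
def divides(a, b):
    return b % a == 0

print(sorted(upper_closure({2, 3}, P, divides)))
# [2, 3, 4, 6, 8, 9, 10, 12] -- the multiples of 2 or 3 up to 12
```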
There are various ways to define subsets of an order which can be considered as open sets of a topology. Considering topologies on a poset (X, ≤) that in turn induce ≤ as their specialization order, the finest such topology is the Alexandrov topology, given by taking all upper sets as opens. Conversely, the coarsest topology that induces the specialization order is the upper topology, having the complements of principal ideals (i.e. sets of the form {y in X | y ≤ x} for some x) as a subbase. Additionally, a topology with specialization order ≤ may be order consistent, meaning that its open sets are "inaccessible by directed suprema" (with respect to ≤). The finest order consistent topology is the Scott topology, which is coarser than the Alexandrov topology. A third important topology in this spirit is the Lawson topology. There are close connections between these topologies and the concepts of order theory. For example, a function preserves directed suprema if and only if it is continuous with respect to the Scott topology (for this reason this order theoretic property is also called Scott-continuity). Category theory The visualization of orders with Hasse diagrams has a straightforward generalization: instead of displaying lesser elements below greater ones, the direction of the order can also be depicted by giving directions to the edges of a graph. In this way, each order is seen to be equivalent to a directed acyclic graph, where the nodes are the elements of the poset and there is a directed path from a to b if and only if a ≤ b. Dropping the requirement of being acyclic, one can also obtain all preorders. When equipped with all transitive edges, these graphs in turn are just special categories, where elements are objects and each set of morphisms between two elements is at most a singleton. Functions between orders become functors between categories. Many ideas of order theory are just concepts of category theory in the small. For example, an infimum is just a categorical product. More generally, one can capture infima and suprema under the abstract notion of a categorical limit (or colimit, respectively). Another place where categorical ideas occur is the concept of a (monotone) Galois connection, which is just the same as a pair of adjoint functors. But category theory also has its impact on order theory on a larger scale. Classes of posets with appropriate functions as discussed above form interesting categories. Often one can also state constructions of orders, like the product order, in terms of categories. Further insights result when categories of orders are found categorically equivalent to other categories, for example of topological spaces. This line of research leads to various representation theorems, often collected under the label of Stone duality. History As explained before, orders are ubiquitous in mathematics. However, the earliest explicit mentions of partial orders are probably to be found not before the 19th century. In this context the works of George Boole are of great importance. Moreover, works of Charles Sanders Peirce, Richard Dedekind, and Ernst Schröder also consider concepts of order theory. Contributors to ordered geometry were listed in a 1961 textbook. In 1901 Bertrand Russell wrote "On the Notion of Order" exploring the foundations of the idea through generation of series. He returned to the topic in part IV of The Principles of Mathematics (1903).
Russell noted that a binary relation aRb has a sense proceeding from a to b, with the converse relation having an opposite sense, and sense "is the source of order and series." (p 95) He acknowledged that Immanuel Kant was "aware of the difference between logical opposition and the opposition of positive and negative". He wrote that Kant deserves credit as he "first called attention to the logical importance of asymmetric relations." The term poset as an abbreviation for partially ordered set is attributed to Garrett Birkhoff in the second edition of his influential book Lattice Theory.
Mathematics
Order theory
null
362348
https://en.wikipedia.org/wiki/Terahertz%20radiation
Terahertz radiation
Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the International Telecommunication Union-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz. One terahertz is 10¹² Hz or 1,000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm to 0.1 mm = 100 μm. Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy. This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either. Compared to lower radio frequencies, terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air most of the energy is attenuated within a few meters, so it is not practical for long distance terrestrial radio communication. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high resolution images of the interior of solid objects. Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the "terahertz gap"; it is called a "gap" because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range is no longer possible with the conventional electronic devices used to generate radio waves and microwaves, requiring the development of new devices and techniques. Description Terahertz radiation falls in between infrared radiation and microwave radiation in the electromagnetic spectrum, and it shares some properties with each of these. Terahertz radiation travels in a line of sight and is non-ionizing. Like microwaves, terahertz radiation can penetrate a wide variety of non-conducting materials: clothing, paper, cardboard, wood, masonry, plastic and ceramics. The penetration depth is typically less than that of microwave radiation. Like infrared, terahertz radiation has limited penetration through fog and clouds and cannot penetrate liquid water or metal. Terahertz radiation can penetrate some distance through body tissue like x-rays, but unlike them is non-ionizing, so it is of interest as a replacement for medical X-rays. Due to its longer wavelength, images made using terahertz waves have lower resolution than X-rays and need to be enhanced (see figure at right). The earth's atmosphere is a strong absorber of terahertz radiation, so the range of terahertz radiation in air is limited to tens of meters, making it unsuitable for long-distance communications. However, at distances of ~10 meters the band may still allow many useful applications in imaging and construction of high bandwidth wireless networking systems, especially indoor systems. In addition, producing and detecting coherent terahertz radiation remains technically challenging, though inexpensive commercial sources now exist in the 0.3–1.0 THz range (the lower part of the spectrum), including gyrotrons, backward wave oscillators, and resonant-tunneling diodes.
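The quoted correspondence between band edges and wavelengths follows from the usual relation λ = c/f; a quick illustrative check in Python:

```python
c = 299_792_458.0  # speed of light, m/s

def wavelength_mm(f_thz):
    """Free-space wavelength in millimetres for a frequency given in THz."""
    return c / (f_thz * 1e12) * 1e3

print(wavelength_mm(0.3))  # ~1.0 mm, the lower edge of the ITU band
print(wavelength_mm(3.0))  # ~0.1 mm = 100 um, the upper edge
```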
Due to the small energy of THz photons, current THz devices require low temperature during operation to suppress environmental noise. Tremendous efforts have thus been put into THz research to improve the operation temperature, using different strategies such as optomechanical meta-devices. Sources Natural Terahertz radiation is emitted as part of the black-body radiation from anything with a temperature greater than about 2 kelvin. While this thermal emission is very weak, observations at these frequencies are important for characterizing cold 10–20 K cosmic dust in interstellar clouds in the Milky Way galaxy, and in distant starburst galaxies. Telescopes operating in this band include the James Clerk Maxwell Telescope, the Caltech Submillimeter Observatory and the Submillimeter Array at the Mauna Kea Observatory in Hawaii, the BLAST balloon borne telescope, the Herschel Space Observatory, the Heinrich Hertz Submillimeter Telescope at the Mount Graham International Observatory in Arizona, and the recently built Atacama Large Millimeter Array. Due to Earth's atmospheric absorption spectrum, the opacity of the atmosphere to submillimeter radiation restricts these observatories to very high altitude sites, or to space. Artificial Viable sources of terahertz radiation include the gyrotron, the backward wave oscillator ("BWO"), the molecular gas far-infrared laser, Schottky-diode multipliers, varactor (varicap) multipliers, the quantum-cascade laser, the free-electron laser, synchrotron light sources, photomixing sources, and single-cycle or pulsed sources used in terahertz time-domain spectroscopy such as photoconductive, surface field, photo-Dember and optical rectification emitters; electronic oscillators based on resonant tunneling diodes have been shown to operate up to 1.98 THz. There have also been solid-state sources of millimeter and submillimeter waves for many years. AB Millimeter in Paris, for instance, produces a system that covers the entire range from 8 GHz to 1,000 GHz with solid state sources and detectors. Nowadays, most time-domain work is done via ultrafast lasers. In mid-2007, scientists at the U.S. Department of Energy's Argonne National Laboratory, along with collaborators in Turkey and Japan, announced the creation of a compact device that could lead to portable, battery-operated terahertz radiation sources. The device uses high-temperature superconducting crystals, grown at the University of Tsukuba in Japan. These crystals comprise stacks of Josephson junctions, which exhibit a property known as the Josephson effect: when external voltage is applied, alternating current flows across the junctions at a frequency proportional to the voltage. This alternating current induces an electromagnetic field. A small voltage (around two millivolts per junction) can induce frequencies in the terahertz range. In 2008, engineers at Harvard University achieved room temperature emission of several hundred nanowatts of coherent terahertz radiation using a semiconductor source. THz radiation was generated by nonlinear mixing of two modes in a mid-infrared quantum cascade laser. Previous sources had required cryogenic cooling, which greatly limited their use in everyday applications. In 2009, it was discovered that the act of unpeeling adhesive tape generates non-polarized terahertz radiation, with a narrow peak at 2 THz and a broader peak at 18 THz.
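The voltage-to-frequency conversion described for the Josephson-junction device above is the AC Josephson relation f = 2eV/h; a rough estimate using CODATA constants confirms that about two millivolts per junction lands near 1 THz:

```python
e = 1.602176634e-19  # elementary charge, C
h = 6.62607015e-34   # Planck constant, J*s

def josephson_freq_thz(volts_per_junction):
    """AC Josephson relation f = 2eV/h, returned in THz."""
    return 2 * e * volts_per_junction / h / 1e12

print(josephson_freq_thz(2e-3))  # ~0.97 THz for ~2 mV per junction
```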
The mechanism of its creation is tribocharging of the adhesive tape and subsequent discharge; this was hypothesized to involve bremsstrahlung with absorption or energy density focusing during dielectric breakdown of a gas. In 2013, researchers at Georgia Institute of Technology's Broadband Wireless Networking Laboratory and the Polytechnic University of Catalonia developed a method to create a graphene antenna: an antenna that would be shaped into graphene strips from 10 to 100 nanometers wide and one micrometer long. Such an antenna could be used to emit radio waves in the terahertz frequency range. Terahertz gap In engineering, the terahertz gap is a frequency band in the THz region for which practical technologies for generating and detecting the radiation do not exist. It is defined as 0.1 to 10 THz (wavelengths of 3 mm to 30 μm), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz (a wavelength of 10 μm). Currently, at frequencies within this range, useful power generation and receiver technologies are inefficient and unfeasible. Mass production of devices in this range and operation at room temperature (at which energy kT is equal to the energy of a photon with a frequency of 6.2 THz) are mostly impractical. This leaves a gap between mature microwave technologies in the highest frequencies of the radio spectrum and the well-developed optical engineering of infrared detectors in their lowest frequencies. This radiation is mostly used in small-scale, specialized applications such as submillimetre astronomy. Research that attempts to resolve this issue has been conducted since the late 20th century. In 2024, German researchers published a TDLAS experiment at 4.75 THz performed in "infrared quality" with an uncooled pyroelectric receiver; the THz source was a cw DFB-QC laser operated at 43.3 K with laser currents between 480 mA and 600 mA. Closure of the terahertz gap Most vacuum electronic devices that are used for microwave generation can be modified to operate at terahertz frequencies, including the magnetron, gyrotron, synchrotron, and free-electron laser. Similarly, microwave detectors such as the tunnel diode have been re-engineered to detect at terahertz and infrared frequencies as well. However, many of these devices are in prototype form, are not compact, or exist at university or government research labs, without the benefit of cost savings due to mass production. Research Molecular biology Terahertz radiation has frequencies comparable to the motion of biomolecular systems in the course of their function (a frequency of 1 THz is equivalent to a timescale of 1 picosecond, so in particular the range of hundreds of GHz up to a few THz is comparable to biomolecular relaxation timescales of a few ps to a few ns). Modulation of biological and also neurological function is therefore possible using radiation in the range of hundreds of GHz up to a few THz at relatively low energies (without significant heating or ionisation), achieving either beneficial or harmful effects. Medical imaging Unlike X-rays, terahertz radiation is not ionizing radiation and its low photon energies in general do not damage living tissues and DNA. Some frequencies of terahertz radiation can penetrate several millimeters of tissue with low water content (e.g., fatty tissue) and reflect back. Terahertz radiation can also detect differences in water content and density of a tissue.
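The room-temperature figure quoted in the terahertz-gap discussion above can be recovered from f = kT/h; a quick check (298 K is an assumed room temperature):

```python
k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s

T = 298.0  # approximate room temperature, K
print(k * T / h / 1e12)  # ~6.2 THz: the photon frequency whose energy equals kT
```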
Such methods could allow effective detection of epithelial cancer with an imaging system that is safe, non-invasive, and painless. In response to the demand for COVID-19 screening, terahertz spectroscopy and imaging have been proposed as a rapid screening tool. The first images generated using terahertz radiation date from the 1960s; however, in 1995, images generated using terahertz time-domain spectroscopy attracted a great deal of interest. Some frequencies of terahertz radiation can be used for 3D imaging of teeth and may be more accurate than conventional X-ray imaging in dentistry. Security Terahertz radiation can penetrate fabrics and plastics, so it can be used in surveillance, such as security screening, to uncover concealed weapons on a person, remotely. This is of particular interest because many materials of interest have unique spectral "fingerprints" in the terahertz range. This offers the possibility to combine spectral identification with imaging. In 2002, the European Space Agency (ESA) Star Tiger team, based at the Rutherford Appleton Laboratory (Oxfordshire, UK), produced the first passive terahertz image of a hand. By 2004, ThruVision Ltd, a spin-out from the Council for the Central Laboratory of the Research Councils (CCLRC) Rutherford Appleton Laboratory, had demonstrated the world's first compact THz camera for security screening applications. The prototype system successfully imaged guns and explosives concealed under clothing. Passive detection of terahertz signatures avoids the bodily privacy concerns of other detection methods by being targeted to a very specific range of materials and objects. In January 2013, the NYPD announced plans to experiment with the new technology to detect concealed weapons, prompting Miami blogger and privacy activist Jonathan Corbett to file a lawsuit against the department in Manhattan federal court that same month, challenging such use: "For thousands of years, humans have used clothing to protect their modesty and have quite reasonably held the expectation of privacy for anything inside of their clothing, since no human is able to see through them." He sought a court order to prohibit using the technology without reasonable suspicion or probable cause. By early 2017, the department said it had no intention of ever using the sensors given to it by the federal government. Scientific use and imaging In addition to its current use in submillimetre astronomy, terahertz radiation spectroscopy could provide new sources of information for chemistry and biochemistry. Recently developed methods of THz time-domain spectroscopy (THz TDS) and THz tomography have been shown to be able to image samples that are opaque in the visible and near-infrared regions of the spectrum. The utility of THz-TDS is limited when the sample is very thin, or has a low absorbance, since it is very difficult to distinguish changes in the THz pulse caused by the sample from those caused by long-term fluctuations in the driving laser source or experiment. However, THz-TDS produces radiation that is both coherent and spectrally broad, so such images can contain far more information than a conventional image formed with a single-frequency source. Submillimeter waves are used in physics to study materials in high magnetic fields, since at high fields (over about 11 tesla), the electron spin Larmor frequencies are in the submillimeter band.
Many high-magnetic field laboratories perform these high-frequency EPR experiments, such as the National High Magnetic Field Laboratory (NHMFL) in Florida. Terahertz radiation could let art historians see murals hidden beneath coats of plaster or paint in centuries-old buildings, without harming the artwork. In addition, THz imaging has been performed with lens antennas to capture a radio image of the object. Particle accelerators New types of particle accelerators that could achieve accelerating gradients of multiple gigaelectronvolts per metre (GeV/m) are of utmost importance to reduce the size and cost of future generations of high energy colliders, as well as to provide widespread availability of compact accelerator technology to smaller laboratories around the world. Gradients on the order of 100 MeV/m have been achieved by conventional techniques and are limited by RF-induced plasma breakdown. Beam driven dielectric wakefield accelerators (DWAs) typically operate in the terahertz frequency range, which pushes the plasma breakdown threshold for surface electric fields into the multi-GV/m range. The DWA technique can accommodate a significant amount of charge per bunch and gives access to conventional fabrication techniques for the accelerating structures. To date, 0.3 GeV/m accelerating and 1.3 GeV/m decelerating gradients have been achieved using a dielectric lined waveguide with sub-millimetre transverse aperture. An accelerating gradient larger than 1 GeV/m can potentially be produced by the Cherenkov Smith-Purcell radiative mechanism in a dielectric capillary with a variable inner radius. When an electron bunch propagates through the capillary, its self-field interacts with the dielectric material and produces wakefields that propagate inside the material at the Cherenkov angle. The wakefields are slowed down below the speed of light, as the relative dielectric permittivity of the material is larger than 1. The radiation is then reflected from the capillary's metallic boundary and diffracted back into the vacuum region, producing high accelerating fields on the capillary axis with a distinct frequency signature. In the presence of a periodic boundary, the Smith-Purcell radiation imposes frequency dispersion. A preliminary study with corrugated capillaries has shown some modification to the spectral content and amplitude of the generated wakefields, but the possibility of using the Smith-Purcell effect in DWAs is still under consideration. Communication The high atmospheric absorption of terahertz waves limits the range of communication using existing transmitters and antennas to tens of meters. However, the huge unallocated bandwidth available in the band (ten times the bandwidth of the millimeter wave band, 100 times that of the SHF microwave band) makes it very attractive for future data transmission and networking use. There are tremendous difficulties in extending the range of THz communication through the atmosphere, but the world telecommunications industry is funding much research into overcoming those limitations. One promising application area is the 6G cellphone and wireless standard, which is expected to supersede the current 5G standard around 2030. For a given antenna aperture, the gain of directive antennas scales with the square of frequency, while for low power transmitters the power efficiency is independent of bandwidth.
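The frequency-squared gain scaling for a fixed aperture follows from the standard relation G = η(πD/λ)²; a rough illustrative sketch (the dish diameter and efficiency below are assumptions for the example, not values from the text):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def aperture_gain_db(diameter_m, f_hz, efficiency=0.6):
    """Gain of a circular aperture: G = efficiency * (pi * D / lambda)**2."""
    lam = c / f_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / lam) ** 2)

# A 2 cm dish gains ~20 dB for every tenfold increase in frequency:
print(aperture_gain_db(0.02, 60e9))   # ~20 dB at 60 GHz
print(aperture_gain_db(0.02, 600e9))  # ~40 dB at 600 GHz
```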
So the consumption factor theory of communication links indicates that, contrary to conventional engineering wisdom, for a fixed aperture it is more efficient in bits per second per watt to use higher frequencies in the millimeter wave and terahertz range. Small directive antennas a few centimeters in diameter can produce very narrow 'pencil' beams of THz radiation, and phased arrays of multiple antennas could concentrate virtually all the power output on the receiving antenna, allowing communication at longer distances. In May 2012, a team of researchers from the Tokyo Institute of Technology published in Electronics Letters that it had set a new record for wireless data transmission by using T-rays and proposed they be used as bandwidth for data transmission in the future. The team's proof of concept device used a resonant tunneling diode (RTD) negative resistance oscillator to produce waves in the terahertz band. With this RTD, the researchers sent a signal at 542 GHz, resulting in a data transfer rate of 3 gigabits per second. It doubled the record for data transmission rate set the previous November. The study suggested that Wi-Fi using the system would be limited to approximately , but could allow data transmission at up to 100 Gbit/s. In 2011, Japanese electronic parts maker Rohm and a research team at Osaka University produced a chip capable of transmitting 1.5 Gbit/s using terahertz radiation. Potential uses exist in high-altitude telecommunications, above altitudes where water vapor causes signal absorption: aircraft to satellite, or satellite to satellite. Amateur radio A number of administrations permit amateur radio experimentation within the 275–3,000 GHz range or at even higher frequencies on a national basis, under license conditions that are usually based on RR5.565 of the ITU Radio Regulations. Amateur radio operators utilizing submillimeter frequencies often attempt to set two-way communication distance records. In the United States, WA1ZMS and W4WWQ set a record of on 403 GHz using CW (Morse code) on 21 December 2004. In Australia, at 30 THz a distance of was achieved by stations VK3CV and VK3LN on 8 November 2020. Manufacturing Many possible uses of terahertz sensing and imaging are proposed in manufacturing, quality control, and process monitoring. These generally exploit the fact that plastics and cardboard are transparent to terahertz radiation, making it possible to inspect packaged goods. The first imaging system based on optoelectronic terahertz time-domain spectroscopy was developed in 1995 by researchers from AT&T Bell Laboratories and was used for producing a transmission image of a packaged electronic chip. This system used pulsed laser beams with durations in the range of picoseconds. Since then, commonly used commercial/research terahertz imaging systems have used pulsed lasers to generate terahertz images. The image can be developed based on either the attenuation or phase delay of the transmitted terahertz pulse. Since the beam is scattered more at the edges and different materials have different absorption coefficients, images based on attenuation indicate edges and different materials inside objects. This approach is similar to X-ray transmission imaging, where images are developed based on attenuation of the transmitted beam. In the second approach, terahertz images are developed based on the time delay of the received pulse.
In this approach, thicker parts of the objects are well recognized, as the thicker parts cause more time delay of the pulse. The energy of the laser spots is distributed according to a Gaussian function. The geometry and behavior of a Gaussian beam in the Fraunhofer region imply that electromagnetic beams diverge more as their frequencies decrease, and thus the resolution decreases. This implies that terahertz imaging systems have higher resolution than a scanning acoustic microscope (SAM) but lower resolution than X-ray imaging systems. Although terahertz can be used for inspection of packaged objects, it suffers from low resolution for fine inspections. An X-ray image and terahertz images of an electronic chip are shown in the figure on the right. The resolution of the X-ray image is clearly higher than that of the terahertz image, but X-rays are ionizing and can have harmful effects on certain objects such as semiconductors and live tissues. To overcome the low resolution of terahertz systems, near-field terahertz imaging systems are under development. In near-field imaging, the detector needs to be located very close to the surface of the sample, and thus imaging of thick packaged objects may not be feasible. In another attempt to increase the resolution, laser beams with frequencies higher than terahertz are used to excite the p-n junctions in semiconductor objects; the excited junctions generate terahertz radiation as a result, as long as their contacts are unbroken, and in this way damaged devices can be detected. In this approach, since the absorption increases exponentially with the frequency, inspection of thick packaged semiconductors again may not be feasible. Consequently, a tradeoff between the achievable resolution and the penetration depth of the beam in the packaging material should be considered. THz gap research Ongoing investigation has resulted in improved emitters (sources) and detectors, and research in this area has intensified. However, drawbacks remain that include the substantial size of emitters, incompatible frequency ranges, and undesirable operating temperatures, as well as component, device, and detector requirements that are somewhere between solid state electronics and photonic technologies. Free-electron lasers can generate a wide range of stimulated emission of electromagnetic radiation, from microwaves through terahertz radiation to X-rays. However, they are bulky, expensive and not suitable for applications that require critical timing (such as wireless communications). Other sources of terahertz radiation which are actively being researched include solid state oscillators (through frequency multiplication), backward wave oscillators (BWOs), quantum cascade lasers, and gyrotrons. Safety The terahertz region is between the radio frequency region and the laser optical region. Both the IEEE C95.1–2005 RF safety standard and the ANSI Z136.1–2007 laser safety standard have limits into the terahertz region, but both safety limits are based on extrapolation. It is expected that effects on biological tissues are thermal in nature and, therefore, predictable by conventional thermal models. Research is underway to collect data to populate this region of the spectrum and validate safety limits.
A theoretical study published in 2010 and conducted by Alexandrov et al. at the Center for Nonlinear Studies at Los Alamos National Laboratory in New Mexico created mathematical models predicting how terahertz radiation would interact with double-stranded DNA. The models showed that, even though the forces involved seem to be tiny, nonlinear resonances (although much less likely to form than less-powerful common resonances) could allow terahertz waves to "unzip double-stranded DNA, creating bubbles in the double strand that could significantly interfere with processes such as gene expression and DNA replication". Experimental verification of this simulation was not done. Swanson's 2010 theoretical treatment of the Alexandrov study concludes that the DNA bubbles do not occur under reasonable physical assumptions or if the effects of temperature are taken into account. A bibliographical study published in 2003 reported that T-ray intensity drops to less than 1% in the first 500 μm of skin, but stressed that "there is currently very little information about the optical properties of human tissue at terahertz frequencies".
Physical sciences
Electromagnetic radiation
Physics
362594
https://en.wikipedia.org/wiki/Hercules%20%28constellation%29
Hercules (constellation)
Hercules is a constellation named after Hercules, the Roman mythological hero adapted from the Greek hero Heracles. Hercules was one of the 48 constellations listed by the second-century astronomer Ptolemy, and it remains one of the 88 modern constellations today. It is the fifth-largest of the modern constellations and is the largest of the 50 which have no stars brighter than apparent magnitude +2.5. Characteristics Hercules is bordered by Draco to the north; Boötes, Corona Borealis, and Serpens Caput to the west; Ophiuchus to the south; Aquila to the southwest; and Sagitta, Vulpecula, and Lyra to the east. Covering 1225.1 square degrees and 2.970% of the night sky, it ranks fifth among the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is 'Her'. The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 32 segments (illustrated in infobox). In the equatorial coordinate system, epoch 2000, the right ascension coordinates of these borders lie between and , while the declination coordinates are between +3.67° and +51.32°. In mid-northern latitudes, Hercules is best observed from mid-spring until early autumn, culminating at midnight on June 13. The solar apex is the direction of the Sun's motion with respect to the Local Standard of Rest. This is located within the constellation of Hercules, around coordinates right ascension and declination . The north pole of the supergalactic coordinate system is located within this constellation at right ascension and declination . Stars Hercules has no first or second magnitude stars. However, it does have several stars above magnitude 4. Alpha Herculis, traditionally called Rasalgethi, is a triple star system, partly resolvable in small amateur telescopes, 359 light-years from Earth. Its common name means "the kneeler's head". The primary is an irregular variable star; it is a bright giant with a minimum magnitude of 4 and a maximum magnitude of 3. It has a diameter of roughly 400 solar diameters. The secondary, a spectroscopic binary that orbits the primary every 3600 years, is a blue-green hued star of magnitude 5.6. Beta Herculis, also called Kornephoros, is the brightest star in Hercules. It is a yellow giant of magnitude 2.8, 148 light-years from Earth; kornephoros means club-bearer. Delta Herculis A is a double star divisible in small amateur telescopes. The primary is a blue-white star of magnitude 3.1, and is 78 light-years from Earth. The optical companion is of magnitude 8.2. Gamma Herculis is also a double star divisible in small amateur telescopes. The primary is a white giant of magnitude 3.8, 195 light-years from Earth. The optical companion, widely separated, is 10th magnitude. Zeta Herculis is a binary star that is becoming divisible in medium-aperture amateur telescopes, as the components widen to their peak in 2025. The system, 35 light-years from Earth, has a period of 34.5 years. The primary is a yellow-tinged star of magnitude 2.9 and the secondary is an orange star of magnitude 5.7. Hercules hosts several other fairly bright double and binary stars. Kappa Herculis is a double star divisible in small amateur telescopes. The primary is a yellow giant of magnitude 5.0, 388 light-years from Earth; the secondary is an orange giant of magnitude 6.3, 470 light-years from Earth. Rho Herculis is a binary star 402 light-years from Earth, divisible in small amateur telescopes.
Both components are blue-green giant stars; the primary is magnitude 4.5 and the secondary is magnitude 5.5. 95 Herculis is a binary star divisible in small telescopes, 470 light-years from Earth. The primary is a silvery giant of magnitude 4.9, and the secondary is an old, reddish giant star of magnitude 5.2. The star HD 164669 near the primary may be an optical double. 100 Herculis is a double star easily divisible in small amateur telescopes. Both components are magnitude 5.8 blue-white stars; they are 165 and 230 light-years from Earth. There are several dimmer variable stars in Hercules. 30 Herculis, also called g Herculis, is a semiregular red giant with a period of 3 months. 361 light-years from Earth, it has a minimum magnitude of 6.3 and a maximum magnitude of 4.3. 68 Herculis, also called u Herculis, is a Beta Lyrae-type eclipsing binary star. 865 light-years from Earth, it has a period of 2 days; its minimum magnitude is 5.4 and its maximum magnitude is 4.7. Mu Herculis is 27.4 light-years from Earth. The solar apex, i.e., the point on the sky which marks the direction that the Sun is moving in its orbit around the center of the Milky Way, falls just within Hercules, between Hercules' left elbow (near Omicron Herculis) and Vega (in neighboring Lyra). Planetary systems Fifteen stars in Hercules are known to be orbited by extrasolar planets. 14 Herculis has two planets. The planet 14 Herculis b had the longest period (4.9 years) and widest orbit (2.8 AU) at the time of discovery. The planet 14 Herculis c orbits much further out with very low eccentricity. It was discovered in 2005 but was only confirmed in 2021. HD 149026 has a transiting hot Jupiter planet. HD 154345 has the planet HD 154345 b, with a long period (9.095 years) and a wide orbit (4.18 AU). HD 164922 hosts the first long-period Saturn-like planet discovered, with a mass of 0.36 MJ and a semimajor axis of 2.11 AU; more planets in the system have been discovered since. HD 147506 hosts HAT-P-2b, the most massive transiting planet known at the time of its discovery, with a mass of 8.65 MJ. HD 155358 has two planets around the lowest-metallicity planet-harboring star (21% of the Sun's metallicity); both planets orbit with mild eccentricities. GSC 03089-00929 has a transiting planet, TrES-3b, with a short period of 31 hours. Gliese 649, a red dwarf, has a Saturn-mass planet. HD 156668 has a low-mass planet with a minimum mass of four Earth masses. HD 164595 is a G-type star with one known planet, HD 164595 b. TOI-561 has four, or possibly five, planets; the innermost, TOI-561 b, is notable because it is an ultra-short-period planet. Deep-sky objects Hercules contains two bright globular clusters: M13, the brightest globular cluster in the northern hemisphere, and M92. It also contains the nearly spherical planetary nebula Abell 39. M13 lies between the stars η Her and ζ Her; it is dim, but may be detected by the unaided eye on a very clear night. M13, visible to both the naked eye and binoculars, is a globular cluster of the 6th magnitude that contains more than 300,000 stars and is 25,200 light-years from Earth. It is also very large, with an apparent diameter of over 0.25 degrees, half the size of the full moon; its physical diameter is more than 100 light-years. Individual stars in M13 are resolvable in a small amateur telescope. M92 is a globular cluster of magnitude 6.4, 26,000 light-years from Earth. It is a Shapley class IV cluster, indicating that it is quite concentrated at the center; it has a very clear nucleus.
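The physical diameter quoted above for M13 is consistent with its distance and angular size under the small-angle approximation; a quick check:

```python
import math

distance_ly = 25_200          # distance to M13 quoted above
apparent_diameter_deg = 0.25  # angular size quoted above

# Small-angle approximation: physical size = distance * angle (in radians).
print(distance_ly * math.radians(apparent_diameter_deg))  # ~110 light-years
```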
M92 is visible as a fuzzy star in binoculars, like M13; it is denser and smaller than the more celebrated cluster. The oldest globular cluster known at 14 billion years, its stars are resolvable in a medium-aperture amateur telescope. NGC 6229 is a dimmer globular cluster; with a magnitude of 9.4, it is the third-brightest globular in the constellation. 100,000 light-years from Earth, it is a Shapley class IV cluster, meaning that it is fairly rich in the center and quite concentrated at the nucleus. NGC 6210 is a planetary nebula of the 9th magnitude, 4000 light-years from Earth, visible as a blue-green elliptical disk in amateur telescopes larger than 75 mm in aperture. AT2018cow was a large astronomical explosion detected on 16 June 2018. As of 22 June 2018, the event had generated a very large amount of interest among astronomers throughout the world and was tentatively considered a supernova, named Supernova 2018cow. The Hercules Cluster (Abell 2151) is a cluster of galaxies in Hercules. The brightest radio source in the constellation is Hercules A, an elliptical galaxy located 2.1 billion light-years away, with a 2.5-billion-solar-mass supermassive black hole and radio jets that extend for one-and-a-half million light-years. Another bright radio source in Hercules is the quasar 3C 345, which has a jet that appears to move faster than the speed of light. The Hercules–Corona Borealis Great Wall, the largest known structure in the universe, is in Hercules. Visualizations Traditional The traditional visualization imagines α Herculis as Hercules's head; its name, Rasalgethi, literally means "head of the kneeling one". Hercules's left hand then points toward Lyra from his shoulder (δ Herculis), and β Herculis, or Kornephoros ("club-bearer") forms his other shoulder. His narrow waist is formed by ε Herculis and ζ Herculis. His right leg is kneeling. Finally, his left leg (with θ Herculis as the knee and ι Herculis the foot) is stepping on Draco's head, the dragon/snake whom Hercules has vanquished and over whom he perpetually gloats. Keystone asterism A common form found in modern star charts uses the quadrangle formed by π Her, η Her, ζ Her and ε Her (known as the "Keystone" asterism) as the lower half (abdomen) of Hercules's torso. H.A. Rey H. A. Rey has suggested an alternative visualization in which the "Keystone" becomes Hercules's head. This quadrangle lies between two very bright stars: Vega in the constellation Lyra and α CrB (Alphecca) in the constellation Corona Borealis. The hero's right leg contains two bright stars of the third magnitude: α Her (Rasalgethi) and δ Her (Sarin). The latter is the right knee. The hero's left leg contains dimmer stars of the fourth magnitude which do not have Bayer designations but which do have Flamsteed numbers. The star β Her belongs to the hero's outstretched right hand, and is also called Kornephoros. History According to Gavin White, the Greek constellation of Hercules is a distorted version of the Babylonian constellation known as the "Standing Gods" (MUL.DINGIR.GUB.BA.MESH). White argues that this figure was, like the similarly named "Sitting Gods", depicted as a man with a serpent's body instead of legs (the serpent element now being represented on the Greek star map by the figure of Draco that Hercules crushes beneath his feet).
He further argues that the original name of Hercules – the 'Kneeler' (see below) – is a conflation of the two Babylonian constellations of the Sitting and Standing Gods. The constellation is also sometimes associated with Gilgamesh, a Sumerian mythological hero. Phoenician tradition is said to have associated this constellation with their sun god, who slew a dragon (Draco). The earliest Greek references to the constellation do not refer to it as Hercules. Aratus describes it as follows: Right there in its [Draco's] orbit wheels a Phantom form, like to a man that strives at a task. That sign no man knows how to read clearly, nor on what task he is bent, but men simply call him On His Knees ("the Kneeler"). Now that Phantom, that toils on his knees, seems to sit on bended knee, and from both his shoulders his hands are upraised and stretch, one this way, one that, a fathom's length. Over the middle of the head of the crooked Dragon, he has the tip of his right foot. Here too that Crown [Corona], which glorious Dionysus set to be memorial of the dead Ariadne, wheels beneath the back of the toil-spent Phantom. To the Phantom's back the Crown is near, but by his head mark near at hand the head of Ophiuchus [...] Yonder, too, is the tiny Tortoise, which, while still beside his cradle, Hermes pierced for strings and bade it be called the Lyre [Lyra]: and he brought it into heaven and set it in front of the unknown Phantom. That Croucher on his Knees comes near the Lyre with his left knee, but the top of the Bird's head wheels on the other side, and between the Bird's head and the Phantom's knee is enstarred the Lyre. The constellation is connected with Hercules in the Astronomica (probably 1st century BCE/CE, attributed to Hyginus), which describes several different myths about the constellation: Eratosthenes (3rd century BCE) is said to have described it as Hercules, placed above Draco (representing the dragon of the Hesperides) and preparing to fight it, holding his lion's skin in his left hand, and a club in his right (this can be found in the Catasterismi). Panyassis (5th century BCE) reportedly said Jupiter was impressed by this fight, and made it a constellation, with Hercules kneeling on his right knee, and trying to crush Draco's head with his left foot, while striking with his right hand and holding the lion skin in his left. Araethus (3rd/4th century BCE) is said to have described the constellation as depicting Ceteus son of Lycaon, imploring the gods to restore his daughter Megisto who had been transformed into a bear. Hegesianax (2nd/3rd century BCE), who is said to describe it as Theseus lifting the stone at Troezen. Anacreon of Alexandria, who is claimed also to support the idea that it depicts Theseus, saying that the constellation Lyra (said to be Theseus' lyre in other sources) is near Theseus. Thamyris blinded by the Muses, kneeling in supplication. Orpheus killed by the women of Thracia for seeing the sacred rituals of Liber (Dionysus). Aeschylus' lost play Prometheus Unbound (5th century BCE), which recounted that when Hercules drives the cattle of Geryon through Liguria (northern Italy), the Ligurians will join forces and attack him, attempting to steal the cattle. Hercules fights until his weapons break, before falling to his knees, wounded. Jupiter, taking pity on his son, provides many stones on the ground, which Hercules uses to fight off the Ligurians. In commemoration of this, Jupiter makes a constellation depicting Hercules in his fighting form.
(A quote from this section of the play is preserved in Dionysius of Halicarnassus' Roman Antiquities: "And thou shalt come to Liguria's dauntless host, Where no fault shalt thou find, bold though thou art, With the fray: 'tis fated thy missiles all shall fail.") Ixion with his arms bound for trying to attack Juno. Prometheus bound on Mount Caucasus. The Scholia to Aratus mention three more mythical figures in connection with this constellation: Sisyphus or Tantalus, who suffered in Tartarus for having offended the gods, or Salmoneus, who was struck down by Zeus for his hubris. Another classical author associated the constellation with Atlas. Equivalents In Chinese astronomy, the stars that correspond to Hercules are located in two areas: the Purple Forbidden enclosure (紫微垣, Zǐ Wēi Yuán) and the Heavenly Market enclosure (天市垣, Tiān Shì Yuán). Arab translators of Ptolemy named it in (not to be confused with ), the name for the star Mu Draconis. Hence its Swahili name .
Physical sciences
Other
Astronomy
362631
https://en.wikipedia.org/wiki/Bar%20%28unit%29
Bar (unit)
The bar is a metric unit of pressure defined as 100,000 Pa (100 kPa), though not part of the International System of Units (SI). A pressure of 1 bar is slightly less than the current average atmospheric pressure on Earth at sea level (approximately 1.013 bar). By the barometric formula, 1 bar is roughly the atmospheric pressure on Earth at an altitude of 111 metres at 15 °C. The bar and the millibar were introduced by the Norwegian meteorologist Vilhelm Bjerknes, who was a founder of the modern practice of weather forecasting, with the bar defined as one megadyne per square centimeter. The SI brochure, despite previously mentioning the bar, now omits any mention of it. The bar has been legally recognised in countries of the European Union since 2004. The US National Institute of Standards and Technology (NIST) deprecates its use except for "limited use in meteorology" and lists it as one of several units that "must not be introduced in fields where they are not presently used". The International Astronomical Union (IAU) also lists it under "Non-SI units and symbols whose continued use is deprecated". Units derived from the bar include the megabar (symbol: Mbar), kilobar (symbol: kbar), decibar (symbol: dbar), centibar (symbol: cbar), and millibar (symbol: mbar). Definition and conversion The bar is defined using the SI derived unit, the pascal: 1 bar ≡ 100,000 Pa ≡ 100,000 N/m². Thus, 1 bar is equal to 1,000,000 Ba (barye) (in cgs units), and 1 bar is approximately equal to 1,019.716 centimetres of water (cmH2O) (1 bar approximately corresponds to the gauge pressure of water at a depth of 10 meters). 1 millibar (mbar) is equal to 0.001 bar, i.e. 100 Pa or 1 hPa. Origin The word bar has its origin in the Ancient Greek word βάρος (báros), meaning weight. The unit's official symbol is bar; the earlier symbol b is now deprecated and conflicts with the uses of b denoting the unit barn or bit, but it is still encountered, especially as mb (rather than the proper mbar) to denote the millibar. Between 1793 and 1795, the word bar was used for a unit of mass (equal to the modern tonne) in an early version of the metric system. Usage Atmospheric air pressure is often given in millibars, where standard atmospheric pressure is defined as 1013.25 mbar, 101.325 kPa, or 1.01325 bar, which is about 14.7 pounds per square inch. Despite the millibar not being an SI unit, meteorologists and weather reporters worldwide have long measured air pressure in millibars as the values are convenient. After the advent of SI units, some meteorologists began using hectopascals (symbol hPa) which are numerically equivalent to millibars; for the same reason, the hectopascal is now the standard unit used to express barometric pressures in aviation in most countries. For example, the weather office of Environment Canada uses kilopascals and hectopascals on their weather maps. In contrast, Americans are familiar with the use of the millibar in US reports of hurricanes and other cyclonic storms. In fresh water, there is an approximate numerical equivalence between the change in pressure in decibars and the change in depth from the water surface in metres. Specifically, an increase of 1 decibar occurs for every 1.019716 m increase in depth. In sea water, taking into account the variation of gravity, the latitude and the geopotential anomaly, pressure can be converted into depth in metres according to an empirical formula (UNESCO Tech. Paper 44, p. 25). As a result, the decibar is commonly used in oceanography. In scuba diving, the bar is also the most widely used unit to express pressure, e.g.
200 bar being a full standard scuba tank, and depth increments of 10 metres of seawater being equivalent to 1 bar of pressure. Many engineers worldwide use the bar as a unit of pressure because, in much of their work, using pascals would involve using very large numbers. In measurement of vacuum and in vacuum engineering, residual pressures are typically given in millibar, although torr or millimetres of mercury (mmHg) were historically common. Pressures resulting from deflagrations are often expressed in units of bar. In the automotive field, turbocharger boost is often described in bar outside the United States. Tire pressure is often specified in bar. In hydraulic machinery, components are rated to the maximum system oil pressure, which is typically in hundreds of bar; for example, 300 bar is common for industrial fixed machinery. In the maritime industry, pressures in piping systems, such as cooling water systems, are often measured in bar. Unicode has dedicated characters for "mb", "bar", and "millibar" spelt in katakana, but they exist only for compatibility with legacy Asian encodings and are not intended to be used in new documents. The kilobar, equivalent to 100 MPa, is commonly used in geological systems, particularly in experimental petrology. The abbreviations "bar(a)" and "bara" are sometimes used to indicate absolute pressures, and "bar(g)" and "barg" for gauge pressures. The usage is deprecated but still prevails in the oil industry (often capitalized as "BarG" and "BarA"). As gauge pressure is relative to the current ambient pressure, which may vary in absolute terms by about 50 mbar, "BarG" and "BarA" are not interconvertible. Fuller descriptions such as "gauge pressure of 2 bars" or "2-bar gauge" are recommended.
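A minimal sketch of the conversions described above, using only constants quoted in this article (the psi factor is approximate):

```python
# Pressure conversions for the bar, per the definitions above.
PA_PER_BAR = 100_000.0              # 1 bar = 100,000 Pa (exact, by definition)
PSI_PER_BAR = 14.5038               # approximate
M_FRESHWATER_PER_DBAR = 1.019716    # metres of depth per decibar in fresh water

def bar_to_pa(bar: float) -> float:
    return bar * PA_PER_BAR

def bar_to_psi(bar: float) -> float:
    return bar * PSI_PER_BAR

def dbar_to_freshwater_depth_m(dbar: float) -> float:
    """Approximate fresh-water depth corresponding to a gauge pressure in decibar."""
    return dbar * M_FRESHWATER_PER_DBAR

print(bar_to_pa(1.01325))               # standard atmosphere: 101325.0 Pa
print(bar_to_psi(1.01325))              # ~14.7 psi
print(dbar_to_freshwater_depth_m(10))   # ~10.2 m, i.e. 1 bar ~ 10 m of water
```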
Physical sciences
Pressure
Basics and measurement
362722
https://en.wikipedia.org/wiki/Heat%20death%20of%20the%20universe
Heat death of the universe
The heat death of the universe (also known as the Big Chill or Big Freeze) is a hypothesis on the ultimate fate of the universe, which suggests the universe will evolve to a state of no thermodynamic free energy, and will therefore be unable to sustain processes that increase entropy. Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes may no longer be exploited to perform work. In the language of physics, this is when the universe reaches thermodynamic equilibrium. If the curvature of the universe is hyperbolic or flat, or if dark energy is a positive cosmological constant, the universe will continue expanding forever, and a heat death is expected to occur, with the universe cooling to approach equilibrium at a very low temperature after a long time period. The hypothesis of heat death stems from the ideas of Lord Kelvin who, in the 1850s, took the theory of heat as mechanical energy loss in nature (as embodied in the first two laws of thermodynamics) and extrapolated it to larger processes on a universal scale. This also allowed Kelvin to formulate the heat death paradox, which disproves the existence of an infinitely old universe. Origins of the idea The idea of heat death stems from the second law of thermodynamics, of which one version states that entropy tends to increase in an isolated system. From this, the hypothesis implies that if the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed. In other words, according to this hypothesis, there is a tendency in nature towards the dissipation (energy transformation) of mechanical energy (motion) into thermal energy; hence, by extrapolation, there exists the view that, in time, the mechanical movement of the universe will run down as work is converted to heat because of the second law. The conjecture that all bodies in the universe cool off, eventually becoming too cold to support life, seems to have been first put forward by the French astronomer Jean Sylvain Bailly in 1777 in his writings on the history of astronomy and in the ensuing correspondence with Voltaire. In Bailly's view, all planets have an internal heat and are now at some particular stage of cooling. Jupiter, for instance, will remain too hot for life to arise there for thousands of years, while the Moon is already too cold. The final state, in this view, is described as one of "equilibrium" in which all motion ceases. The idea of heat death as a consequence of the laws of thermodynamics, however, was first proposed in loose terms beginning in 1851 by Lord Kelvin (William Thomson), who theorized further on the mechanical energy loss views of Sadi Carnot (1824), James Joule (1843) and Rudolf Clausius (1850). Thomson's views were then elaborated over the next decade by Hermann von Helmholtz and William Rankine. History The idea of the heat death of the universe derives from discussion of the application of the first two laws of thermodynamics to universal processes. Specifically, in 1851, Lord Kelvin outlined the view, based on recent experiments on the dynamical theory of heat: "heat is not a substance, but a dynamical form of mechanical effect, we perceive that there must be an equivalence between mechanical work and heat, as between cause and effect."
In 1852, Thomson published On a Universal Tendency in Nature to the Dissipation of Mechanical Energy, in which he outlined the rudiments of the second law of thermodynamics, summarized by the view that mechanical motion and the energy used to create that motion will naturally tend to dissipate or run down. The ideas in this paper, in relation to their application to the age of the Sun and the dynamics of the universal operation, attracted the likes of William Rankine and Hermann von Helmholtz. The three of them were said to have exchanged ideas on this subject. In 1862, Thomson published "On the age of the Sun's heat", an article in which he reiterated his fundamental beliefs in the indestructibility of energy (the first law) and the universal dissipation of energy (the second law), leading to diffusion of heat, cessation of useful motion (work), and exhaustion of potential energy, "lost irrecoverably" through the material universe, while clarifying his view of the consequences for the universe as a whole. In the article, Thomson compared the universe to a clock that must eventually run down; the example shows how Kelvin was unsure whether the universe would eventually achieve thermodynamic equilibrium. Thomson later speculated that restoring the dissipated energy as "vis viva", and then as usable work – thereby reversing the clock's direction and producing a "rejuvenating universe" – would require "a creative act or an act possessing similar power". Starting from this publication, Kelvin also introduced the heat death paradox (Kelvin's paradox), which challenged the classical concept of an infinitely old universe: since the universe has not yet achieved thermodynamic equilibrium, further work and entropy production are still possible. The existence of stars and temperature differences can be considered an empirical proof that the universe is not infinitely old. In the years following Thomson's 1852 and 1862 papers, Helmholtz and Rankine both credited Thomson with the idea, along with his paradox, but read further into his papers by publishing views stating that Thomson argued that the universe will end in a "heat death" (Helmholtz), which will be the "end of all physical phenomena" (Rankine). Current status Proposals about the final state of the universe depend on the assumptions made about its ultimate fate, and these assumptions have varied considerably over the late 20th century and early 21st century. In a hypothesized "open" or "flat" universe that continues expanding indefinitely, either a heat death or a Big Rip is expected to eventually occur. If the cosmological constant is zero, the universe will approach absolute zero temperature over a very long timescale. However, if the cosmological constant is positive, the temperature will asymptote to a non-zero positive value, and the universe will approach a state of maximum entropy in which no further work is possible. Time frame for heat death From the "Big Bang" through the present day, matter and dark matter in the universe are thought to have been concentrated in stars, galaxies, and galaxy clusters, and are presumed to remain so well into the future. Therefore, the universe is not in thermodynamic equilibrium, and objects can do physical work.:§VID The decay time for a supermassive black hole of roughly 1 galaxy mass (10^11 solar masses) because of Hawking radiation is on the order of 10^100 years, so entropy can be produced until at least that time.
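As a rough illustration of where that timescale comes from, here is a minimal sketch assuming the standard Hawking evaporation estimate t ≈ 5120·π·G²·M³/(ħ·c⁴) for a Schwarzschild black hole; the constants and the 10^11 solar-mass figure are the usual textbook values, not taken from this article:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34     # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
SECONDS_PER_YEAR = 3.156e7

def hawking_evaporation_time_years(mass_kg: float) -> float:
    """Evaporation time of a Schwarzschild black hole via Hawking radiation."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / SECONDS_PER_YEAR

# A galaxy-mass black hole of ~1e11 solar masses evaporates in ~2e100 years,
# consistent with the order-of-10^100-years figure quoted above.
print(f"{hawking_evaporation_time_years(1e11 * M_SUN):.1e} years")
```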
Some large black holes in the universe are predicted to continue to grow up to perhaps 10^14 solar masses during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10^106 years. After that time, the universe enters the so-called Dark Era and is expected to consist chiefly of a dilute gas of photons and leptons.:§VIA With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with extremely low energy levels and extremely long timescales. Speculatively, it is possible that the universe may enter a second inflationary epoch, or, assuming that the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.:§VE It is also possible that entropy production will cease and the universe will reach heat death.:§VID It is suggested that, over vast periods of time, a spontaneous entropy decrease would eventually occur via the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem. Through this, another universe could possibly be created by random quantum fluctuations or quantum tunnelling in roughly years. Opposing views Max Planck wrote that the phrase "entropy of the universe" has no meaning because it admits of no accurate definition. In 2008, Walter Grandy wrote: "It is rather presumptuous to speak of the entropy of a universe about which we still understand so little, and we wonder how one might define thermodynamic entropy for a universe and its major constituents that have never been in equilibrium in their entire existence." According to László Tisza, "If an isolated system is not in equilibrium, we cannot associate an entropy with it." Hans Adolf Buchdahl writes of "the entirely unjustifiable assumption that the universe can be treated as a closed thermodynamic system". According to Giovanni Gallavotti, "there is no universally accepted notion of entropy for systems out of equilibrium, even when in a stationary state". Discussing the question of entropy for non-equilibrium states in general, Elliott H. Lieb and Jakob Yngvason express their opinion as follows: "Despite the fact that most physicists believe in such a nonequilibrium entropy, it has so far proved impossible to define it in a clearly satisfactory way." In Peter Landsberg's opinion: "The third misconception is that thermodynamics, and in particular, the concept of entropy, can without further enquiry be applied to the whole universe. ... These questions have a certain fascination, but the answers are speculations." A 2010 analysis of entropy states, "The entropy of a general gravitational field is still not known", and "gravitational entropy is difficult to quantify". The analysis considers several possible assumptions that would be needed for estimates and suggests that the observable universe has more entropy than previously thought, because the analysis concludes that supermassive black holes are the largest contributor. Lee Smolin goes further: "It has long been known that gravity is important for keeping the universe out of thermal equilibrium. Gravitationally bound systems have negative specific heat—that is, the velocities of their components increase when energy is removed. ... Such a system does not evolve toward a homogeneous equilibrium state. Instead it becomes increasingly structured and heterogeneous as it fragments into subsystems." This point of view is also supported by the recent experimental discovery of a stable non-equilibrium steady state in a relatively simple closed system.
An isolated system that has fragmented into subsystems should therefore not be expected to come to thermodynamic equilibrium; it may instead remain in a non-equilibrium steady state. Entropy will be transmitted from one subsystem to another, but its production will be zero, which does not contradict the second law of thermodynamics. In popular culture In Isaac Asimov's 1956 short story The Last Question, humans repeatedly wonder how the heat death of the universe can be avoided. In the 1981 Doctor Who story "Logopolis", the Doctor realizes that the Logopolitans have created vents in the universe to expel heat build-up into other universes—"Charged Vacuum Emboitments" or "CVE"—to delay the demise of the universe. The Doctor unwittingly travelled through such a vent in "Full Circle". In the 1995 computer game I Have No Mouth, and I Must Scream, based on Harlan Ellison's short story of the same name, it is stated that AM, the malevolent supercomputer, will survive the heat death of the universe and continue torturing its immortal victims for eternity. In the 2011 anime series Puella Magi Madoka Magica, the antagonist Kyubey reveals he is a member of an alien race that has been creating magical girls for millennia in order to harvest their energy to combat entropy and stave off the heat death of the universe. In the last act of Final Fantasy XIV: Endwalker, the player encounters an alien race known as the Ea, who have lost all hope in the future and any desire to live further because they have learned of the eventual heat death of the universe and see everything else as pointless in the face of its probable inevitability. The overarching plot of the Xeelee Sequence concerns the Photino Birds' efforts to accelerate the heat death of the universe by accelerating the rate at which stars become white dwarfs. The 2019 hit indie video game Outer Wilds has several themes grappling with the idea of the heat death of the universe, including the theory that the universe is a cycle of big bangs, each occurring after the previous universe has experienced a heat death. In "Singularity Immemorial", the seventh main story event of the mobile game Girls' Frontline: Neural Cloud, the plot concerns a virtual sector made to simulate space exploration and the threat of the heat death of the universe. The simulation uses an imitation of Neural Cloud's virus entities, known as the Entropics, as a stand-in for the effects of a heat death.
Physical sciences
Physical cosmology
Astronomy
362771
https://en.wikipedia.org/wiki/Flail%20%28weapon%29
Flail (weapon)
A flail is a weapon consisting of a striking head attached to a handle by a flexible rope, strap, or chain. The chief tactical virtue of the flail is its capacity to strike around a defender's shield or parry. Its chief liability is a lack of precision and the difficulty of using it in close combat, or in closely-ranked formations. There are two broad types of flail: a long, two-handed infantry weapon with a cylindrical head, and a shorter weapon with a round metal striking head. The longer cylindrical-headed flail is a hand weapon derived from the agricultural tool of the same name, commonly used in threshing. It was primarily considered a peasant's weapon, and while not common, it was deployed in Germany and Central Europe in the Late Middle Ages. The smaller, more spherical-headed flail appears to be even less common; it appears occasionally in artwork from the 15th century onward, but many historians have expressed doubts that it ever saw use as an actual military weapon. The peasant flail In the Late Middle Ages, a particular type of flail appears as a weapon in several works of art; it consists of a very long shaft with a hinged, roughly cylindrical striking end. In most cases, these are two-handed agricultural flails, which were sometimes employed as improvised weapons by peasant armies conscripted into military service or engaged in popular uprisings. For example, in the 1420–1497 period, the Hussites fielded large numbers of peasant foot soldiers armed with this type of flail. Some of these weapons featured anti-personnel studs or spikes embedded in the striking end, or are shown being used by armored knights, suggesting they were made, or at least modified, specifically to be used as weapons. Such modified flails were used in the German Peasants' War in the early 16th century. Several German martial arts manuals or Fechtbücher from the 15th, 16th and 17th centuries feature illustrations and lessons on how to use the peasant flail (with or without spikes) or how to defend against it when attacked. The military flail The other type of European flail is a shorter weapon consisting of a wooden haft connected by a chain, rope, or leather to one or more striking ends. The kisten, with a spiked or non-spiked head and a leather or rope connection to the haft, is attested in the 10th century in the territories of the Rus', probably being adopted from either the Avars or Khazars. This weapon spread into Central and Eastern Europe in the 11th–13th centuries, and then further west into Western Europe during the 12th and 13th centuries. The medieval military flail, then, might typically have consisted of a wooden shaft joined by a length of chain to one or more iron-shod wooden bars, or it may have been a "chain morning star", with one or more metal balls or morning-star heads in the place of the wooden bars. Artwork from the 15th century to the early 17th century shows most of these weapons having handles longer than 3 ft and being wielded with two hands, but a few are shown used in a single hand or with a haft too short to be used two-handed.
Despite being very common in fictional works such as cartoons, films and role-playing games as a "quintessential medieval weapon", historical information about flails other than the kisten or derivatives of the peasant flail is rarer than for other contemporary weapons, though a notable body of visual and textual sources for Western, Central, and Southern European depictions and descriptions of military flails is extant, if not particularly easy to find. Some doubt they were used as weapons at all, owing to the scarcity of genuine specimens, the unrealistic way they are depicted in art, and the number of pieces in museums that turned out to be 19th-century forgeries when analyzed, though these limited and somewhat sensationalist studies have now been largely debunked. Waldman (2005) documented several likely authentic examples of the ball-and-chain flail from private collections, as well as several restored illustrations from German, French, and Czech sources. Even the more comprehensive scholarly articles collating the numerous sources for flails note that their use in warfare was likely rare at best, even if such weapons were known about as a concept. Flails are noted as being potentially hazardous to their user in the absence of appropriate training and experience; moreover, even if a blow were struck, a long time might pass before the user could ready another swing. Variations outside Europe In Asia, short flails originally employed in threshing rice were adapted into weapons such as the nunchaku or three-section staff. In China, a weapon very similar to the long-handled peasant flail is known as the two-section staff, and Korea has a weapon called a pyeongon. In Japan, there is also a version of the smaller ball-on-a-chain flail called a chigiriki. In the 18th and 19th centuries, the long-handled flail was in use in India. An example held in the Pitt Rivers Museum has a wooden ball-shaped head studded with iron spikes. Another in the Royal Armouries collection has two spiked iron balls attached by separate chains. The knout, a whip or scourge formerly used in Russia for the punishment of criminals, was a descendant of the flail. It was manufactured in many forms, and its effect was so severe that few of those who were subjected to its full force survived the punishment. The Emperor Nicholas I substituted a milder whip for the knout.
Technology
Melee weapons
null
362892
https://en.wikipedia.org/wiki/Polyphenol
Polyphenol
Polyphenols are a large family of naturally occurring phenols. They are abundant in plants and structurally diverse. Polyphenols include phenolic acids, flavonoids, tannic acid, and ellagitannin, some of which have been used historically as dyes and for tanning garments. Etymology The name derives from the Ancient Greek word πολύς (polús, meaning "many, much") and the word "phenol", which refers to a chemical structure formed by attachment of an aromatic benzenoid (phenyl) ring to a hydroxyl (-OH) group (hence the -ol suffix). The term "polyphenol" has been in use at least since 1894. Definition The term polyphenol is not well-defined, but it is generally agreed that polyphenols are natural products with "several hydroxyl groups on aromatic rings", including four principal classes: "phenolic acids, flavonoids, stilbenes, and lignans". Flavonoids include flavones, flavonols, flavanols, flavanones, isoflavones, proanthocyanidins, and anthocyanins. Particularly abundant flavonoids in foods are catechin (tea, fruits), hesperetin (citrus fruits), cyanidin (red fruits and berries), daidzein (soybean), proanthocyanidins (apple, grape, cocoa), and quercetin (onion, tea, apples). Phenolic acids include caffeic acid. Lignans are polyphenols derived from phenylalanine, found in flax seed and other cereals. WBSSH definition The White–Bate-Smith–Swain–Haslam (WBSSH) definition characterized the structural features common to plant phenolics used in tanning (i.e., the tannins). In terms of properties, the WBSSH describes the polyphenols as generally moderately water-soluble compounds, with molecular weights of 500–4000 Da, more than 12 phenolic hydroxyl groups, and 5–7 aromatic rings per 1000 Da. In terms of structures, the WBSSH recognizes two structural families that have these properties: proanthocyanidins and their derivatives, and galloyl and hexahydroxydiphenoyl esters and their derivatives. Quideau definition According to Stéphane Quideau, the term "polyphenol" refers to compounds derived from the shikimate/phenylpropanoid and/or the polyketide pathway, featuring more than one phenolic unit and deprived of nitrogen-based functions. Ellagic acid, a molecule at the core of naturally occurring phenolic compounds of varying sizes, is itself not a polyphenol by the WBSSH definition, but is by the Quideau definition. The raspberry ellagitannin, on the other hand, with its 14 gallic acid moieties (most in ellagic acid-type components) and more than 40 phenolic hydroxyl groups, meets the criteria of both definitions of a polyphenol. Other examples of compounds that fall under both the WBSSH and Quideau definitions include the black tea theaflavin-3-gallate, and the hydrolyzable tannin, tannic acid. Chemistry Polyphenols are reactive species toward oxidation, hence their description as antioxidants in vitro. Structure Some polyphenols, such as lignin, are large molecules (macromolecules); for smaller polyphenols, a molecular weight below roughly 800 daltons allows for the possibility to rapidly diffuse across cell membranes, so that they can reach intracellular sites of action or remain as pigments once the cell senesces. Hence, many larger polyphenols are biosynthesized in situ from smaller polyphenols to non-hydrolyzable tannins and remain undiscovered in the plant matrix. Most polyphenols contain repeating phenolic moieties of pyrocatechol, resorcinol, pyrogallol, and phloroglucinol connected by esters (hydrolyzable tannins) or more stable C-C bonds (nonhydrolyzable condensed tannins).
Proanthocyanidins are mostly polymeric units of catechin and epicatechin. Polyphenols often have functional groups beyond hydroxyl groups; ether and ester linkages are common, as are carboxylic acids. Analytical chemistry The analysis techniques are those of phytochemistry: extraction, isolation, structural elucidation, then quantification. Reactivity Polyphenols readily react with metal ions to form coordination complexes, some of which form metal-phenolic networks. Extraction Extraction of polyphenols can be performed using a solvent like water, hot water, methanol, methanol/formic acid, or methanol/water/acetic or formic acid. Liquid–liquid extraction or countercurrent chromatography can also be performed. Solid-phase extraction can also be carried out on C18 sorbent cartridges. Other techniques are ultrasonic extraction, heat reflux extraction, microwave-assisted extraction, supercritical carbon dioxide, high-pressure liquid extraction, and the use of ethanol in an immersion extractor. The extraction conditions (temperature, extraction time, ratio of solvent to raw material, particle size of the sample, solvent type, and solvent concentrations) for different raw materials and extraction methods have to be optimized. Polyphenols are mainly found in fruit skins and seeds, and measured high levels may reflect only the extractable polyphenol (EPP) content of a fruit, which may also contain non-extractable polyphenols. Black tea contains high amounts of polyphenols, which make up to 20% of its weight. Concentration can be carried out by ultrafiltration, and purification can be achieved by preparative chromatography. Analysis techniques Phosphomolybdic acid is used as a reagent for staining phenolics in thin layer chromatography. Polyphenols can be studied by spectroscopy, especially in the ultraviolet domain, by fractionation or paper chromatography. They can also be analysed by chemical characterisation. Instrumental analyses include separation by high performance liquid chromatography (HPLC), especially reversed-phase liquid chromatography (RPLC), which can be coupled to mass spectrometry. Purified compounds can be identified by means of nuclear magnetic resonance. Microscopy analysis The DMACA reagent is a histological dye specific to polyphenols used in microscopy analyses. The autofluorescence of polyphenols can also be used, especially for localisation of lignin and suberin. Where fluorescence of the molecules themselves is insufficient for visualization by light microscopy, DPBA (diphenylboric acid 2-aminoethyl ester, also referred to as Naturstoff reagent A) has traditionally been used, at least in plant science, to enhance the fluorescence signal. Quantification Polyphenolic content in vitro can be quantified by volumetric titration. An oxidizing agent, permanganate, is used to oxidize known concentrations of a standard tannin solution, producing a standard curve. The tannin content of the unknown is then expressed as equivalents of the appropriate hydrolyzable or condensed tannin. Some methods for quantification of total polyphenol content in vitro are based on colorimetric measurements. Some tests are relatively specific to polyphenols (for instance Porter's assay). Total phenols (or antioxidant effect) can be measured using the Folin–Ciocalteu reaction; results are typically expressed as gallic acid equivalents. Polyphenols are seldom evaluated by antibody technologies. Other tests measure the antioxidant capacity of a fraction, as described below.
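As an illustration of the standard-curve approach behind such colorimetric assays, here is a minimal sketch; the absorbance values are hypothetical, and the only assumptions taken from the text are that a calibration series of a standard (here gallic acid, as in the Folin–Ciocalteu assay) is fitted, and that results are reported as gallic acid equivalents:

```python
import numpy as np

# Hypothetical calibration data: absorbance of gallic acid standards.
standard_conc_mg_L = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # mg/L
standard_absorbance = np.array([0.02, 0.18, 0.35, 0.68, 1.33])    # arbitrary

# Fit a straight line in the Beer-Lambert regime: A = slope * c + intercept.
slope, intercept = np.polyfit(standard_conc_mg_L, standard_absorbance, 1)

def gallic_acid_equivalents_mg_L(sample_absorbance: float) -> float:
    """Convert a sample absorbance to gallic acid equivalents via the curve."""
    return (sample_absorbance - intercept) / slope

print(f"{gallic_acid_equivalents_mg_L(0.52):.0f} mg/L GAE")  # ~150 mg/L
```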
Some make use of the ABTS radical cation, which is reactive towards most antioxidants including phenolics, thiols and vitamin C. During this reaction, the blue ABTS radical cation is converted back to its colorless neutral form. The reaction may be monitored spectrophotometrically. This assay is often referred to as the Trolox equivalent antioxidant capacity (TEAC) assay. The reactivity of the various antioxidants tested is compared to that of Trolox, which is a vitamin E analog. Other antioxidant capacity assays which use Trolox as a standard include the diphenylpicrylhydrazyl (DPPH), oxygen radical absorbance capacity (ORAC), and ferric reducing ability of plasma (FRAP) assays, as well as inhibition of copper-catalyzed in vitro human low-density lipoprotein oxidation. New methods, including the use of biosensors, can help monitor the content of polyphenols in food. Quantitation results produced by means of diode array detector–coupled HPLC are generally given as relative rather than absolute values, as there is a lack of commercially available standards for all polyphenolic molecules. Applications Some polyphenols are traditionally used as dyes in leather tanning. For instance, in the Indian subcontinent, pomegranate peel, high in tannins and other polyphenols, or its juice, is employed in the dyeing of non-synthetic fabrics. Of some interest in the era of silver-based photography, pyrogallol and pyrocatechin are among the oldest photographic developers. Aspirational use as green chemicals Natural polyphenols have long been proposed as renewable precursors to produce plastics or resins by polymerization with formaldehyde, as well as adhesives for particleboards. The aims are generally to make use of plant residues from grape or olive processing (called pomaces), or pecan shells, left over after processing. Occurrence The most abundant polyphenols are the condensed tannins, found in virtually all families of plants. Larger polyphenols are often concentrated in leaf tissue, the epidermis, bark layers, flowers and fruits, but also play important roles in the decomposition of forest litter and in nutrient cycles in forest ecology. Absolute concentrations of total phenols in plant tissues differ widely depending on the literature source, type of polyphenols and assay; they are in the range of 1–25% total natural phenols and polyphenols, calculated with reference to the dry green leaf mass. Polyphenols are also found in animals. In arthropods such as insects and crustaceans, polyphenols play a role in epicuticle hardening (sclerotization). The hardening of the cuticle is due to the presence of a polyphenol oxidase. In crustaceans, there is a second oxidase activity leading to cuticle pigmentation. There is apparently no polyphenol tanning occurring in the arachnid cuticle. Biochemistry Polyphenols are thought to play diverse roles in the ecology of plants. These functions include: release and suppression of growth hormones such as auxin; UV screens to protect against ionizing radiation and to provide coloration (plant pigments); deterrence of herbivores (sensory properties); prevention of microbial infections (phytoalexins); signaling molecules in ripening and other growth processes; and, in some woods, natural preservation against rot. Flax and Myriophyllum spicatum (a submerged aquatic plant) secrete polyphenols that are involved in allelopathic interactions.
Biosynthesis and metabolism Polyphenols incorporate smaller parts and building blocks from simpler natural phenols, which originate from the phenylpropanoid pathway for the phenolic acids or the shikimic acid pathway for gallotannins and analogs. Flavonoids and caffeic acid derivatives are biosynthesized from phenylalanine and malonyl-CoA. Complex gallotannins develop through the in vitro oxidation of 1,2,3,4,6-pentagalloylglucose or dimerization processes resulting in hydrolyzable tannins. For anthocyanidins, precursors of the condensed tannin biosynthesis, dihydroflavonol reductase and leucoanthocyanidin reductase (LAR) are crucial enzymes, with subsequent addition of catechin and epicatechin moieties for larger, non-hydrolyzable tannins. Glycosylated forms develop from glucosyltransferase activity, which increases the solubility of polyphenols. Polyphenol oxidase (PPO) is an enzyme that catalyses the oxidation of o-diphenols to produce o-quinones. It is the rapid polymerisation of o-quinones to produce black, brown or red polyphenolic pigments that causes fruit browning. In insects, PPO is involved in cuticle hardening. Occurrence in food Polyphenols comprise up to 0.2–0.3% of fresh weight for many fruits. Consuming common servings of wine, chocolate, legumes or tea may also contribute to about one gram of intake per day. According to a 2005 review on polyphenols: The most important food sources are commodities widely consumed in large quantities such as fruit and vegetables, green tea, black tea, red wine, coffee, chocolate, olives, and extra virgin olive oil. Herbs and spices, nuts and algae are also potentially significant for supplying certain polyphenols. Some polyphenols are specific to particular foods (flavanones in citrus fruit, isoflavones in soya, phloridzin in apples), whereas others, such as quercetin, are found in all plant products such as fruit, vegetables, cereals, leguminous plants, tea, and wine. Some polyphenols are considered antinutrients – compounds that interfere with the absorption of essential nutrients, especially iron and other metal ions – and may bind to digestive enzymes and other proteins, particularly in ruminants. In a comparison of cooking methods, phenolic and carotenoid levels in vegetables were retained better by steaming compared to frying. Polyphenols in wine, beer and various nonalcoholic juice beverages can be removed using finings, substances that are usually added at or near the completion of brewing. Astringency With respect to food and beverages, the cause of astringency is not fully understood, but it is measured chemically as the ability of a substance to precipitate proteins. Astringency increases and bitterness decreases with the mean degree of polymerization. For water-soluble polyphenols, molecular weights between 500 and 3000 Da were reported to be required for protein precipitation. However, smaller molecules might still have astringent qualities, likely due to the formation of unprecipitated complexes with proteins or cross-linking of proteins with simple phenols that have 1,2-dihydroxy or 1,2,3-trihydroxy groups. Flavonoid configurations can also cause significant differences in sensory properties; e.g. epicatechin is more bitter and astringent than its chiral isomer catechin. In contrast, hydroxycinnamic acids do not have astringent qualities, but are bitter. Research Polyphenols are a large, diverse group of compounds, making it difficult to determine their biological effects.
They are not considered nutrients, as they are not used for growth, survival or reproduction, nor do they provide dietary energy. Therefore, they do not have recommended daily intake levels, as exist for vitamins, minerals, and fiber. In the United States, the Food and Drug Administration issued guidance to manufacturers that polyphenols cannot be mentioned on food labels as antioxidant nutrients unless physiological evidence exists to verify such a qualification and a Dietary Reference Intake value has been established, conditions which have not been met for polyphenols. In the European Union, two health claims were authorized between 2012 and 2015: 1) flavanols in cocoa solids at doses exceeding 200 mg per day may contribute to maintenance of vascular elasticity and normal blood flow; 2) olive oil polyphenols (5 mg of hydroxytyrosol and its derivatives, e.g. oleuropein complex and tyrosol) may "contribute to the protection of blood lipids from oxidative damage", if consumed daily. As of 2022, clinical trials that assessed the effect of polyphenols on health biomarkers are limited, with results difficult to interpret due to the wide variation of intake values for both individual polyphenols and total polyphenols. Polyphenols were once considered antioxidants, but this concept is obsolete. Most polyphenols are metabolized by catechol-O-methyltransferase, and therefore do not have the chemical structure allowing antioxidant activity in vivo; they may instead exert biological activity as signaling molecules. Some polyphenols are considered to be bioactive compounds for which development of dietary recommendations was under consideration in 2017. Cardiovascular diseases In the 1930s, polyphenols (then called vitamin P) were considered a factor in capillary permeability, followed by various studies through the 21st century of a possible effect on cardiovascular diseases. For most polyphenols, there is no evidence for an effect on cardiovascular regulation, although there are some reviews showing a minor effect of consuming polyphenols, such as chlorogenic acid or flavan-3-ols, on blood pressure. Cancer Higher intakes of soy isoflavones may be associated with reduced risks of breast cancer in postmenopausal women and prostate cancer in men. A 2019 systematic review found that intake of soy and soy isoflavones is associated with a lower risk of mortality from gastric, colorectal, breast and lung cancers. The study found that an increase in isoflavone consumption of 10 mg per day was associated with a 7% decrease in risk from all cancers, and an increase in consumption of soy protein of 5 grams per day was associated with a 12% reduction in breast cancer risk. Cognitive function Polyphenols are under preliminary research for possible cognitive effects in healthy adults. Phytoestrogens Isoflavones, which are structurally related to 17β-estradiol, are classified as phytoestrogens. A risk assessment by the European Food Safety Authority found no cause for concern when isoflavones are consumed in a normal diet. Phlebotonic Phlebotonics of heterogeneous composition, consisting partly of citrus peel extracts (flavonoids, such as hesperidin) and synthetic compounds, are used to treat chronic venous insufficiency and hemorrhoids. Some are non-prescription dietary supplements, such as diosmin, while one other – Vasculera (Diosmiplex) – is a prescription medical food intended for treating venous disorders.
Their mechanism of action is undefined, and clinical evidence of benefit for using phlebotonics to treat venous diseases is limited. Gut microbiome Polyphenols are extensively metabolized by the gut microbiota and are investigated as a potential metabolic factor in the function of the gut microbiota. Toxicity and adverse effects Adverse effects of polyphenol intake range from mild (e.g., gastrointestinal tract symptoms) to severe (e.g., hemolytic anemia or hepatotoxicity). In 1988, hemolytic anemia following polyphenol consumption was documented, resulting in the withdrawal of a catechin-containing drug. Polyphenols, particularly in beverages that contain them in high concentrations (tea, coffee, etc.), inhibit the absorption of non-haem iron when consumed together in a single meal. Research is limited on the effect of this inhibition on iron status. Metabolism of polyphenols can result in flavonoid-drug interactions, such as grapefruit–drug interactions, which involve inhibition of the liver enzyme CYP3A4, likely by grapefruit furanocoumarins, a class of polyphenol. The European Food Safety Authority established upper limits for some polyphenol-containing supplements and additives, such as green tea extract or curcumin. For most polyphenols found in the diet, an adverse effect beyond nutrient-drug interactions is unlikely.
Physical sciences
Polyphenols
Chemistry
363218
https://en.wikipedia.org/wiki/Long%20ton
Long ton
The long ton, also known as the imperial ton or displacement ton, is a measurement unit equal to 2,240 pounds (1,016.0 kg). It is the name for the unit called the "ton" in the avoirdupois system of weights or Imperial system of measurements. It was standardised in the 13th century. It is used in the United States for bulk commodities. It is not to be confused with the short ton, a unit of weight equal to 2,000 pounds used in the United States, and in Canada before metrication, also referred to simply as a "ton". Unit definition A long ton is defined as exactly 2,240 pounds. The long ton arises from the traditional British measurement system: a long ton is 20 long hundredweight (cwt), each of which is 8 stone of 14 pounds, i.e. 112 pounds. Thus, a long ton is 20 × 112 = 2,240 pounds. Unit equivalences A long ton, also called the weight ton (W/T), imperial ton, or displacement ton, is exactly 12% more than the 2,000 pounds of the North American short ton, being 20 long hundredweight (112 lb) rather than 20 short hundredweight (100 lb), and is the weight of 35 cubic feet of salt water with a density of 64 pounds per cubic foot. Usage around the world United Kingdom To comply with the practices of the European Union, the British Imperial ton was explicitly excluded from use for trade by the United Kingdom's Weights and Measures Act of 1985. The measure used since then is the metric ton of 1,000 kilograms, identified through the word "tonne". Where "ton" is still used for measurement, it is taken to refer to an imperial or long ton. United States In the United States, the long ton is commonly used in measuring the displacement of ships and the shipping of baled commodities and bulk goods like iron ore and elemental sulfur. International The long ton was the unit prescribed for warships by the Washington Naval Treaty of 1922; for example, battleships were limited to a displacement of 35,000 long tons. The long ton is traditionally used as the unit of weight in international contracts for many bulk goods and commodities.
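A minimal sketch of the arithmetic above (the kilogram factor uses the exact international pound of 0.45359237 kg):

```python
# Long ton arithmetic: 20 hundredweight x 8 stone x 14 lb = 2,240 lb.
LB_PER_STONE = 14
STONE_PER_CWT = 8
CWT_PER_LONG_TON = 20
KG_PER_LB = 0.45359237  # international pound, exact by definition

lb_per_long_ton = CWT_PER_LONG_TON * STONE_PER_CWT * LB_PER_STONE
print(lb_per_long_ton)                 # 2240
print(lb_per_long_ton * KG_PER_LB)     # 1016.0469088 kg
print(lb_per_long_ton / 2000 - 1)      # ~0.12 -> 12% more than a short ton
```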
Physical sciences
Mass and weight
Basics and measurement
363430
https://en.wikipedia.org/wiki/Photochemistry
Photochemistry
Photochemistry is the branch of chemistry concerned with the chemical effects of light. Generally, this term is used to describe a chemical reaction caused by absorption of ultraviolet (wavelength from 100 to 400 nm), visible (400–750 nm), or infrared radiation (750–2500 nm). In nature, photochemistry is of immense importance as it is the basis of photosynthesis, vision, and the formation of vitamin D with sunlight. It is also responsible for the appearance of DNA mutations leading to skin cancers. Photochemical reactions proceed differently than temperature-driven reactions. Photochemical paths access high-energy intermediates that cannot be generated thermally, thereby overcoming large activation barriers in a short period of time, and allowing reactions otherwise inaccessible by thermal processes. Photochemistry can also be destructive, as illustrated by the photodegradation of plastics. Concept Grotthuss–Draper law and Stark–Einstein law Photoexcitation is the first step in a photochemical process where the reactant is elevated to a state of higher energy, an excited state. The first law of photochemistry, known as the Grotthuss–Draper law (for chemists Theodor Grotthuss and John W. Draper), states that light must be absorbed by a chemical substance in order for a photochemical reaction to take place. According to the second law of photochemistry, known as the Stark–Einstein law (for physicists Johannes Stark and Albert Einstein), for each photon of light absorbed by a chemical system, no more than one molecule is activated for a photochemical reaction, as defined by the quantum yield. Fluorescence and phosphorescence When a molecule or atom in the ground state (S0) absorbs light, one electron is excited to a higher orbital level. This electron maintains its spin according to the spin selection rule; other transitions would violate the law of conservation of angular momentum. The excitation to a higher singlet state can be from HOMO to LUMO or to a higher orbital, so that singlet excitation states S1, S2, S3... at different energies are possible. Kasha's rule stipulates that higher singlet states would quickly relax by radiationless decay or internal conversion (IC) to S1. Thus, S1 is usually, but not always, the only relevant singlet excited state. This excited state S1 can further relax to S0 by IC, but also by an allowed radiative transition from S1 to S0 that emits a photon; this process is called fluorescence. Alternatively, it is possible for the excited state S1 to undergo spin inversion and to generate a triplet excited state T1 having two unpaired electrons with the same spin. This violation of the spin selection rule is possible by intersystem crossing (ISC) of the vibrational and electronic levels of S1 and T1. According to Hund's rule of maximum multiplicity, this T1 state would be somewhat more stable than S1. This triplet state can relax to the ground state S0 by radiationless ISC or by a radiation pathway called phosphorescence. This process implies a change of electronic spin, which is forbidden by spin selection rules, making phosphorescence (from T1 to S0) much slower than fluorescence (from S1 to S0). Thus, triplet states generally have longer lifetimes than singlet states. These transitions are usually summarized in a state energy diagram or Jablonski diagram, the paradigm of molecular photochemistry. These excited species, either S1 or T1, have a half-empty low-energy orbital, and are consequently more oxidizing than the ground state. 
But at the same time, they have an electron in a high-energy orbital, and are thus more reducing. In general, excited species are prone to participate in electron transfer processes. Experimental setup Photochemical reactions require a light source that emits wavelengths corresponding to an electronic transition in the reactant. In the early experiments (and in everyday life), sunlight was the light source, although it is polychromatic. Mercury-vapor lamps are more common in the laboratory. Low-pressure mercury-vapor lamps mainly emit at 254 nm. For polychromatic sources, wavelength ranges can be selected using filters. Alternatively, laser beams are usually monochromatic (although two or more wavelengths can be obtained using nonlinear optics), while LEDs and Rayonet lamps have relatively narrow emission bands that can be used efficiently to obtain approximately monochromatic beams. The emitted light must reach the targeted functional group without being blocked by the reactor, medium, or other functional groups present. For many applications, quartz is used for the reactors as well as to contain the lamp. Pyrex absorbs at wavelengths shorter than 275 nm. The solvent is an important experimental parameter. Solvents are potential reactants, and for this reason, chlorinated solvents are avoided because the C–Cl bond can lead to chlorination of the substrate. Strongly-absorbing solvents prevent photons from reaching the substrate. Hydrocarbon solvents absorb only at short wavelengths and are thus preferred for photochemical experiments requiring high-energy photons. Solvents containing unsaturation absorb at longer wavelengths and can usefully filter out short wavelengths. For example, cyclohexane and acetone "cut off" (absorb strongly) at wavelengths shorter than 215 and 330 nm, respectively. Typically, the wavelength employed to induce a photochemical process is selected based on the absorption spectrum of the reactive species, most often the absorption maximum. In recent years, however, it has been demonstrated that, in the majority of bond-forming reactions, the absorption spectrum does not allow selection of the optimum wavelength for the highest reaction yield on the basis of absorptivity. This fundamental mismatch between absorptivity and reactivity has been elucidated with so-called photochemical action plots. Photochemistry in combination with flow chemistry Continuous-flow photochemistry offers multiple advantages over batch photochemistry. Photochemical reactions are driven by the number of photons that are able to activate molecules, causing the desired reaction. The large surface-area-to-volume ratio of a microreactor maximizes the illumination and at the same time allows for efficient cooling, which decreases the thermal side products. Principles In the case of photochemical reactions, light provides the activation energy. Simplistically, light is one mechanism for providing the activation energy required for many reactions. If laser light is employed, it is possible to selectively excite a molecule so as to produce a desired electronic and vibrational state. Equally, the emission from a particular state may be selectively monitored, providing a measure of the population of that state. If the chemical system is at low pressure, this enables scientists to observe the energy distribution of the products of a chemical reaction before the differences in energy have been smeared out and averaged by repeated collisions.
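As a sketch of the wavelength-to-energy bookkeeping behind choosing a source, here is a minimal example using standard physical constants and the relation E = hc/λ; the 254 nm mercury line mentioned above is used as input:

```python
# Energy per photon and per mole of photons (an "einstein") via E = h*c/lambda.
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro constant, 1/mol

def photon_energy_kj_per_mol(wavelength_nm: float) -> float:
    e_photon = H * C / (wavelength_nm * 1e-9)  # joules per photon
    return e_photon * N_A / 1000.0             # kJ per mole of photons

# The 254 nm Hg line delivers ~471 kJ/mol, comparable to or exceeding
# many covalent bond dissociation energies.
print(f"{photon_energy_kj_per_mol(254):.0f} kJ/mol")
```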
The absorption of a photon by a reactant molecule may also permit a reaction to occur not just by bringing the molecule to the necessary activation energy, but also by changing the symmetry of the molecule's electronic configuration, enabling an otherwise-inaccessible reaction path, as described by the Woodward–Hoffmann selection rules. A [2+2] cycloaddition reaction is one example of a pericyclic reaction that can be analyzed using these rules or by the related frontier molecular orbital theory. Some photochemical reactions are several orders of magnitude faster than thermal reactions; reactions as fast as 10^−9 seconds and associated processes as fast as 10^−15 seconds are often observed. The photon can be absorbed directly by the reactant or by a photosensitizer, which absorbs the photon and transfers the energy to the reactant. The opposite process, when a photoexcited state is deactivated by a chemical reagent, is called quenching. Most photochemical transformations occur through a series of simple steps known as primary photochemical processes. One common example of these processes is excited-state proton transfer. Photochemical reactions Examples of photochemical reactions Photosynthesis: plants use solar energy to convert carbon dioxide and water into glucose and oxygen. Human formation of vitamin D by exposure to sunlight. Bioluminescence: e.g., in fireflies, an enzyme in the abdomen catalyzes a reaction that produces light. Polymerizations started by photoinitiators, which decompose upon absorbing light to produce the free radicals for radical polymerization. Photodegradation of many substances, e.g. polyvinyl chloride and Fp. Medicine bottles are often made from darkened glass to protect the drugs from photodegradation. Photochemical rearrangements, e.g. photoisomerization, hydrogen atom transfer, and photochemical electrocyclic reactions. Photodynamic therapy: light is used to destroy tumors by the action of singlet oxygen generated by photosensitized reactions of triplet oxygen. Typical photosensitizers include tetraphenylporphyrin and methylene blue. The resulting singlet oxygen is an aggressive oxidant, capable of converting C–H bonds into C–OH groups. Diazo printing process. Photoresist technology, used in the production of microelectronic components. Vision, which is initiated by a photochemical reaction of rhodopsin. Toray photochemical production of ε-caprolactam. Photochemical production of artemisinin, an anti-malaria drug. Photoalkylation, used for the light-induced addition of alkyl groups to molecules. DNA: photodimerization leading to cyclobutane pyrimidine dimers. Organic photochemistry Examples of photochemical organic reactions are electrocyclic reactions, radical reactions, photoisomerization, and Norrish reactions. Alkenes undergo many important reactions that proceed via a photon-induced π to π* transition. The first electronic excited state of an alkene lacks the π-bond, so that rotation about the C–C bond is rapid and the molecule engages in reactions not observed thermally. These reactions include cis-trans isomerization and cycloaddition to another (ground-state) alkene to give cyclobutane derivatives. The cis-trans isomerization of a (poly)alkene is involved in retinal, a component of the machinery of vision. The dimerization of alkenes is relevant to the photodamage of DNA, where thymine dimers are observed upon illuminating DNA with UV radiation. Such dimers interfere with transcription.
The beneficial effects of sunlight are associated with the photochemically-induced retro-cyclization (decyclization) reaction of ergosterol to give vitamin D. In the De Mayo reaction, an alkene reacts with a 1,3-diketone, via its enol, to yield a 1,5-diketone. Still another common photochemical reaction is Howard Zimmerman's di-π-methane rearrangement. In an industrial application, about 100,000 tonnes of benzyl chloride are prepared annually by the gas-phase photochemical reaction of toluene with chlorine. The light is absorbed by chlorine molecules, the low energy of this transition being indicated by the yellowish color of the gas. The photon induces homolysis of the Cl–Cl bond, and the resulting chlorine radical converts toluene to the benzyl radical: Cl2 + hν → 2 Cl· C6H5CH3 + Cl· → C6H5CH2· + HCl C6H5CH2· + Cl· → C6H5CH2Cl Mercaptans can be produced by photochemical addition of hydrogen sulfide (H2S) to alpha olefins. Inorganic and organometallic photochemistry Coordination complexes and organometallic compounds are also photoreactive. These reactions can entail cis-trans isomerization. More commonly, photoreactions result in dissociation of ligands, since the photon excites an electron on the metal to an orbital that is antibonding with respect to the ligands. Thus, metal carbonyls that resist thermal substitution undergo decarbonylation upon irradiation with UV light. UV-irradiation of a THF solution of molybdenum hexacarbonyl gives the THF complex, which is synthetically useful: Mo(CO)6 + THF → Mo(CO)5(THF) + CO In a related reaction, photolysis of iron pentacarbonyl affords diiron nonacarbonyl: 2 Fe(CO)5 → Fe2(CO)9 + CO Select photoreactive coordination complexes can undergo oxidation-reduction processes via single electron transfer. This electron transfer can occur within the inner or outer coordination sphere of the metal. Types of photochemical reactions Some different types of photochemical reactions are: photo-dissociation: AB + hν → A* + B*; photoinduced rearrangements and isomerization: A + hν → B; photo-addition: A + B + hν → AB; photo-substitution: A + BC + hν → AB + C; photo-redox reaction: A + B + hν → A− + B+. Historical Although bleaching has long been practiced, the first photochemical reaction was described by Trommsdorff in 1834. He observed that crystals of the compound α-santonin, when exposed to sunlight, turned yellow and burst. In a 2007 study the reaction was described as a succession of three steps taking place within a single crystal. The first step is a rearrangement reaction to a cyclopentadienone intermediate (2), the second one a dimerization in a Diels–Alder reaction (3), and the third one an intramolecular [2+2] cycloaddition (4). The bursting effect is attributed to a large change in crystal volume on dimerization. Specialized journals Journal of Photochemistry and Photobiology ChemPhotoChem Photochemistry and Photobiology Photochemical & Photobiological Sciences Photochemistry Learned societies Inter-American Photochemical Society European Photochemistry Association Asian and Oceanian Photochemistry Association International conferences IUPAC Symposium on Photochemistry (biennial) International Conference on Photochemistry (biennial) The organization of these conferences is facilitated by the International Foundation for Photochemistry.
Physical sciences
Chemistry: General
null
363442
https://en.wikipedia.org/wiki/Inorganic%20compound
Inorganic compound
An inorganic compound is typically a chemical compound that lacks carbon–hydrogen bonds – that is, a compound that is not an organic compound. The study of inorganic compounds is a subfield of chemistry known as inorganic chemistry. Inorganic compounds comprise most of the Earth's crust, although the compositions of the deep mantle remain active areas of investigation. All allotropes (structurally different pure forms of an element) and some simple carbon compounds are often considered inorganic. Examples include the allotropes of carbon (graphite, diamond, buckminsterfullerene, graphene, etc.), carbon monoxide (CO), carbon dioxide (CO2), carbides, and salts of inorganic anions such as carbonates, cyanides, cyanates, thiocyanates, isothiocyanates, etc. Many of these are normal parts of mostly organic systems, including organisms; describing a chemical as inorganic does not necessarily mean that it cannot occur within living things. History Friedrich Wöhler's conversion of ammonium cyanate into urea in 1828 is often cited as the starting point of modern organic chemistry. In Wöhler's era, there was widespread belief that organic compounds were characterized by a vital spirit. In the absence of vitalism, the distinction between inorganic and organic chemistry is merely semantic. Modern usage The Inorganic Crystal Structure Database (ICSD), in its definition of "inorganic" carbon compounds, states that such compounds may contain either C-H or C-C bonds, but not both. The book series Inorganic Syntheses does not define inorganic compounds. The majority of its content deals with metal complexes of organic ligands. IUPAC does not offer a definition of "inorganic" or "inorganic compound" but does define an inorganic polymer as "...skeletal structure that does not include carbon atoms."
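A toy encoding of the ICSD convention described above; the helper and its boolean inputs are hypothetical, since real classification would require full bonding data for a compound:

```python
def icsd_inorganic_carbon(has_ch_bond: bool, has_cc_bond: bool) -> bool:
    """ICSD convention: an 'inorganic' carbon compound may contain
    C-H bonds or C-C bonds, but not both."""
    return not (has_ch_bond and has_cc_bond)

print(icsd_inorganic_carbon(False, True))   # e.g. a carbide with C-C bonds: True
print(icsd_inorganic_carbon(True, True))    # typical organic molecule: False
```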
Physical sciences
Chemical compounds: General
null
363483
https://en.wikipedia.org/wiki/Augite
Augite
Augite is a common rock-forming pyroxene mineral with formula . The crystals are monoclinic and prismatic. Augite has two prominent cleavages, meeting at angles near 90 degrees. Characteristics Augite is a solid solution in the pyroxene group. Diopside and hedenbergite are important endmembers in augite, but augite can also contain significant aluminium, titanium, sodium, and other elements. The calcium content of augite is limited by a miscibility gap between it and pigeonite and orthopyroxene: when occurring with either of these other pyroxenes, the calcium content of augite is a function of temperature and pressure, but mostly of temperature, and so can be useful in reconstructing the temperature histories of rocks. With declining temperature, augite may exsolve lamellae of pigeonite and/or orthopyroxene. There is also a miscibility gap between augite and omphacite, but this gap occurs at higher temperatures. There are no industrial or economic uses for this mineral. Locations Augite is an essential mineral in mafic igneous rocks such as gabbro and basalt, and is common in ultramafic rocks. It also occurs in relatively high-temperature metamorphic rocks such as mafic granulite and metamorphosed iron formations. It commonly occurs in association with orthoclase, sanidine, labradorite, olivine, leucite, amphiboles and other pyroxenes. Occasional specimens have a shiny appearance that gives rise to the mineral's name, which is from the Greek augites, meaning "brightness", although ordinary specimens have a dull (dark green, brown or black) luster. It was named by Abraham Gottlob Werner in 1792.
Physical sciences
Silicate minerals
Earth science
363545
https://en.wikipedia.org/wiki/Stockholm%20Metro
Stockholm Metro
The Stockholm Metro () is a rapid transit system in Stockholm, the capital city of Sweden. Its first line opened in 1950 as the first metro line in the Nordic countries. Today, the system consists of three lines and 100 stations, of which 47 are underground and 53 above ground. The system is owned by Region Stockholm via SL, the public transport authority for Stockholm County. It is the only metro system in Sweden. The metro's three coloured lines, Green, Red, and Blue, together form seven routes with different termini. All of these routes pass through the city centre, creating a highly centralised network. The main interchange for all three lines is T-Centralen station, where they intersect. In addition to T-Centralen, the system has three other interchange stations: Fridhemsplan, Slussen, and Gamla stan. Various extensions to the system are currently under construction. An extension to the north-west of the Blue Line is expected to open in 2027, while extensions to its south are expected to open in 2030. Construction of a new Yellow Line to the west of the city centre is scheduled to start in 2025. In 2019, the Stockholm metro transported 462 million passengers, equivalent to approximately 1.27 million on a typical weekday. The metro system has been operated by MTR since 2 November 2009, under a contract that expires at the end of 2024. The system is equipped with ticket barriers. SL operates the metro's ticketing system, with ticketing available via the SL app and rechargeable travel cards. Contactless payment is also accepted at the gates. Tickets can also be purchased at station booths and select local retailers. SL phased out ticket machines on its network in 2022. The Stockholm metro has been referred to as 'the world's longest art gallery', featuring decorations at more than 90 of its 100 stations, including sculptures, rock formations, mosaics, paintings, light installations, engravings, and reliefs created by over 150 artists. History Before the Metro In the late 19th century, Stockholm's suburbs expanded thanks to the development of local railways such as Djursholmsbanan and Saltsjöbanan. By 1900, electrified trams extended into the city centre, and by 1915 Stockholms Spårvägar (SS) was managing a growing tram network, including new suburban lines as the city incorporated areas such as Bromma and Brännkyrka. With further suburbs planned, it became evident that trams would not meet the city's future transport needs, prompting underground railway proposals. Stockholm's politicians were also inspired by large cities such as London, Paris and New York, where metros had already been built. Through the 1920s, various investigations were carried out by the city. In 1930 a traffic committee was appointed by Stockholm's city council at the initiative of city councilor Yngve Larsson, with the task of solving the capital's major traffic problems. The First Tunnel The first step towards an underground transit system was the construction of the Södertunneln tram tunnel under Södermalm, approved by the city council on 30 March 1931 following recommendations from the 1930 Traffic Committee. Construction commenced in autumn 1931, and the project, costing 4.5 million kronor, was inaugurated on 30 September 1933. Södertunneln included three stations: Slussen, Södra Bantorget (now Medborgarplatsen), and Ringvägen (now Skanstull). The stations were designed by architect Holger Blom and inspired by Berlin's U-Bahn. The tunnel operated as a premetro service with existing tramlines connecting to it. 
This project marked the first use of the term "Tunnelbanan", and the first use of station entrances distinguished by a "T" in a circle. Plans for a Full-Scale Metro System The 1930s also brought significant changes to the political and economic landscape of housing construction in Stockholm, with a new municipal plan for multi-family dwellings in the suburbs. A considerable debate unfolded across political parties, but a metro system came to be viewed as the optimal solution to the city's housing crisis and increasing congestion in the city centre. In 1941, Stockholm City Council voted to develop a large-scale metro system, based on plans from the 1930 Traffic Committee and a further 1940 report. This decision called for the Södertunneln and southern suburban tram lines to be extended to Norrmalm, connecting with the western suburban tram lines through a tunnel under Sveavägen. Initial Construction The Stockholm Metro's formal construction began in 1944, following the 1941 city council decision. The first focus was extending the Södertunneln southward, beyond Gullmarsplan. During this period, several other lines were built to premetro standards, including routes from Kristineberg to Islandstorget, using the new Tranebergsbron bridge; Skanstull to Blåsut, including construction of the Skanstullsbron bridge; and Telefonplan to Hägerstensåsen. In late 1944, a population study revealed that Stockholm's rapid population growth would demand greater capacity from the planned metro. As a result, two significant decisions were made: the line between T-Centralen and Slussen would be constructed with four tracks instead of two, and platform lengths were increased from six cars (100 meters) to eight cars (145 meters) to accommodate more passengers. Full Metro The first part of the metro was opened on 1 October 1950, from Slussen to Hökarängen, having been converted from tram to metro operation. In 1951, a second branch, from Slussen to Stureby, was opened (this had also been tram-operated until then). In 1952, a second line, from Hötorget to the western suburbs, was opened. In 1957, the two parts were connected by a line between Hötorget and Slussen, with two new intermediate stations: T-Centralen, adjacent to Stockholm Central station, and Gamla stan, in Stockholm's old town, forming the Green Line. Through the 1950s, the Green Line was extended piece by piece. The Red Line was opened in 1964, from T-Centralen via Liljeholmen, ending in Fruängen and Örnsberg, both in the southwest. It was extended piece by piece until 1978, when it reached Mörby centrum via a bridge over the Stocksundet sea strait. The third and final system, the Blue Line, was opened in 1975, with two lines running northwest from the city centre. As construction requirements have become stricter over the years, newer segments have more tunnels than older ones, and the Blue Line is almost entirely tunnelled. The latest addition to the network, Skarpnäck station, was opened in 1994. Network Stations There are 100 stations in use in the Stockholm metro (of which 47 are underground). One station, Kymlinge, was built but never put into use. One station has been taken out of use and demolished: the old surface station at Bagarmossen was demolished and replaced with a new underground station prior to the metro extension to Skarpnäck. The Stockholm metro is well known for the decoration of its stations. 
Several of the stations (especially on the Blue Line) are left with the bedrock exposed, crude and unfinished, or incorporated as part of the decorations. Lines The following details relate to the present network. The designations "Blue line", etc., have only been used since the late 1970s, and officially only since the 1990s. They originated from the fact that the Blue line tended to operate newer train stock painted blue, while the Green line had older stock in the original green livery. There was never any red-painted stock, though; red (originally orange) was chosen to differentiate this line from the other two networks on route maps. Green Line The Green line (officially Tunnelbana 1, or "Metro 1") has three routes and 49 stations: 12 underground (nine concrete, three rock) and 37 above-ground stations. It is long. It was opened on 1 October 1950 (between Slussen and Hökarängen stations) and is used by 451,000 passengers per workday, or 146 million per year (2005). Red Line The Red line (Tunnelbana 2) has two routes and 36 stations: 20 underground (four concrete, 16 rock) and 16 above-ground stations. It is long (only shorter than the Green line), and was opened on 5 April 1964. It is used by 394,000 passengers per workday, or 128 million per year (2005). Blue Line The Blue line (Tunnelbana 3) has two routes and 20 stations: 19 underground (all rock) and one elevated station. It is long. It was opened on 31 August 1975 and is used by 171,000 passengers per workday, or 55 million per year (2005). Trains operate from 05:00 to 01:00, with extended all-night service on Fridays and Saturdays. All lines have trains every 10 minutes during the day, reduced to every 15 minutes in early mornings and late evenings, and every 30 minutes at night. Additional trains during peak hours give a train every 5–6 minutes at most stations, with 2–3 minutes between trains on the central parts of the network. The metro contains four interchanges (T-Centralen, Slussen, Gamla Stan and Fridhemsplan) and lacks any kind of circular or partly circular line (although Stockholm has a semi-circular light rail line, Tvärbanan). A wide majority of the metro stations are located in suburbs, but the network is centred on T-Centralen, which all trains in the entire network pass. In the past, there have been additional route numbers in use for trains operated on part of a line, or during peak hours only. For example, route 23 was used for a peak relief train for route 13, which in the 1970s was operated between Sätra and Östermalmstorg and during the 1990s between Norsborg and Mörby Centrum. There is a connection to the main rail network, which is used for deliveries of new trains and some other purposes. In this case trains are pulled by locomotives, since the electrical and other standards are different. This connection consists of a track to Tvärbanan at the Globen station and a rail track from the Liljeholmen Tvärbanan station to the Älvsjö railway station. Network Map Rolling stock The Stockholm metro operates two main types of rolling stock: the SL C20 and SL C30. Previously, the system used the older C1–C15 trains, collectively known as the Cx stock. These trains were gradually retired, and fully withdrawn by 2024 after more than 40 years of service. Currently, the Stockholm metro operates 271 trainsets of the C20 stock and 116 trainsets of the C30 stock. The Green Line exclusively uses the C20 stock, while the Blue Line also primarily relies on C20 trains. The Red Line uses a mix of C20 and C30 trainsets. 
Stockholm metro's trains are based at several depots, including Hammarby, Högdalen, and depots for the Green Line; Norsborg depot and Nyboda depot for the Red Line; and for the Blue Line. Train configuration varies depending on the stock type; however, a full-length train measures approximately and accommodates around 1,250 passengers, with seating available for 290 to 380 people. C20 trains are typically composed of two or three trainsets connected in double or triple configurations, resulting in trains with six or nine cars. C30 trains consist of two trainsets connected in a double configuration, forming an eight-car train. The now-retired Cx stock operated in six- or eight-car configurations. The Blue Line, along with the Red Line between Stadion and Mörby Centrum, was built with longer platforms to accommodate ten-car Cx stock trains. However, when the C20 trains were introduced, it became apparent that configurations of four C20 trainsets—equivalent in length to ten Cx stock cars—were too long for the platforms. As a result, ten-car trains only operated on the Blue Line, where most platforms (except at Husby) were designed to accommodate their length. On the Red Line, most platforms were only long enough to accommodate eight-car Cx stock trains. As a result, ten-car trains were never used in service on the Red Line, except at the six stations between Stadion and Mörby Centrum, which could accommodate them. The naming convention for rolling stock reflects the system's history: the prefix A denotes motorised trams, B indicates unmotorised tram trailers, and C is used for metro cars. The Stockholm metro traces its origins to a tramway system, and the older sections of the metro operated as tramways for several years before conversion. Current rolling stock C20 The C20 stock (also branded C25 or C20U during refurbishment) is double-articulated, in length, in width, in height, and weighs . It uses only four bogies: two under the middle section and one under each end section of the car. The car takes 126 seated passengers and 288 standing passengers. Three such units normally form a train. The C20 stock cars were built between 1997 and 2004 and first entered service in 1998. A single prototype car, designated C20F stock, is in use. Built on Bombardier Transportation's FICAS technology, it has a lighter body, much thinner side walls, and more space compared to the regular C20, achieved by using a sandwich-like composite construction for the body. It also has air conditioning for the passenger area, whereas the standard C20 has air conditioning only for the driver's cab. However, only the last 70 C20 units produced (2200–2270) are equipped with air conditioning in the driver's cab; all other C20 units lack air conditioning entirely. Units lacking air conditioning are therefore usually placed in the middle of trains and moved to the Blue Line during the summer, where air conditioning is least needed, as that line is almost exclusively underground. The C20F weighs ; other exterior measurements are the same as for the C20. The C20F has the same number of seats as the C20, but has space for 323 standing passengers. After about 20 years in service (22 years for the oldest cars and 16 years for the youngest), the C20 had reached about half its service life, and refurbishment became necessary. The first refurbished train set (three cars) was officially put into service on November 20, 2020. The refurbishment of all cars was completed in 2024. 
These refurbished cars, also known as C25, feature an upgraded interior similar to the C30, among other improvements. All original C20 units had been refurbished by February 2024. C30 The C30 is a new articulated train type manufactured by Bombardier Transportation, delivered since 2018 for use on the Red Line. The first C30 train entered service on the Red Line on 11 August 2020. They are formed in semi-permanent four-car units with open gangways between cars, and with two bogies under each car. Two such units form a train. Compared to previous stock, the cars have fewer seats, arranged in a mixed longitudinal/transverse layout for increased capacity, similar to the C1 and refurbished C20 trains. The C30 is the first full Stockholm metro train type to feature air conditioning in both the passenger compartments and driver's cabs; the trains are expected to cost 5 billion kronor. Former rolling stock Cx (C1–C15) The name Cx collectively refers to all the older types C1–C15. The last ride with a Cx car in the Stockholm Metro took place on 10 February 2024, on a C14 car. C14 cars were to in length, in width, to in height, and weighed 29 metric tons. The cars took 48 seated passengers and 108 to 110 standing passengers. The C14 and C15 trains were built in the mid-1980s. As of 12 January 2024, the C6, C14 and C15 have been taken out of service permanently. Infrastructure and Operation Safety and Technology The Stockholm metro runs electrically, using a third rail with a nominal operating voltage of 650 V DC on lines 13, 14, 17, 18 and 19, and 750 V DC on lines 10 and 11. Traffic on the metro operates on the left-hand side, as do mainline trains in Sweden. When the metro system opened in 1950, cars and trams still drove on the left in Sweden. The maximum speed is on the Red and Blue Lines and on the Green Line ( at the platforms). Maximum acceleration and deceleration is 0.8 m/s². The lower speed limit on the Green Line is due to tighter curves than on the other lines, because the Green Line was built by cut-and-cover under streets in the inner city, while the other lines are bored at greater depth. Two safety systems exist on the metro: the older system manufactured by Union Switch & Signal, in use on the Red and Blue Lines, and a modern automatic train operation (ATO) system manufactured by Siemens Mobility, in use on the Green Line. To allow close-running trains with a high level of safety, the metro uses a continuous signal safety system that sends information continually to the train's safety system. The signal is picked up from the rail tracks through two antennas placed in front of the first wheel axle and compared with data about the train's speed. Automatic braking is triggered if the train exceeds the maximum permitted speed at any time. The driver is given information about the speed limit through a display in the driver's cab; in C20 stock, and in Cx stock outfitted for operation with the new signal system installed on the Green Line, this is a speedometer with a red maximum-speed indicator (needle), while the traditional display in the Cx stock is a set of three lights indicating one of three permitted speeds (high, medium, low). The system allows two trains to come close to each other but prevents collisions occurring at speeds greater than . More modern systems also ensure that stop signals are not passed. 
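The speed-supervision principle just described can be illustrated with a short sketch in Python. This is a deliberately simplified model: the function name, the string commands, and the way the permitted speed arrives are assumptions made for illustration, not the actual Union Switch & Signal or Siemens implementation.

# Illustrative sketch of continuous speed supervision, as described above.
# All names and the signal format are assumed for illustration only.

def supervise(measured_speed_kmh: float, permitted_speed_kmh: float) -> str:
    """Compare the train's measured speed with the permitted speed
    continually received via the track antennas, and decide an action."""
    if measured_speed_kmh > permitted_speed_kmh:
        # Exceeding the permitted speed at any time triggers automatic braking.
        return "APPLY_BRAKES"
    return "NORMAL"

# Example: the track signal permits 50 km/h while the train is doing 57 km/h.
print(supervise(57.0, 50.0))  # -> APPLY_BRAKES

In a real installation this comparison runs continuously against a changing permitted-speed signal; the sketch captures only the core check that triggers automatic braking.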
Another possibility is automatic train operation (ATO), which helps the driver by driving the train automatically; however, the driver still operates the door controls and allows the train to start. As of , ATO is only available on the Green line, where a new signal system was installed in the late 1990s. This signal system, together with the C20 rolling stock, permits the use of ATO. The signalling system on the Red Line was supposed to be replaced with a communications-based train control (CBTC) system manufactured by Ansaldo STS under a contract awarded by SL in 2010; however, SL cancelled the contract in 2017, reportedly after repeated delays in project implementation. Graffiti Since the mid-1980s, graffiti has been a recurring issue in the Stockholm metro. Previously, graffiti on trains and stations often remained visible for weeks or months. In recent years, stricter zero-tolerance policies have been implemented, with graffiti-covered trains immediately removed from service and station graffiti cleaned within 24 hours. In 2018, graffiti damage was reported over an area of 166,475 square meters, equivalent to 30 football fields. Costs for addressing graffiti and vandalism reached 192 million SEK in 2020 (approx. €18 million), driven in part by large-scale repairs such as replacing damaged windows. In December 2019, several prolific graffiti offenders, referred to as "storklottrare", were arrested. In 2022, they were convicted of extensive vandalism committed in 2018 and 2019. Two received prison sentences of 1 and 1.5 years, while a third was given a conditional sentence and fines. They were collectively ordered to pay nearly 2 million SEK (approx. €190,000) in damages. These legal actions were credited with deterring further graffiti activity. SL also introduced new measures, including fencing, radar-equipped surveillance cameras, and rapid cleaning protocols. By 2022, reported graffiti damage had decreased by 57% compared to 2018, totalling 72,904 square meters (13.5 football fields). Costs dropped to 119 million SEK in 2022 (approx. €11 million), with the metro seeing the largest improvement. Art and popular culture Art The Stockholm metro is often described as the "world's longest art gallery", and is famous for the public art integrated into 94 of its 100 stations, including sculptures, rock formations, mosaics, paintings, light installations, engravings, and reliefs created by over 150 artists. Beyond aesthetics, Region Stockholm believes that art at the stations contributes to a calm and safe environment, reduces vandalism and graffiti, and helps travellers orient themselves by giving each station its own identity. Advocacy for art in the metro was driven by the artists Vera Nilsson and Siri Derkert, who faced resistance from officials and politicians who questioned its relevance and cost. However, by 1957, Stockholm City Council had approved the integration of art into the metro and established a program to commission works for new stations, starting with T-Centralen. The first major art installations in the Stockholm metro appeared in the 1960s, with Siri Derkert's 1965 work at Östermalmstorg station featuring sandblasted engravings focused on feminism, world peace, and environmentalism. These early installations often reflected modernist aesthetics and social commentary. In the 1970s, as the metro expanded, including with the opening of the Blue Line, large-scale, immersive artworks became common. 
Artists collaborated closely with architects, as seen at Solna Centrum (1975), where Anders Åberg and Karl-Olov Björk created a red and green cave-like design addressing urbanisation and environmental issues. In the 1980s and 1990s, the art program diversified, with stations like Kungsträdgården (designed by Ulrik Samuelson) incorporating historical motifs and archaeological elements. At Rissne, a fresco about the history of Earth's civilisations runs along both sides of the platform. Some installations have sparked debate. In 2017, Liv Strömquist's temporary exhibition The Night Garden at Slussen station featured sketches depicting subjects such as menstruation. While some praised the work for addressing taboos, others criticised it as inappropriate for a public setting. SL defended the work as part of its commitment to diverse artistic expressions. Art projects continue to be an integral part of all new station designs. Since 2015, public competitions have been held to select artists for new stations on the Blue and Green lines. These new works are developed in close collaboration with architects and engineers, and often respond to the specific context of each station, reflecting local history, culture, or natural surroundings. For example, at Barkarbystaden station, Helena Byström's work incorporates dynamic video art that references the area's military aviation history. Urban legends The modern railway network, inaugurated in 1950, has accumulated several urban legends over the years, notably involving ghost phenomena, especially of the horror genre. The most famous of these is the legend of the Silver Train (), a silver-colored ghost train said to travel the Stockholm Metro carrying the dead to the afterlife. The legend is thought to originate from the C5 cars, an aluminium prototype metro train which never received paint and usually ran at night. The C5 carried the nickname "the Silver Arrow" (Silverpilen), which has since carried over to the ghost train. Another notable urban legend, especially in connection with the Silver Train, surrounds the unfinished Kymlinge metro station, which was built but never taken into service. The legend says that no living passengers get off at Kymlinge, only the dead. It is usually combined with the legend of the Silver Train, which is said to stop only at Kymlinge. Future In 2013, it was announced that agreement had been reached on the future of several extensions. Preliminary planning started in 2016, and revenue service on the first sections is projected to begin in the mid-2020s. In 2017, another agreement was reached regarding several public transportation projects in Stockholm, including a fourth metro line. The extensions, the first in 40 years, will add 18 new metro stations, bringing the total number of stations to 118. It is the policy of the Stockholm Metro that all new extensions and lines are built underground. Altogether, this amounts to the following new constructions: Blue Line to Nacka and the southern suburbs From Kungsträdgården, there will be a new station at Sofia on Södermalm, after which the line splits, with one branch continuing to Nacka (with three new intermediate stations), and the other to new underground platforms at Gullmarsplan, after which it will take over the current Green Line branch to Hagsätra. The surface-level stations Globen and Enskede gård on the Hagsätra branch will be closed and replaced by a new underground station at Slakthusområdet. 
This allows higher frequencies on the Green Line branches to Farsta strand and Skarpnäck, which are currently limited by the fact that three branches pass through the bottleneck at T-Centralen (a worked headway example appears at the end of this section). Blue Line to Barkarby Station An extension of the Blue Line north-west from Akalla to Barkarby railway station via Barkarbystaden, a new development on the former site of Barkarby Airport. Green Line to Arenastaden From Odenplan via the new development at , Södra Hagalund, and ending in (roughly in the vicinity of the Strawberry Arena and Westfield Mall of Scandinavia), with construction of this segment expected to finish in 2028. This branch was originally referred to as the Yellow Line after a competition held by Stockholm City Council in 2014, but was redesignated as a new branch of the Green Line in May 2023; the Yellow Line designation has since been used only for the Fridhemsplan–Älvsjö line mentioned below. Yellow Line A new, automated line between Fridhemsplan and Älvsjö via Liljeholmen, Årstaberg, Årstafältet and Östbergahöjden (renamed, since a Roslagsbanan station is already called Östberga); since May 2023 this has been referred to as the Yellow Line. New Depot To support the expansion of the Stockholm Metro, the Högdalen depot is being extended with new underground staging areas to service trains on both the Blue and Green lines. A new underground connection will link the depot to the Green Line's Farsta branch.
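As a rough illustration of why moving one branch off the shared trunk raises the possible frequency on the remaining branches, the sketch below divides a trunk headway evenly among the branches that share it. The 2.5-minute trunk headway reflects the 2–3 minute figure quoted earlier in this article; the even split, and the function itself, are simplifying assumptions for illustration only.

# Illustrative headway arithmetic for branches sharing one trunk.
# The 2.5-minute trunk headway reflects the 2-3 minute figure cited
# earlier in the article; the even split is a simplifying assumption.

def best_branch_headway(trunk_headway_min: float, branches: int) -> float:
    """If `branches` lines share one trunk evenly, each branch can run
    at best one train every (trunk_headway * branches) minutes."""
    return trunk_headway_min * branches

print(best_branch_headway(2.5, 3))  # three branches share the trunk: 7.5 min per branch
print(best_branch_headway(2.5, 2))  # one branch moved to the Blue Line: 5.0 min per branch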
Technology
Scandinavia
null
363559
https://en.wikipedia.org/wiki/Pancreatic%20cancer
Pancreatic cancer
Pancreatic cancer arises when cells in the pancreas, a glandular organ behind the stomach, begin to multiply out of control and form a mass. These cancerous cells have the ability to invade other parts of the body. A number of types of pancreatic cancer are known. The most common, pancreatic adenocarcinoma, accounts for about 90% of cases, and the term "pancreatic cancer" is sometimes used to refer only to that type. These adenocarcinomas start within the part of the pancreas that makes digestive enzymes. Several other types of cancer, which collectively represent the majority of the non-adenocarcinomas, can also arise from these cells. About 1–2% of cases of pancreatic cancer are neuroendocrine tumors, which arise from the hormone-producing cells of the pancreas. These are generally less aggressive than pancreatic adenocarcinoma. Signs and symptoms of the most-common form of pancreatic cancer may include yellow skin, abdominal or back pain, unexplained weight loss, light-colored stools, dark urine, and loss of appetite. Usually, no symptoms are seen in the disease's early stages, and symptoms that are specific enough to suggest pancreatic cancer typically do not develop until the disease has reached an advanced stage. By the time of diagnosis, pancreatic cancer has often spread to other parts of the body. Pancreatic cancer rarely occurs before the age of 40, and more than half of cases of pancreatic adenocarcinoma occur in those over 70. Risk factors for pancreatic cancer include tobacco smoking, obesity, diabetes, and certain rare genetic conditions. About 25% of cases are linked to smoking, and 5–10% are linked to inherited genes. Pancreatic cancer is usually diagnosed by a combination of medical imaging techniques such as ultrasound or computed tomography, blood tests, and examination of tissue samples (biopsy). The disease is divided into stages, from early (stage I) to late (stage IV). Screening the general population has not been found to be effective. The risk of developing pancreatic cancer is lower among non-smokers, and people who maintain a healthy weight and limit their consumption of red or processed meat; the risk is greater for men, smokers, and those with diabetes. There is some evidence that links high levels of red meat consumption to increased risk of pancreatic cancer. Smokers' risk of developing the disease decreases immediately upon quitting, and almost returns to that of the rest of the population after 20 years. Pancreatic cancer can be treated with surgery, radiotherapy, chemotherapy, palliative care, or a combination of these. Treatment options are partly based on the cancer stage. Surgery is the only treatment that can cure pancreatic adenocarcinoma, and may also be done to improve quality of life without the potential for cure. Pain management and medications to improve digestion are sometimes needed. Early palliative care is recommended even for those receiving treatment that aims for a cure. Pancreatic cancer is among the most deadly forms of cancer globally, with one of the lowest survival rates. In 2015, pancreatic cancers of all types resulted in 411,600 deaths globally. Pancreatic cancer is the fifth-most-common cause of death from cancer in the United Kingdom, and the third most-common in the United States. The disease occurs most often in the developed world, where about 70% of the new cases in 2012 originated. Pancreatic adenocarcinoma typically has a very poor prognosis; after diagnosis, 25% of people survive one year and 12% live for five years. 
For cancers diagnosed early, the five-year survival rate rises to about 20%. Neuroendocrine cancers have better outcomes; at five years from diagnosis, 65% of those diagnosed are living, though survival varies considerably depending on the type of tumor. Types The many types of pancreatic cancer can be divided into two general groups. The vast majority of cases (about 95%) occur in the part of the pancreas that produces digestive enzymes, known as the exocrine component. Several subtypes of exocrine pancreatic cancer are described, but their diagnosis and treatment have much in common. The small minority of cancers that arise in the hormone-producing (endocrine) tissue of the pancreas have different clinical characteristics and are called pancreatic neuroendocrine tumors, sometimes abbreviated as "PanNETs". Both groups occur mainly (but not exclusively) in people over 40, and are slightly more common in men, but some rare subtypes mainly occur in women or children. Exocrine cancers The exocrine group is dominated by pancreatic adenocarcinoma (variations of this name may add "invasive" and "ductal"), which is by far the most common type, representing about 85% of all pancreatic cancers. Nearly all of these start in the ducts of the pancreas, as pancreatic ductal adenocarcinoma (PDAC). This is despite the fact that the tissue from which it arises – the pancreatic ductal epithelium – represents less than 10% of the pancreas by cell volume, because it constitutes only the ducts (an extensive but capillary-like duct system fanning out) within the pancreas. This cancer originates in the ducts that carry secretions (such as enzymes and bicarbonate) away from the pancreas. About 60–70% of adenocarcinomas occur in the head of the pancreas. The next-most-common type, acinar cell carcinoma of the pancreas, arises in the clusters of cells that produce these enzymes, and represents 5% of exocrine pancreatic cancers. Like the 'functioning' endocrine cancers described below, acinar cell carcinomas may cause over-production of certain molecules, in this case digestive enzymes, which may cause symptoms such as skin rashes and joint pain. Cystadenocarcinomas account for 1% of pancreatic cancers, and they have a better prognosis than the other exocrine types. Pancreatoblastoma is a rare form, mostly occurring in childhood, and with a relatively good prognosis. Other exocrine cancers include adenosquamous carcinomas, signet ring cell carcinomas, hepatoid carcinomas, colloid carcinomas, undifferentiated carcinomas, and undifferentiated carcinomas with osteoclast-like giant cells. Solid pseudopapillary tumor is a rare low-grade neoplasm that mainly affects younger women, and generally has a very good prognosis. Pancreatic mucinous cystic neoplasms are a broad group of pancreatic tumors with varying malignant potential. They are being detected at a greatly increased rate as CT scans become more powerful and common, and discussion continues as to how best to assess and treat them, given that many are benign. Neuroendocrine The small minority of tumors that arise elsewhere in the pancreas are mainly pancreatic neuroendocrine tumors (PanNETs). Neuroendocrine tumors (NETs) are a diverse group of benign or malignant tumors that arise from the body's neuroendocrine cells, which are responsible for integrating the nervous and endocrine systems. NETs can start in most organs of the body, including the pancreas, where the various malignant types are all considered to be rare. 
PanNETs are grouped into 'functioning' and 'nonfunctioning' types, depending on the degree to which they produce hormones. The functioning types secrete hormones such as insulin, gastrin, and glucagon into the bloodstream, often in large quantities, giving rise to serious symptoms such as low blood sugar, but also favoring relatively early detection. The most common functioning PanNETs are insulinomas and gastrinomas, named after the hormones they secrete. The nonfunctioning types do not secrete hormones in sufficient quantity to give rise to overt clinical symptoms, so nonfunctioning PanNETs are often diagnosed only after the cancer has spread to other parts of the body. As with other neuroendocrine tumors, the history of the terminology and classification of PanNETs is complex. PanNETs are sometimes called "islet cell cancers", though they are now known not to actually arise from islet cells as previously thought. Signs and symptoms Since pancreatic cancer usually does not cause recognizable symptoms in its early stages, the disease is typically not diagnosed until it has spread beyond the pancreas itself. This is one of the main reasons for the generally poor survival rates. Exceptions to this are the functioning PanNETs, where over-production of various active hormones can give rise to symptoms (which depend on the type of hormone). Common presenting symptoms of pancreatic adenocarcinoma include: Pain in the upper abdomen or back, often spreading from around the stomach to the back. The location of the pain can indicate the part of the pancreas where a tumor is located. The pain may be worse at night and may increase over time to become severe and unremitting. It may be slightly relieved by bending forward. In the UK, about half of new cases of pancreatic cancer are diagnosed following a visit to a hospital emergency department for pain or jaundice. In up to two-thirds of people, abdominal pain is the main symptom; in 46% of the total it is accompanied by jaundice, while 13% have jaundice without pain. Jaundice, a yellow tint to the whites of the eyes or skin, with or without pain, and possibly in combination with darkened urine, results when a cancer in the head of the pancreas obstructs the common bile duct as it runs through the pancreas. Unexplained weight loss, either from loss of appetite, or loss of exocrine function resulting in poor digestion. The tumor may compress neighboring organs, disrupting digestive processes and making it difficult for the stomach to empty, which may cause nausea and a feeling of fullness. The undigested fat leads to foul-smelling, fatty feces that are difficult to flush away. Constipation is also common. At least 50% of people with pancreatic adenocarcinoma have diabetes at the time of diagnosis. While long-standing diabetes is a known risk factor for pancreatic cancer (see Risk factors), the cancer can itself cause diabetes, in which case recent onset of diabetes could be considered an early sign of the disease. People over 50 who develop diabetes have eight times the usual risk of developing pancreatic adenocarcinoma within three years, after which the relative risk declines. Other findings Trousseau's syndrome – in which blood clots form spontaneously in the portal blood vessels (portal vein thrombosis), the deep veins of the extremities (deep vein thrombosis), or the superficial veins (superficial vein thrombosis) anywhere on the body – may be associated with pancreatic cancer, and is found in about 10% of cases. 
Clinical depression has been reported in association with pancreatic cancer in some 10–20% of cases, and can be a hindrance to optimal management. The depression sometimes appears before the diagnosis of cancer, suggesting that it may be brought on by the biology of the disease. Other common manifestations of the disease include weakness and tiring easily, dry mouth, sleep problems, and a palpable abdominal mass. Symptoms of spread The spread of pancreatic cancer to other organs (metastasis) may also cause symptoms. Typically, pancreatic adenocarcinoma first spreads to nearby lymph nodes, and later to the liver or to the peritoneal cavity, large intestine, or lungs. Uncommonly, it spreads to the bones or brain. Cancers in the pancreas may also be secondary cancers that have spread from other parts of the body. This is uncommon, found in only about 2% of cases of pancreatic cancer. Kidney cancer is by far the most common cancer to spread to the pancreas, followed by colorectal cancer, and then cancers of the skin, breast, and lung. Surgery may be performed on the pancreas in such cases, whether in hope of a cure or to alleviate symptoms. Risk factors Risk factors for pancreatic adenocarcinoma include: Age, sex, and ethnicity – the risk of developing pancreatic cancer increases with age. Most cases occur after age 65, while cases before age 40 are uncommon. The disease is slightly more common in men than in women. In the United States, it is over 1.5 times more common in African Americans, though incidence in Africa is low. Cigarette smoking is the best-established avoidable risk factor for pancreatic cancer, approximately doubling risk among long-term smokers, the risk increasing with the number of cigarettes smoked and the years of smoking. The risk declines slowly after smoking cessation, taking some 20 years to return to almost that of nonsmokers. Obesity – a body mass index greater than 35 increases relative risk by about half. Family history – 5–10% of pancreatic cancer cases have an inherited component, where people have a family history of pancreatic cancer. The risk escalates greatly if more than one first-degree relative had the disease, and more modestly if they developed it before the age of 50. Most of the genes involved have not been identified. Hereditary pancreatitis gives a greatly increased lifetime risk of pancreatic cancer of 30–40% to the age of 70. Screening for early pancreatic cancer may be offered to individuals with hereditary pancreatitis on a research basis. Some people may choose to have their pancreas surgically removed to prevent cancer from developing in the future. Pancreatic cancer has been associated with these other rare hereditary syndromes: Peutz–Jeghers syndrome due to mutations in the STK11 tumor suppressor gene (very rare, but a very strong risk factor); dysplastic nevus syndrome (or familial atypical multiple mole and melanoma syndrome, FAMMM-PC) due to mutations in the CDKN2A tumor suppressor gene; autosomal recessive ATM and autosomal dominantly inherited mutations in the BRCA2 and PALB2 genes; hereditary non-polyposis colon cancer (Lynch syndrome); and familial adenomatous polyposis. PanNETs have been associated with multiple endocrine neoplasia type 1 (MEN1) and von Hippel Lindau syndromes. Chronic pancreatitis appears to almost triple risk, and as with diabetes, new-onset pancreatitis may be a symptom of a tumor. The risk of pancreatic cancer in individuals with familial pancreatitis is particularly high. 
Diabetes mellitus is a risk factor for pancreatic cancer, and (as noted in the Signs and symptoms section) new-onset diabetes may also be an early sign of the disease. People who have been diagnosed with type 2 diabetes for longer than 10 years may have a 50% increased risk compared with individuals without diabetes. In 2021, Venturi reported that the pancreas is able to absorb radioactive cesium (Cs-134 and Cs-137) in great quantities, causing chronic pancreatitis and probably pancreatic cancer, with damage to the pancreatic islets leading to type 3c (pancreatogenic) diabetes. Chronic pancreatitis, pancreatic cancer and diabetes mellitus increased in contaminated populations, particularly children and adolescents, after the Fukushima and Chernobyl nuclear incidents. At the same time, pancreatic diseases, diabetes and environmental radiocesium are increasing worldwide. Specific types of food (as distinct from obesity) have not been clearly shown to increase the risk of pancreatic cancer. Dietary factors for which some evidence shows slightly increased risk include processed meat, red meat, and meat cooked at very high temperatures (e.g. by frying, broiling, or grilling). Alcohol Drinking alcohol excessively is a major cause of chronic pancreatitis, which in turn predisposes to pancreatic cancer, but considerable research has failed to firmly establish alcohol consumption as a direct risk factor for pancreatic cancer. Overall, the association is consistently weak and the majority of studies have found no association, with smoking a strong confounding factor. The evidence is stronger for a link with heavy drinking, of at least six drinks per day. Pathophysiology Precancer Exocrine cancers are thought to arise from several types of precancerous lesions within the pancreas, but these lesions do not always progress to cancer, and the increased numbers detected as a byproduct of the increasing use of CT scans for other reasons are not all treated. Apart from pancreatic serous cystadenomas, which are almost always benign, four types of precancerous lesion are recognized. The first is pancreatic intraepithelial neoplasia (PanIN). These lesions are microscopic abnormalities in the pancreas and are often found in autopsies of people with no diagnosed cancer. These lesions may progress from low to high grade and then to a tumor. More than 90% of cases at all grades carry a faulty KRAS gene, while in grades 2 and 3, damage to three further genes – CDKN2A (p16), p53, and SMAD4 – is increasingly often found. A second type is the intraductal papillary mucinous neoplasm (IPMN). These are macroscopic lesions, which are found in about 2% of all adults. This rate rises to about 10% by age 70. These lesions have about a 25% risk of developing into invasive cancer. They may carry mutations in the KRAS gene (40–65% of cases) and in the GNAS Gs alpha subunit and RNF43, affecting the Wnt signaling pathway. Even if removed surgically, a considerably increased risk remains of pancreatic cancer developing subsequently. The third type, pancreatic mucinous cystic neoplasm (MCN), mainly occurs in women, and may remain benign or progress to cancer. If these lesions become large, cause symptoms, or have suspicious features, they can usually be successfully removed by surgery. A fourth type of cancer that arises in the pancreas is the intraductal tubulopapillary neoplasm. This type was recognised by the WHO in 2010 and constitutes about 1–3% of all pancreatic neoplasms. Mean age at diagnosis is 61 years (range 35–78 years). 
About 50% of these lesions become invasive. Diagnosis depends on histology, as these lesions are very difficult to differentiate from other lesions on either clinical or radiological grounds. Invasive cancer The genetic events found in ductal adenocarcinoma have been well characterized, and complete exome sequencing has been done for the common types of tumor. Four genes have each been found to be mutated in the majority of adenocarcinomas: KRAS (in 95% of cases), CDKN2A (also in 95%), TP53 (75%), and SMAD4 (55%). The last of these is especially associated with a poor prognosis. SWI/SNF mutations/deletions occur in about 10–15% of the adenocarcinomas. The genetic alterations in several other types of pancreatic cancer and precancerous lesions have also been researched. Transcriptomic analyses and mRNA sequencing for the common forms of pancreatic cancer have found that 75% of human genes are expressed in the tumors, with some 200 genes more specifically expressed in pancreatic cancer as compared to other tumor types. Pancreatic ductal adenocarcinoma cells are known to secrete immunosuppressive cytokines, creating a tumor microenvironment that inhibits immune detection and blocks anti-cancer immunity. Cancer-associated fibroblasts secrete fibrous tissue (desmoplasia) consisting of matrix metalloproteinases and hyaluronan, which blocks the host's CD8+ T-cells from reaching the tumor. Tumor-associated macrophages, neutrophils and regulatory T-cells secrete cytokines and work to create a tumor microenvironment that promotes cancer growth. PanNETs The genes often found mutated in pancreatic neuroendocrine tumors (PanNETs) are different from those in exocrine pancreatic cancer. For example, KRAS mutation is normally absent. Instead, hereditary MEN1 gene mutations give rise to MEN1 syndrome, in which primary tumors occur in two or more endocrine glands. About 40–70% of people born with a MEN1 mutation eventually develop a PanNET. Other genes that are frequently mutated include DAXX, mTOR, and ATRX. Diagnosis The symptoms of pancreatic adenocarcinoma do not usually appear in the disease's early stages, and they are not individually distinctive to the disease. The symptoms at diagnosis vary according to the location of the cancer in the pancreas, which anatomists divide (from left to right on most diagrams) into the thick head, the neck, and the tapering body, ending in the tail. Regardless of a tumor's location, the most common symptom is unexplained weight loss, which may be considerable. A large minority (between 35% and 47%) of people diagnosed with the disease will have had nausea, vomiting, or a feeling of weakness. Tumors in the head of the pancreas typically also cause jaundice, pain, loss of appetite, dark urine, and light-colored stools. Tumors in the body and tail typically also cause pain. People sometimes have recent onset of atypical type 2 diabetes that is difficult to control, a history of recent but unexplained blood vessel inflammation caused by blood clots (thrombophlebitis) known as Trousseau sign, or a previous attack of pancreatitis. A doctor may suspect pancreatic cancer when the onset of diabetes in someone over 50 years old is accompanied by typical symptoms such as unexplained weight loss, persistent abdominal or back pain, indigestion, vomiting, or fatty feces. Jaundice accompanied by a painlessly swollen gallbladder (known as Courvoisier's sign) may also raise suspicion, and can help differentiate pancreatic cancer from gallstones. 
Medical imaging techniques, such as computed tomography (CT) and endoscopic ultrasound (EUS), are used both to confirm the diagnosis and to help decide whether the tumor can be surgically removed (its "resectability"). On contrast CT scan, pancreatic cancer typically shows gradually increasing radiocontrast uptake, rather than the fast washout seen in a normal pancreas or the delayed washout seen in chronic pancreatitis. Magnetic resonance imaging and positron emission tomography may also be used, and magnetic resonance cholangiopancreatography may be useful in some cases. Abdominal ultrasound is less sensitive and will miss small tumors, but can identify cancers that have spread to the liver and build-up of fluid in the peritoneal cavity (ascites). It may be used for a quick and cheap first examination before other techniques. A biopsy by fine needle aspiration, often guided by endoscopic ultrasound, may be used where there is uncertainty over the diagnosis, but a histologic diagnosis is not usually required for removal of the tumor by surgery to go ahead. Liver function tests can show a combination of results indicative of bile duct obstruction (raised conjugated bilirubin, γ-glutamyl transpeptidase and alkaline phosphatase levels). CA19-9 (carbohydrate antigen 19-9) is a tumor marker that is frequently elevated in pancreatic cancer. However, it lacks sensitivity and specificity, not least because 5% of people lack the Lewis (a) antigen and cannot produce CA19-9. It has a sensitivity of 80% and specificity of 73% in detecting pancreatic adenocarcinoma, and is used for following known cases rather than for diagnosis. All those with pancreatic cancer require genetic testing, as high-risk oncogenic mutations may provide prognostic information, and certain mutations with high-risk features mean that first-degree relatives should undergo genetic testing as well. 
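To make concrete why 80% sensitivity and 73% specificity are inadequate for diagnosis or screening, the short sketch below applies Bayes' theorem to those figures. The prevalence used (1 in 1,000) is an assumed, purely illustrative value, not a figure from this article.

# Illustrative calculation: positive predictive value (PPV) of a CA19-9-like test,
# using the sensitivity and specificity quoted above and an ASSUMED prevalence.

sensitivity = 0.80   # P(test positive | cancer), from the text above
specificity = 0.73   # P(test negative | no cancer), from the text above
prevalence = 0.001   # assumed illustrative value (1 in 1,000); NOT from this article

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV = {ppv:.1%}")  # roughly 0.3%

Under these assumptions, roughly 997 of every 1,000 positive results would be false alarms, which is why such a marker is useful for following known cases but not for diagnosing or screening an unselected population.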
Histopathology The most common form of pancreatic cancer (adenocarcinoma) is typically characterized by moderately to poorly differentiated glandular structures on microscopic examination. There is typically considerable desmoplasia, or formation of a dense fibrous stroma or structural tissue, consisting of a range of cell types (including myofibroblasts, macrophages, lymphocytes and mast cells) and deposited material (such as type I collagen and hyaluronic acid). This creates a tumor microenvironment that is short of blood vessels (hypovascular) and so of oxygen (tumor hypoxia). It is thought that this prevents many chemotherapy drugs from reaching the tumor, one factor making the cancer especially hard to treat. Staging Exocrine cancers Pancreatic cancer is usually staged following a CT scan. The most widely used cancer staging system for pancreatic cancer is the one formulated by the American Joint Committee on Cancer (AJCC) together with the Union for International Cancer Control (UICC). The AJCC-UICC staging system designates four main overall stages, ranging from early to advanced disease, based on TNM classification of Tumor size, spread to lymph Nodes, and Metastasis. To help decide treatment, the tumors are also divided into three broader categories based on whether surgical removal seems possible: in this way, tumors are judged to be "resectable", "borderline resectable", or "unresectable". When the disease is still in an early stage (AJCC-UICC stages I and II), without spread to large blood vessels or distant organs such as the liver or lungs, surgical resection of the tumor can normally be performed, if the patient is willing to undergo this major operation and is thought to be sufficiently fit. The AJCC-UICC staging system allows distinction between stage III tumors that are judged to be "borderline resectable" (where surgery is technically feasible because the celiac axis and superior mesenteric artery are still free) and those that are "unresectable" (due to more locally advanced disease); in terms of the more detailed TNM classification, these two groups correspond to T3 and T4 respectively. Locally advanced adenocarcinomas have spread into neighboring organs, which may be any of the following (in roughly decreasing order of frequency): the duodenum, stomach, transverse colon, spleen, adrenal gland, or kidney. Very often they also spread to the important blood or lymphatic vessels and nerves that run close to the pancreas, making surgery far more difficult. Typical sites for metastatic spread (stage IV disease) are the liver, peritoneal cavity and lungs, all of which occur in 50% or more of fully advanced cases. PanNETs The 2010 WHO classification of tumors of the digestive system grades all the pancreatic neuroendocrine tumors (PanNETs) into three categories, based on their degree of cellular differentiation (from "NET G1" through to the poorly differentiated "NET G3"). The U.S. National Comprehensive Cancer Network recommends use of the same AJCC-UICC staging system as for pancreatic adenocarcinoma. Using this scheme, the stage-by-stage outcomes for PanNETs are dissimilar to those of the exocrine cancers. A different TNM system for PanNETs has been proposed by the European Neuroendocrine Tumor Society. Prevention and screening Apart from not smoking, the American Cancer Society recommends keeping a healthy weight and increasing consumption of fruits, vegetables, and whole grains, while decreasing consumption of red and processed meat, although there is no consistent evidence this will prevent or reduce pancreatic cancer specifically. A 2014 review of research concluded that there was evidence that consumption of citrus fruits and curcumin reduced risk of pancreatic cancer, while there was possibly a beneficial effect from whole grains, folate, selenium, and non-fried fish. In the general population, screening of large groups is not considered effective and may be harmful as of 2019, although newer techniques, and the screening of tightly targeted groups, are being evaluated. Nevertheless, regular screening with endoscopic ultrasound and MRI/CT imaging is recommended for those at high risk from inherited genetics. A 2019 meta-analysis found that use of aspirin might be negatively associated with the incidence risk of pancreatic cancer, but found no significant relationship with pancreatic cancer mortality. Management Exocrine cancer A key assessment made after diagnosis is whether surgical removal of the tumor is possible (see Staging), as this is the only cure for this cancer. Whether or not surgical resection can be offered depends on how much the cancer has spread. The exact location of the tumor is also a significant factor, and CT can show how it relates to the major blood vessels passing close to the pancreas. The general health of the person must also be assessed, though age in itself is not an obstacle to surgery. 
Chemotherapy and, to a lesser extent, radiotherapy are likely to be offered to most people, whether or not surgery is possible. Specialists advise that the management of pancreatic cancer should be in the hands of a multidisciplinary team including specialists in several aspects of oncology, and is, therefore, best conducted in larger centers. Surgery Surgery with the intention of a cure is only possible in around one-fifth (20%) of new cases. Although CT scans help, in practice it can be difficult to determine whether the tumor can be fully removed (its "resectability"), and it may only become apparent during surgery that it is not possible to successfully remove the tumor without damaging other vital tissues. Whether or not surgical resection can be offered depends on various factors, including the precise extent of local anatomical adjacency to, or involvement of, the venous or arterial blood vessels, as well as surgical expertise and a careful consideration of projected post-operative recovery. The age of the person is not in itself a reason not to operate, but their general performance status needs to be adequate for a major operation. One particular feature that is evaluated is the encouraging presence, or discouraging absence, of a clear layer or plane of fat creating a barrier between the tumor and the vessels. Traditionally, an assessment is made of the tumor's proximity to major venous or arterial vessels, in terms of "abutment" (defined as the tumor touching no more than half a blood vessel's circumference without any fat to separate it), "encasement" (when the tumor encloses most of the vessel's circumference), or full vessel involvement. A resection that includes encased sections of blood vessels may be possible in some cases, particularly if preliminary neoadjuvant therapy is feasible, using chemotherapy and/or radiotherapy. Even when the operation appears to have been successful, cancerous cells are often found around the edges ("margins") of the removed tissue, when a pathologist examines them microscopically (this will always be done), indicating the cancer has not been entirely removed. Furthermore, cancer stem cells are usually not evident microscopically, and if they are present they may continue to develop and spread. An exploratory laparoscopy (a small, camera-guided surgical procedure) may therefore be performed to gain a clearer idea of the outcome of a full operation. For cancers involving the head of the pancreas, the Whipple procedure is the most commonly attempted curative surgical treatment. This is a major operation which involves removing the pancreatic head and the curve of the duodenum together ("pancreato-duodenectomy"), making a bypass for food from the stomach to the jejunum ("gastro-jejunostomy") and attaching a loop of jejunum to the cystic duct to drain bile ("cholecysto-jejunostomy"). It can be performed only if the person is likely to survive major surgery and if the cancer is localized without invading local structures or metastasizing. It can, therefore, be performed only in a minority of cases. Cancers of the tail of the pancreas can be resected using a procedure known as a distal pancreatectomy, which often also entails removal of the spleen. Nowadays, this can often be done using minimally invasive surgery. Although curative surgery no longer entails the very high death rates that occurred until the 1980s, a high proportion of people (about 30–45%) still have to be treated for a post-operative sickness that is not caused by the cancer itself. 
The most common complication of surgery is difficulty in emptying the stomach. Certain more limited surgical procedures may also be used to ease symptoms (see Palliative care): for instance, if the cancer is invading or compressing the duodenum or colon. In such cases, bypass surgery might overcome the obstruction and improve quality of life but is not intended as a cure. Chemotherapy After surgery, adjuvant chemotherapy with gemcitabine or 5-FU can be offered if the person is sufficiently fit, after a recovery period of one to two months. In people not suitable for curative surgery, chemotherapy may be used to extend life or improve its quality. Before surgery, neoadjuvant chemotherapy or chemoradiotherapy may be used in cases that are considered to be "borderline resectable" (see Staging) in order to reduce the cancer to a level where surgery could be beneficial. In other cases neoadjuvant therapy remains controversial, because it delays surgery. Gemcitabine was approved by the United States Food and Drug Administration (FDA) in 1997, after a clinical trial reported improvements in quality of life and a five-week improvement in median survival duration in people with advanced pancreatic cancer. This was the first chemotherapy drug approved by the FDA primarily on the basis of a nonsurvival clinical trial endpoint. Chemotherapy using gemcitabine alone was the standard for about a decade, as a number of trials testing it in combination with other drugs failed to demonstrate significantly better outcomes. However, the combination of gemcitabine with erlotinib was found to increase survival modestly, and erlotinib was licensed by the FDA for use in pancreatic cancer in 2005. The FOLFIRINOX chemotherapy regimen using four drugs was found more effective than gemcitabine, but with substantial side effects, and is thus only suitable for people with good performance status. This is also true of protein-bound paclitaxel (nab-paclitaxel), which was licensed by the FDA in 2013 for use with gemcitabine in pancreatic cancer. By the end of 2013, both FOLFIRINOX and nab-paclitaxel with gemcitabine were regarded as good choices for those able to tolerate the side effects, and gemcitabine remained an effective option for those who were not. A head-to-head trial between the two new options is awaited, and trials investigating other variations continue. However, the changes of the last few years have only increased survival times by a few months. Clinical trials are often conducted for novel adjuvant therapies. Radiotherapy The role of radiotherapy as an auxiliary (adjuvant) treatment after potentially curative surgery has been controversial since the 1980s. In the early 2000s the European Study Group for Pancreatic Cancer (ESPAC) showed prognostic superiority of adjuvant chemotherapy over chemoradiotherapy. The European Society for Medical Oncology recommends that adjuvant radiotherapy should only be used for people enrolled in clinical trials. However, there is a continuing tendency for clinicians in the US to be more ready to use adjuvant radiotherapy than those in Europe. Many clinical trials have tested a variety of treatment combinations since the 1980s, but have failed to settle the matter conclusively. Radiotherapy may form part of treatment to attempt to shrink a tumor to a resectable state, but its use on unresectable tumors remains controversial as there are conflicting results from clinical trials. 
The preliminary results of one trial, presented in 2013, "markedly reduced enthusiasm" for its use on locally advanced tumors. PanNETs Treatment of PanNETs, including the less common malignant types, may include a number of approaches. Some small tumors of less than 1 cm that are identified incidentally, for example on a CT scan performed for other purposes, may be followed by watchful waiting. This depends on the assessed risk of surgery, which is influenced by the site of the tumor and the presence of other medical problems. Tumors within the pancreas only (localized tumors), or with limited metastases, for example to the liver, may be removed by surgery. The type of surgery depends on the tumor location and the degree of spread to lymph nodes. For localized tumors, the surgical procedure may be much less extensive than the types of surgery used to treat pancreatic adenocarcinoma described above, but otherwise surgical procedures are similar to those for exocrine tumors. The range of possible outcomes varies greatly; some types have a very high survival rate after surgery while others have a poor outlook. As all tumors in this group are rare, guidelines emphasize that treatment should be undertaken in a specialized center. Use of liver transplantation may be considered in certain cases of liver metastasis. For functioning tumors, the somatostatin analog class of medications, such as octreotide, can reduce the excessive production of hormones. Lanreotide can slow tumor growth. If the tumor is not amenable to surgical removal and is causing symptoms, targeted therapy with everolimus or sunitinib can reduce symptoms and slow progression of the disease. Standard cytotoxic chemotherapy is generally not very effective for PanNETs, but may be used when other drug treatments fail to prevent the disease from progressing, or in poorly differentiated PanNET cancers. Radiation therapy is occasionally used if there is pain due to anatomic extension, such as metastasis to bone. Some PanNETs absorb specific peptides or hormones, and these PanNETs may respond to nuclear medicine therapy with radiolabeled peptides or hormones such as iobenguane (iodine-131-MIBG). Radiofrequency ablation (RFA), cryoablation, and hepatic artery embolization may also be used. Palliative care Palliative care is medical care which focuses on treatment of symptoms from serious illness, such as cancer, and on improving quality of life. Because pancreatic adenocarcinoma is usually diagnosed after it has progressed to an advanced stage, palliative care as a treatment of symptoms is often the only treatment possible. Palliative care focuses not on treating the underlying cancer, but on treating symptoms such as pain or nausea, and can assist in decision-making, including when or if hospice care will be beneficial. Pain can be managed with medications such as opioids or through procedural intervention, by a celiac plexus block (CPB). This alters or, depending on the technique used, destroys the nerves that transmit pain from the abdomen. CPB is a safe and effective way to reduce the pain, and it generally reduces the need for opioid painkillers, which have significant negative side effects. Other symptoms or complications that can be treated with palliative surgery include obstruction of the intestines or bile ducts by the tumor. For the latter, which occurs in well over half of cases, a small metal tube called a stent may be inserted by endoscope to keep the ducts draining. 
Palliative care can also help treat the depression that often comes with the diagnosis of pancreatic cancer. Both surgery and advanced inoperable tumors often lead to digestive system disorders from a lack of the exocrine products of the pancreas (exocrine insufficiency). These can be treated by taking pancreatin, which contains manufactured pancreatic enzymes and is best taken with food. Difficulty in emptying the stomach (delayed gastric emptying) is common and can be a serious problem, involving hospitalization. Treatment may involve a variety of approaches, including draining the stomach by nasogastric aspiration and drugs called proton-pump inhibitors or H2 antagonists, which both reduce production of gastric acid. Medications like metoclopramide can also be used to clear stomach contents. Prognosis Pancreatic adenocarcinoma and the other less common exocrine cancers have a very poor prognosis, as they are normally diagnosed at a late stage when the cancer is already locally advanced or has spread to other parts of the body. Outcomes are much better for PanNETs: many are benign and completely without clinical symptoms, and even those cases not treatable with surgery have an average five-year survival rate of 16%, although the outlook varies considerably according to the type. For locally advanced and metastatic pancreatic adenocarcinomas, which together represent over 80% of cases, numerous trials comparing chemotherapy regimes have shown increased survival times, but not to more than one year. Overall five-year survival for pancreatic cancer in the US has improved from 2% in cases diagnosed in 1975–1977, and 4% in 1987–1989 diagnoses, to 6% in 2003–2009. In the less than 20% of cases of pancreatic adenocarcinoma with a diagnosis of a localized and small cancerous growth (less than 2 cm in Stage T1), about 20% of Americans survive to five years. About 1500 genes are linked to outcomes in pancreatic adenocarcinoma. These include both unfavorable genes, where high expression is related to poor outcome, for example C-Met and MUC-1, and favorable genes, where high expression is associated with better survival, for example the transcription factor PELP1. Distribution In 2015, pancreatic cancers of all types resulted in 411,600 deaths globally. In 2014, an estimated 46,000 people in the US were expected to be diagnosed with pancreatic cancer and 40,000 were expected to die of it. Although it accounts for only 2.5% of new cases, pancreatic cancer is responsible for 6% of cancer deaths each year. It is the seventh-highest cause of death from cancer worldwide. Pancreatic cancer is the fifth most-common cause of death from cancer in the United Kingdom, and the third most-common in the United States. Globally, pancreatic cancer is the 11th most-common cancer in women and the 12th most-common in men. The majority of recorded cases occur in developed countries. People from the United States have an average lifetime risk of about 1 in 67 (or 1.5%) of developing the disease, slightly higher than the figure for the UK. The disease is more common in men than women, although the difference in rates has narrowed over recent decades, probably reflecting earlier increases in female smoking. In the United States, the risk for African Americans is over 50% greater than for whites, but the rates in Africa and East Asia are much lower than those in North America or Europe. The United States, Central and Eastern Europe, and Argentina and Uruguay all have high rates. 
PanNETs The annual incidence of clinically recognized pancreatic neuroendocrine tumors (PanNETs) is low (about 5 per one million person-years) and is dominated by the non-functioning types. Somewhere between 45% and 90% of PanNETs are thought to be of the non-functioning types. Studies of autopsies have uncovered small PanNETs rather frequently, suggesting that the prevalence of tumors that remain inert and asymptomatic may be relatively high. Overall, PanNETs are thought to account for about 1 to 2% of all pancreatic tumors. The definition and classification of PanNETs has changed over time, affecting what is known about their epidemiology and clinical relevance. History Recognition and diagnosis The earliest recognition of pancreatic cancer has been attributed to the 18th-century Italian scientist Giovanni Battista Morgagni, the historical father of modern-day anatomic pathology, who claimed to have traced several cases of cancer in the pancreas. Many 18th and 19th-century physicians were skeptical about the existence of the disease, given the similar appearance of pancreatitis. Some case reports were published in the 1820s and 1830s, and a genuine histopathologic diagnosis was eventually recorded by the American clinician Jacob Mendes Da Costa, who also doubted the reliability of Morgagni's interpretations. By the start of the 20th century, cancer of the head of the pancreas had become a well-established diagnosis. Regarding the recognition of PanNETs, the possibility of cancer of the islet cells was initially suggested in 1888. The first case of hyperinsulinism due to a tumor of this type was reported in 1927. Recognition of a non-insulin-secreting type of PanNET is generally ascribed to the American surgeons R. M. Zollinger and E. H. Ellison, who gave their names to Zollinger–Ellison syndrome, after postulating the existence of a gastrin-secreting pancreatic tumor in a report of two cases of unusually severe peptic ulcers published in 1955. In 2010, the WHO recommended that PanNETs be referred to as "neuroendocrine" rather than "endocrine" tumors. Small precancerous neoplasms of many pancreatic cancers are being detected at greatly increased rates by modern medical imaging. One type, the intraductal papillary mucinous neoplasm (IPMN), was first described by Japanese researchers in 1982. It was noted in 2010 that: "For the next decade, little attention was paid to this report; however, over the subsequent 15 years, there has been a virtual explosion in the recognition of this tumor." Surgery The first reported partial pancreaticoduodenectomy was performed by the Italian surgeon Alessandro Codivilla in 1898, but the patient survived only 18 days before succumbing to complications. Early operations were compromised partly because of mistaken beliefs that people would die if their duodenum were removed, and also, at first, if the flow of pancreatic juices stopped. Later it was thought, also mistakenly, that the pancreatic duct could simply be tied up without serious adverse effects; in fact, it will very often leak later on. In 1907–1908, after some more unsuccessful operations by other surgeons, experimental procedures were tried on corpses by French surgeons. In 1912 the German surgeon Walther Kausch was the first to remove large parts of the duodenum and pancreas together (en bloc). This was in Breslau, now Wrocław, in Poland. 
In 1918 it was demonstrated, in operations on dogs, that it is possible to survive even after complete removal of the duodenum, but no such result was reported in human surgery until 1935, when the American surgeon Allen Oldfather Whipple published the results of a series of three operations at Columbia Presbyterian Hospital in New York. Only one of the patients had the duodenum entirely removed, but he survived for two years before dying of metastasis to the liver. The first operation was unplanned, as cancer was only discovered in the operating theater. Whipple's success showed the way for the future, but the operation remained a difficult and dangerous one until recent decades. He published several refinements to his procedure, including the first total removal of the duodenum in 1940, but he only performed a total of 37 operations. The discovery in the late 1930s that vitamin K prevented bleeding with jaundice, and the development of blood transfusion as an everyday process, both improved post-operative survival, but as late as the 1970s about 25% of people did not leave the hospital alive. In the 1970s a group of American surgeons argued that the procedure was too dangerous and should be abandoned. Since then outcomes in larger centers have improved considerably, and mortality from the operation is often less than 4%. In 2006 a report was published of a series of 1,000 consecutive pancreatico-duodenectomies performed by a single surgeon from Johns Hopkins Hospital between 1969 and 2003. The rate of these operations had increased steadily over this period, with only three of them performed before 1980; the median operating time fell from 8.8 hours in the 1970s to 5.5 hours in the 2000s, and mortality within 30 days or in hospital was only 1%. Another series of 2,050 operations at the Massachusetts General Hospital between 1941 and 2011 showed a similar picture of improvement. Research directions Early-stage research on pancreatic cancer includes studies of genetics and early detection, treatment at different cancer stages, surgical strategies, and targeted therapies, such as inhibition of growth factors, immune therapies, and vaccines. Bile acids may have a role in the carcinogenesis of pancreatic cancer. A key question is the timing of events as the disease develops and progresses, particularly the role of diabetes, and how and when the disease spreads. The knowledge that new onset of diabetes can be an early sign of the disease could facilitate timely diagnosis and prevention if a workable screening strategy can be developed. The European Registry of Hereditary Pancreatitis and Familial Pancreatic Cancer (EUROPAC) trial is aiming to determine whether regular screening is appropriate for people with a family history of the disease. Keyhole surgery (laparoscopy) is being evaluated as an alternative to the open Whipple procedure, particularly in terms of recovery time. Irreversible electroporation is a relatively novel ablation technique with potential for downstaging and prolonging survival in persons with locally advanced disease, especially for tumors close to peri-pancreatic vessels, since it poses little risk of vascular trauma. Efforts are underway to develop new drugs, including those targeting molecular mechanisms for cancer onset, stem cells, and cell proliferation. A further approach involves the use of immunotherapy, such as oncolytic viruses. Galectin-specific mechanisms of the tumor microenvironment are under study. 
Research is also being conducted into nanoparticle-based drug delivery: nanoparticles can assist in the sustained and targeted release of a drug regimen to cancer/tumor-specific sites rather than affecting healthy cells, leading to negligible or no toxicity.
Biology and health sciences
Cancer
Health
363734
https://en.wikipedia.org/wiki/Adobe%20Creative%20Suite
Adobe Creative Suite
Adobe Creative Suite (CS) is a discontinued software suite of graphic design, video editing, and web development applications developed by Adobe Systems. The last of the Creative Suite versions, Adobe Creative Suite 6 (CS6), was launched at a release event on April 23, 2012, and released on May 7, 2012. CS6 was the last of the Adobe design tools to be physically shipped as boxed software, as future releases and updates would be delivered via download only. On May 6, 2013, Adobe announced that CS6 would be the last version of the Creative Suite, and that future versions of their creative software would only be available via their Adobe Creative Cloud subscription model. Adobe also announced that it would continue to support CS6 and would provide bug fixes and security updates through the next major upgrades of both Mac and Windows operating systems (as of 2013). The Creative Suite packages were pulled from Adobe's online store in 2013, but were still available on their website until January 2017. Applications The various Adobe Creative Suite editions bundled different combinations of the core applications; each edition came with either all of these apps or only a subset. Editions Adobe sold Creative Suite applications in several different combinations called "editions"; these included: Adobe Creative Suite 6 Design Standard is an edition of the Adobe Creative Suite 6 family of products intended for professional print, web, interactive and mobile designers. Adobe Creative Suite 6 Design & Web Premium is an edition of the Adobe Creative Suite 6 family of products intended for professional web designers and developers. Adobe Creative Suite 6 Production Premium is an edition of the Adobe Creative Suite 6 family of products intended for professional rich media and video post-production experts who create projects for film, video, broadcast, web, DVD, Blu-ray Disc, and mobile devices. Adobe Creative Suite 6 Master Collection contains applications from all of the above editions. Adobe Flash Catalyst, Adobe Contribute, Adobe OnLocation, and Adobe Device Central, previously available in CS5.5, were dropped from the CS6 line-up. Adobe Prelude and Adobe Encore were not released as standalone products; Adobe Encore was available as part of Adobe Premiere Pro. Adobe InCopy, a word processing application that integrates with Adobe InDesign, is also part of the Creative Suite family, but was not included in any CS6 edition. In March 2013, it was reported that Adobe would no longer sell boxed copies of the Creative Suite software, instead offering digital downloads and monthly subscriptions. History Creative Suite 1 and 2 The first version of Adobe Creative Suite was released in September 2003 and Creative Suite 2 in April 2005. The first two versions (CS and CS2) were available in two editions. The Standard Edition included: Adobe Bridge (since CS2), Adobe Illustrator, Adobe InCopy, Adobe InDesign, Adobe Photoshop, Adobe Premiere Pro (since CS2), Adobe ImageReady, Adobe Version Cue, a design guide and training resources, and Adobe Stock Photos. The Premium Edition also included: Adobe Acrobat Professional (Version 8 in CS2.3), Adobe Dreamweaver (since CS2.3), and Adobe GoLive. Creative Suite helped InDesign become the dominant publishing software, replacing QuarkXPress, because customers who purchased the suite for Photoshop and Illustrator received InDesign at no additional cost. 
Adobe shut down the "activation" servers for CS2 in December 2012, making it impossible for licensed users to reinstall the software if needed. In response to complaints, Adobe then made available for download a version of CS2 that did not require online activation, and published a serial number to activate it offline. Because there was no mechanism to prevent people who had never purchased a CS2 license from downloading and activating it, it was widely thought that the aging software had become freeware, despite Adobe's later explanation that it was intended only for people who had "legitimately purchased CS2". The later shutdown of the CS3 and CS4 activation servers was handled differently, with registered users given the opportunity to get individual serial numbers for offline activation, rather than a published one. Creative Suite Production Studio Adobe Creative Suite Production Studio (previously Adobe Video Collection) was a suite of programs for acquiring, editing, and distributing digital video and audio that was released during the same timeframe as Adobe Creative Suite 2. The suite was available in standard and premium editions. The Adobe Production Studio Premium edition consisted of: Adobe After Effects Professional, Adobe Audition, Adobe Bridge, Adobe Encore DVD, Adobe Premiere Pro, Adobe Photoshop, Adobe Illustrator, and Adobe Dynamic Link (not sold separately). The Standard edition consisted of: Adobe After Effects Standard, Adobe Bridge, Adobe Premiere Pro, and Adobe Photoshop. From CS3 onwards, Adobe Production Studio became part of the Creative Suite family; the equivalent edition to Production Studio Premium is Adobe Creative Suite Production Premium. Macromedia Studio Macromedia Studio was a suite of programs for web content creation designed and distributed by Macromedia. After Adobe's 2005 acquisition of Macromedia, Macromedia Studio 8 was replaced, modified, and integrated into two editions of the Adobe Creative Suite family of software from version 2.3 onwards. The closest relative of Macromedia Studio 8 in the new line-up was Adobe Creative Suite Web Premium. Core applications from Macromedia Studio were merged into Adobe Creative Suite from CS3 onwards, including Flash, Dreamweaver, and Fireworks. Some Macromedia applications were absorbed into existing Adobe products, e.g. FreeHand was replaced with Adobe Illustrator. Director and ColdFusion were not made part of Adobe Creative Suite and remained available only as standalone products. The versions of Macromedia Studio released include: Macromedia Studio MX Released May 29, 2002; internally it was version 6 and the first incarnation of the studio to use the "MX" suffix, which for marketing purposes was a shorthand abbreviation meaning "Maximize". Studio MX included Dreamweaver, Flash, FreeHand, Fireworks and a developer edition of ColdFusion. Macromedia Studio MX Plus Released February 10, 2003; sometimes referred to as MX 1.1. MX Plus was a special edition release of MX that included FreeHand MX (replacing FreeHand 10), Contribute and the DevNet Resource Kit Special Edition in addition to the existing MX suite of products. Macromedia Studio MX 2004 Released September 10, 2003; despite its name, it was internally version 7. Studio MX 2004 included FreeHand along with updated versions of Dreamweaver, Flash and Fireworks. An alternate version of Studio MX 2004 included Flash Professional and a new interface for Dreamweaver. Macromedia Studio 8 Released September 13, 2005; Studio 8 was the last version of Macromedia Studio. 
It comprised Dreamweaver 8, Flash 8, Flash 8 Video Converter, Fireworks 8, Contribute 3 and FlashPaper. Creative Suite 3 Adobe Creative Suite 3 (CS3) was announced on March 27, 2007; it introduced universal binaries for all major programs for the Apple Macintosh, as well as including all of the core applications from Macromedia Studio and Production Studio. Some Creative Suite programs also began using the Presto layout engine used in the Opera web browser. Adobe began selling CS3 applications in six different combinations called "editions". Design Standard & Premium and Web Standard & Premium began shipping on April 16, 2007, and the Production Premium and Master Collection editions began shipping on July 2, 2007. The last released CS3 version was version 3.3, released on June 2, 2008. In this version, Fireworks CS3 was included in Design Premium, and all editions that had included Acrobat 8 Pro had it replaced with Acrobat 9 Pro. CS3 included several programs, including Dreamweaver, Flash Professional, and Fireworks, that were developed by Macromedia, a former rival acquired by Adobe in 2005. It also included Adobe OnLocation and Adobe Ultra, which were developed by Serious Magic, a firm acquired by Adobe in 2006. Adobe dropped the following programs (previously included in CS2) from the CS3 software bundles: Adobe GoLive (replaced by Adobe Dreamweaver), Adobe ImageReady (merged into Adobe Photoshop and replaced by Adobe Fireworks), and Adobe Audition (replaced by Adobe Soundbooth). Adobe announced that it would continue to develop Audition as a standalone product, while GoLive was discontinued. Adobe GoLive 9 was released as a standalone product on June 10, 2007. Adobe Audition 3 was announced as a standalone product on September 6, 2007. Adobe discontinued ImageReady and replaced it with Fireworks, with some of ImageReady's features integrated into Photoshop. Audition became part of the Creative Suite again in CS5.5, when Soundbooth was discontinued. Creative Suite 4 Adobe Creative Suite 4 (CS4) was announced on September 23, 2008, and officially released on October 15, 2008. All applications in CS4 featured the same user interface, with a new tabbed interface for working with concurrently running Adobe CS4 programs, where multiple documents can be opened inside multiple tabs contained in a single window. Adobe CS4 was also developed to perform better under 64-bit and multi-core processors. On Microsoft Windows, Adobe Photoshop CS4 ran natively as a 64-bit application. Although they were not natively 64-bit applications, Adobe After Effects CS4 and Adobe Premiere Pro CS4 had been optimized for 64-bit computers. However, there were no 64-bit versions of CS4 available for Mac OS X. Additionally, CS4 was the last version of Adobe Creative Suite installable on the PowerPC architecture on Mac OS X, although not all applications in the suite were available for PowerPC. The unavailable products on PowerPC included the featured applications within the Production Premium collection (Soundbooth, Encore, After Effects, Premiere, and OnLocation). In early testing of 64-bit support in Adobe Photoshop CS4, overall performance gains ranged from 8% to 12%, because 64-bit applications can address larger amounts of memory, resulting in less file swapping, one of the biggest factors affecting data processing speed. 
Two programs were dropped from the CS4 line-up: Adobe Ultra, a vector keying application which utilizes image analysis technology to produce high quality chroma key effects in less than ideal lighting environments and provides keying of a subject into a virtual 3D environment through virtual set technology, and Adobe Stock Photos. Creative Suite 5 Adobe Creative Suite 5 (CS5) was released on April 30, 2010. From CS5 onwards, Windows versions of Adobe Premiere Pro CS5 and Adobe After Effects CS5 were 64-bit only and required at least Windows Vista 64-bit or a later 64-bit Windows version; Windows XP Professional x64 Edition was no longer supported. The Mac versions of the CS5 programs were rewritten using Mac OS X's Cocoa APIs in an effort to modernize the codebase. These new Mac versions dropped support for PowerPC-based Macs and were 64-bit Intel-only. Adobe Version Cue, an application that enabled users to track and manipulate file metadata and automate the process of collaboratively reviewing documents among groups of people, and the Adobe Creative Suite Web Standard edition, previously available in CS4, were dropped from the CS5 line-up. Creative Suite 5.5 Following the release of CS5 in April 2010, Adobe changed its release strategy to an every-other-year release of major versions. CS5.5 was presented on April 12, 2011, as an interim release until CS6. The update helped developers optimize websites for a variety of tablets, smartphones, and other devices. At the same time, Adobe announced a subscription-based pay service as an alternative to full purchase. On July 1, 2011, Adobe Systems announced its Switcher Program, which allowed people who had purchased any version of Apple's Final Cut Pro or Avid Media Composer to receive a 50 percent discount on Creative Suite CS5.5 Production Premium or Premiere Pro CS5.5. Not all products were upgraded to CS5.5 in this release; applications that were upgraded included Adobe InDesign, Adobe Flash Catalyst, Adobe Flash Professional, Adobe Dreamweaver, Adobe Premiere Pro, Adobe After Effects, and Adobe Device Central. Adobe Audition also replaced Adobe Soundbooth in CS5.5, Adobe Story was first offered as an AIR-powered screenwriting and preproduction application, and Adobe Acrobat X Pro replaced Acrobat 9.3 Pro. Creative Suite 6 During an Adobe conference call on June 21, 2011, CEO Shantanu Narayen said that the April 2011 launch of CS5.5 was "the first release in our transition to an annual release cycle", adding, "We intend to ship the next milestone release of Creative Suite in 2012." On March 21, 2012, Adobe released a freely available beta version of Adobe Photoshop CS6. The final version of Adobe CS6 was launched at a release event on April 23, 2012, and first shipped on May 7. Adobe also launched a subscription-based offering named Adobe Creative Cloud, where users are able to gain access to individual applications or the full Adobe Creative Suite 6 suite on a per-month basis, plus additional cloud storage space and services. The native 64-bit Windows applications available in Creative Suite 6 were Photoshop, Illustrator, After Effects (64-bit only), Premiere Pro (64-bit only), Encore (64-bit only), SpeedGrade (64-bit only) and Bridge. 
Discontinuation On May 5, 2013, during the opening keynote of its Adobe MAX conference, Adobe announced that it was retiring the "Creative Suite" branding in favor of "Creative Cloud", and making all future feature updates to its software (now appended with "CC" instead of "CS", e.g. Photoshop CC) available via the Creative Cloud subscription service rather than through the purchase of perpetual licenses. Customers must pay a subscription fee, and if they stop paying they lose access to the applications, as well as to work saved in proprietary file formats that are not backward-compatible with the Creative Suite (Adobe admitted that this is a valid concern). Individual subscribers must have an Internet connection to download the software and to use the 2 GB of provided storage space (or the additionally purchased 20 GB), and must validate the license monthly. Adobe's decision to make the subscription service the only sales route for its creative software was met with strong criticism (see Creative Cloud controversy). Several online articles began suggesting replacements for Photoshop, Illustrator, and other programs, pointing to free software such as GIMP and Inkscape and to competing products such as Affinity Designer, CorelDRAW, PaintShop Pro, and Pixelmator as alternatives. In addition to many of the products formerly part of the Creative Suite (one product, Fireworks, was announced as having reached the end of its development cycle), Creative Cloud also offers subscription-exclusive products such as Adobe Muse and the Adobe Edge family, Web-based file and website hosting, Typekit fonts, and access to the Behance social media platform. The new CC versions of the applications, and the full launch of the updated Creative Cloud service, were announced for June 17, 2013. New versions with major feature updates have been released regularly, with a refresh of the file formats occurring in October 2014. Adobe also announced that it would continue to offer bug fixes for the CS6 products so that they would continue to run on the next versions of Microsoft Windows and Apple OS X. However, Adobe has said there are no updates planned to enable CS6 to run in macOS Catalina.
Technology
Multimedia_2
null
363903
https://en.wikipedia.org/wiki/Newtonian%20fluid
Newtonian fluid
A Newtonian fluid is a fluid in which the viscous stresses arising from its flow are at every point linearly correlated to the local strain rate — the rate of change of its deformation over time. Stresses are proportional to the rate of change of the fluid's velocity vector. A fluid is Newtonian only if the tensors that describe the viscous stress and the strain rate are related by a constant viscosity tensor that does not depend on the stress state and velocity of the flow. If the fluid is also isotropic (i.e., its mechanical properties are the same along any direction), the viscosity tensor reduces to two real coefficients, describing the fluid's resistance to continuous shear deformation and continuous compression or expansion, respectively. Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common and include oobleck (which becomes stiffer when vigorously sheared) and non-drip paint (which becomes thinner when sheared). Other examples include many polymer solutions (which exhibit the Weissenberg effect), molten polymers, many solid suspensions, blood, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton, who first used the differential equation to postulate the relation between the shear strain rate and shear stress for such fluids. Definition An element of a flowing liquid or gas will endure forces from the surrounding fluid, including viscous stress forces that cause it to gradually deform over time. These forces can be approximated mathematically to first order by a viscous stress tensor, usually denoted by $\tau$. The deformation of a fluid element, relative to some previous state, can be approximated to first order by a strain tensor that changes with time. The time derivative of that tensor is the strain rate tensor, which expresses how the element's deformation is changing with time; it is also the gradient of the velocity vector field at that point, often denoted $\nabla v$. The tensors $\tau$ and $\nabla v$ can be expressed by 3×3 matrices, relative to any chosen coordinate system. The fluid is said to be Newtonian if these matrices are related by the equation $\boldsymbol{\tau} = \boldsymbol{\mu} (\nabla v)$, where $\boldsymbol{\mu}$ is a fixed 3×3×3×3 fourth-order tensor that does not depend on the velocity or stress state of the fluid. Incompressible isotropic case For an incompressible and isotropic Newtonian fluid in laminar flow only in the direction x (i.e. where viscosity is isotropic in the fluid), the shear stress is related to the strain rate by the simple constitutive equation $\tau = \mu \frac{du}{dy}$, where $\tau$ is the shear stress ("skin drag") in the fluid, $\mu$ is a scalar constant of proportionality, the dynamic viscosity of the fluid, and $\frac{du}{dy}$ is the derivative in the direction y, normal to x, of the flow velocity component u that is oriented along the direction x. In the case of a general 2D incompressible flow in the plane x, y, the Newtonian constitutive equation becomes $\tau_{xy} = \mu \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right)$, where $\tau_{xy}$ is the shear stress ("skin drag") in the fluid, $\frac{\partial u}{\partial y}$ is the partial derivative in the direction y of the flow velocity component u that is oriented along the direction x, and $\frac{\partial v}{\partial x}$ is the partial derivative in the direction x of the flow velocity component v that is oriented along the direction y. 
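To make the simple shear relation concrete, here is a minimal Python sketch of Newton's law of viscosity, $\tau = \mu \frac{du}{dy}$, for laminar flow between two parallel plates; the fluid property and flow values are illustrative assumptions, not data from the article.

```python
# Minimal sketch of Newton's law of viscosity for laminar planar flow:
# tau = mu * du/dy.  The values below are illustrative assumptions.

def newtonian_shear_stress(mu: float, du_dy: float) -> float:
    """Shear stress (Pa) of a Newtonian fluid: tau = mu * du/dy."""
    return mu * du_dy

# Example: water (mu ~ 1.0e-3 Pa*s) sheared between plates 1 mm apart,
# with the top plate moving at 0.1 m/s, so du/dy = 0.1 / 0.001 = 100 1/s.
mu_water = 1.0e-3           # dynamic viscosity, Pa*s (approximate)
shear_rate = 0.1 / 1.0e-3   # velocity difference / gap, 1/s

print(newtonian_shear_stress(mu_water, shear_rate))  # ~0.1 Pa
```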
We can now generalize to the case of an incompressible flow with a general direction in 3D space, where the above constitutive equation becomes $\tau_{ij} = \mu \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right)$, where $x_j$ is the $j$th spatial coordinate, $v_i$ is the fluid's velocity in the direction of axis $i$, and $\tau_{ij}$ is the $ij$-th component of the stress acting on the faces of the fluid element perpendicular to axis $j$ — the $ij$-th component of the shear stress tensor. This can be written in more compact tensor notation as $\boldsymbol{\tau} = \mu \left( \nabla \mathbf{v} + \nabla \mathbf{v}^{\mathrm{T}} \right)$, where $\nabla \mathbf{v}$ is the flow velocity gradient. An alternative way of stating this constitutive equation is $\boldsymbol{\tau} = 2 \mu \boldsymbol{\varepsilon}$, where $\boldsymbol{\varepsilon} = \frac{1}{2} \left( \nabla \mathbf{v} + \nabla \mathbf{v}^{\mathrm{T}} \right)$ is the rate-of-strain tensor. This constitutive equation is also called the Newton law of viscosity. The total stress tensor $\boldsymbol{\sigma}$ can always be decomposed as the sum of the isotropic stress tensor and the deviatoric stress tensor ($\boldsymbol{\sigma}'$): $\boldsymbol{\sigma} = \frac{1}{3} \operatorname{tr}(\boldsymbol{\sigma}) \mathbf{I} + \boldsymbol{\sigma}'$. In the incompressible case, the isotropic stress is simply proportional to the thermodynamic pressure $p$: $\frac{1}{3} \operatorname{tr}(\boldsymbol{\sigma}) = -p$, and the deviatoric stress is coincident with the shear stress tensor: $\boldsymbol{\sigma}' = \boldsymbol{\tau}$. The stress constitutive equation then becomes $\boldsymbol{\sigma} = -p \mathbf{I} + \mu \left( \nabla \mathbf{v} + \nabla \mathbf{v}^{\mathrm{T}} \right)$, or written in more compact tensor notation $\boldsymbol{\sigma} = -p \mathbf{I} + 2 \mu \boldsymbol{\varepsilon}$, where $\mathbf{I}$ is the identity tensor. General compressible case Newton's constitutive law for a compressible flow results from the following assumptions on the Cauchy stress tensor: the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity, so the stress variable is the tensor gradient $\nabla \mathbf{v}$, or more simply the rate-of-strain tensor $\boldsymbol{\varepsilon}$; the deviatoric stress is linear in this variable: $\boldsymbol{\sigma}' = \mathbf{C} : \boldsymbol{\varepsilon}$, where $\mathbf{C}$ is independent of the strain rate tensor and is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and $:$ is the double-dot product; the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently $\mathbf{C}$ is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, it can be expressed in terms of two scalar Lamé parameters, the second viscosity $\lambda$ and the dynamic viscosity $\mu$, as is usual in linear elasticity: $\boldsymbol{\sigma}' = \lambda \operatorname{tr}(\boldsymbol{\varepsilon}) \mathbf{I} + 2 \mu \boldsymbol{\varepsilon}$, where $\mathbf{I}$ is the identity tensor and $\operatorname{tr}(\boldsymbol{\varepsilon})$ is the trace of the rate-of-strain tensor. Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow, $\operatorname{tr}(\boldsymbol{\varepsilon}) = \nabla \cdot \mathbf{v}$, and since the trace of the identity tensor in three dimensions is three, the trace of the stress tensor in three dimensions becomes $\operatorname{tr}(\boldsymbol{\sigma}) = -3p + (3\lambda + 2\mu) \, \nabla \cdot \mathbf{v}$. So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics, and introducing the bulk viscosity $\zeta = \lambda + \frac{2}{3}\mu$, we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics: $\boldsymbol{\sigma} = -\left( p - \zeta \, \nabla \cdot \mathbf{v} \right) \mathbf{I} + 2 \mu \left( \boldsymbol{\varepsilon} - \frac{1}{3} (\nabla \cdot \mathbf{v}) \mathbf{I} \right)$, which can also be arranged in the other usual form $\boldsymbol{\sigma} = -p \mathbf{I} + \lambda (\nabla \cdot \mathbf{v}) \mathbf{I} + 2 \mu \boldsymbol{\varepsilon}$. Note that in the compressible case the pressure is no longer proportional to the isotropic stress term, since there is the additional bulk viscosity term $\zeta \, \nabla \cdot \mathbf{v}$. The deviatoric stress tensor is still coincident with the shear stress tensor (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), but it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity: $\boldsymbol{\tau} = \mu \left( \nabla \mathbf{v} + \nabla \mathbf{v}^{\mathrm{T}} - \frac{2}{3} (\nabla \cdot \mathbf{v}) \mathbf{I} \right)$. Note that the incompressible case corresponds to the assumption that the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow, resulting in a solenoidal velocity field with $\nabla \cdot \mathbf{v} = 0$. So one returns to the expressions for pressure and deviatoric stress seen in the preceding paragraph. 
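As a sketch of how the full constitutive law can be evaluated numerically, the following assumes the compressible form $\boldsymbol{\sigma} = -p \mathbf{I} + \lambda (\nabla \cdot \mathbf{v}) \mathbf{I} + 2 \mu \boldsymbol{\varepsilon}$ derived above and builds the stress tensor with NumPy; the variable names and sample values are chosen here for illustration.

```python
# Sketch of the compressible Newtonian constitutive law,
# sigma = -p*I + lam*div(v)*I + 2*mu*eps, assembled with NumPy.
# Variable names (grad_v, lam, mu, p) and values are illustrative.
import numpy as np

def newtonian_stress(grad_v: np.ndarray, mu: float, lam: float, p: float) -> np.ndarray:
    """Cauchy stress tensor for an isotropic Newtonian fluid."""
    eps = 0.5 * (grad_v + grad_v.T)   # rate-of-strain tensor
    div_v = np.trace(grad_v)          # rate of expansion of the flow
    identity = np.eye(3)
    return -p * identity + lam * div_v * identity + 2.0 * mu * eps

# Example: simple shear flow u = (gamma*y, 0, 0); grad_v[i][j] = dv_i/dx_j.
gamma = 100.0                         # shear rate, 1/s
grad_v = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.0]])
sigma = newtonian_stress(grad_v, mu=1.0e-3, lam=0.0, p=101325.0)
print(sigma[0, 1])                    # off-diagonal shear stress = mu*gamma = 0.1 Pa
```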
Both bulk viscosity and dynamic viscosity need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes one of these transport coefficients explicit in the conservation variables is called an equation of state. Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called dispersion. In some cases the second viscosity $\zeta$ can be assumed to be constant, in which case the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure: $\bar{p} = p - \zeta \, \nabla \cdot \mathbf{v}$, as demonstrated below. However, this difference is usually neglected most of the time (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming $\zeta = 0$. The assumption of setting $\zeta = 0$ is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. Finally, note that the Stokes hypothesis is less restrictive than the assumption of incompressible flow: in incompressible flow both the bulk viscosity term and the part of the shear viscosity term proportional to the divergence of the flow velocity vanish, while under the Stokes hypothesis the first term vanishes but the second one still remains. For anisotropic fluids More generally, in a non-isotropic Newtonian fluid, the coefficient that relates internal friction stresses to the spatial derivatives of the velocity field is replaced by a nine-element viscous stress tensor $\boldsymbol{\mu}$. The general formula for the friction force in a liquid states that the vector differential of the friction force equals the viscosity tensor applied to the vector product of the differential area vector of the adjoining liquid layers and the curl of the velocity: $d\mathbf{F} = \boldsymbol{\mu} \, (d\mathbf{S} \times \operatorname{rot} \mathbf{v})$, where $\boldsymbol{\mu}$ is the viscosity tensor. The diagonal components of the viscosity tensor represent the molecular viscosity of a liquid, and the off-diagonal components represent turbulent eddy viscosity. Newton's law of viscosity The following equation illustrates the relation between shear rate and shear stress for a fluid with laminar flow only in the direction x: $\tau_{xy} = \mu \frac{\partial u}{\partial y}$, where $\tau_{xy}$ is the shear stress in the components x and y, i.e. the force component in the direction x per unit surface that is normal to the direction y (so it is parallel to the direction x), $\mu$ is the dynamic viscosity, and $\frac{\partial u}{\partial y}$ is the flow velocity gradient along the direction y, which is normal to the flow velocity $u$. If viscosity does not vary with the rate of deformation, the fluid is Newtonian. Power law model The power law model is used to display the behavior of Newtonian and non-Newtonian fluids and measures shear stress as a function of strain rate. The relationship between shear stress, strain rate and the velocity gradient for the power law model is $\tau = m \left| \frac{\partial u}{\partial y} \right|^{n-1} \frac{\partial u}{\partial y}$, where $\left| \frac{\partial u}{\partial y} \right|^{n-1}$ is the absolute value of the strain rate to the (n−1) power, $\frac{\partial u}{\partial y}$ is the velocity gradient, n is the power law index, and m is the flow consistency index. If n < 1 then the fluid is a pseudoplastic. If n = 1 then the fluid is a Newtonian fluid. If n > 1 then the fluid is a dilatant. 
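A short Python sketch of the power-law model just described, with the flow consistency index m treated as an assumed input; the sample values are illustrative.

```python
# Sketch of the power-law (Ostwald-de Waele) model:
# tau = m * |du/dy|**(n-1) * (du/dy).  Values of m and du/dy are
# illustrative assumptions.

def power_law_stress(m: float, n: float, du_dy: float) -> float:
    """Shear stress for a power-law fluid; n = 1 recovers the Newtonian case."""
    return m * abs(du_dy) ** (n - 1) * du_dy

def classify(n: float) -> str:
    """Classify fluid behavior by the power-law index n."""
    if n < 1:
        return "pseudoplastic (shear-thinning)"
    if n > 1:
        return "dilatant (shear-thickening)"
    return "Newtonian"

for n in (0.5, 1.0, 1.5):
    print(n, classify(n), power_law_stress(m=1.0e-3, n=n, du_dy=100.0))
```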
Casson fluid model The relationship between the shear stress and shear rate in a Casson fluid model is defined as follows: $\sqrt{\tau} = \sqrt{\tau_0} + \alpha \sqrt{\frac{du}{dy}}$, where $\tau_0$ is the yield stress and $\alpha$ depends on protein composition; the model is commonly applied to blood, whose yield stress varies with the hematocrit number H. Examples Water, air, alcohol, glycerol, and thin motor oil are all examples of Newtonian fluids over the range of shear stresses and shear rates encountered in everyday life. Single-phase fluids made up of small molecules are generally (although not exclusively) Newtonian.
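Under the Casson relation as reconstructed above, $\sqrt{\tau} = \sqrt{\tau_0} + \alpha \sqrt{du/dy}$, a small sketch might look as follows; the yield stress and alpha values are placeholders, not measured blood parameters.

```python
# Sketch of the Casson fluid model reconstructed above:
# sqrt(tau) = sqrt(tau_0) + alpha*sqrt(du/dy).
# tau_0 and alpha below are illustrative placeholders, not blood data.
import math

def casson_stress(tau_0: float, alpha: float, du_dy: float) -> float:
    """Shear stress of a Casson fluid for a given shear rate (du/dy >= 0)."""
    return (math.sqrt(tau_0) + alpha * math.sqrt(du_dy)) ** 2

print(casson_stress(tau_0=0.005, alpha=0.06, du_dy=100.0))  # ~0.45 Pa
```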
Physical sciences
Fluid mechanics
Physics
363985
https://en.wikipedia.org/wiki/Metacentric%20height
Metacentric height
The metacentric height (GM) is a measurement of the initial static stability of a floating body. It is calculated as the distance between the centre of gravity of a ship and its metacentre. A larger metacentric height implies greater initial stability against overturning. The metacentric height also influences the natural period of rolling of a hull, with very large metacentric heights being associated with shorter periods of roll, which are uncomfortable for passengers. Hence, a sufficiently high, but not excessively high, metacentric height is considered ideal for passenger ships. Different centres The centre of buoyancy is at the centre of mass of the volume of water that the hull displaces. This point is referred to as B in naval architecture. The centre of gravity of the ship is commonly denoted as point G or CG. When a ship is at equilibrium, the centre of buoyancy is vertically in line with the centre of gravity of the ship. The metacentre is the point where the lines of action of the upward buoyancy force intersect for heel angles φ and φ ± dφ. When the ship is vertical, the metacentre lies above the centre of gravity and so moves in the opposite direction of heel as the ship rolls. The distance between the centre of gravity and the metacentre is abbreviated as GM. As the ship heels over, the centre of gravity generally remains fixed with respect to the ship because it depends only on the position of the ship's weight and cargo, but the immersed waterplane widens, increasing BMφ. Work must be done to roll a stable hull. This is converted to potential energy by raising the centre of mass of the hull with respect to the water level or by lowering the centre of buoyancy or both. This potential energy will be released in order to right the hull, and the stable attitude will be where it has the least magnitude. It is the interplay of potential and kinetic energy that results in the ship having a natural rolling frequency. For small angles, the metacentre, Mφ, moves with a lateral component, so it is no longer directly over the centre of mass. The righting couple on the ship is proportional to the horizontal distance between two equal forces. These are gravity acting downwards at the centre of mass and the same magnitude force acting upwards through the centre of buoyancy, and through the metacentre above it. The righting couple is proportional to the metacentric height multiplied by the sine of the angle of heel, hence the importance of metacentric height to stability. As the hull rights, work is done either by its centre of mass falling, or by water falling to accommodate a rising centre of buoyancy, or both. For example, when a perfectly cylindrical hull rolls, the centre of buoyancy stays on the axis of the cylinder at the same depth. However, if the centre of mass is below the axis, it will move to one side and rise, creating potential energy. Conversely, if a hull having a perfectly rectangular cross section has its centre of mass at the water line, the centre of mass stays at the same height, but the centre of buoyancy goes down as the hull heels, again storing potential energy. When setting a common reference for the centres, the molded (within the plate or planking) line of the keel (K) is generally chosen; thus, the reference heights are: KB – to the centre of buoyancy, KG – to the centre of gravity, and KMT – to the transverse metacentre. Metacentre When a ship heels (rolls sideways), the centre of buoyancy of the ship moves laterally. It might also move up or down with respect to the water line. 
The point at which a vertical line through the heeled centre of buoyancy crosses the line through the original, vertical centre of buoyancy is the metacentre. The metacentre remains directly above the centre of buoyancy by definition. In the diagram above, the two Bs show the centres of buoyancy of a ship in the upright and heeled conditions. The metacentre, M, is considered to be fixed relative to the ship for small angles of heel; however, at larger angles the metacentre can no longer be considered fixed, and its actual location must be found to calculate the ship's stability. It can be calculated using the formula $KM = KB + BM = KB + \frac{I}{V}$, where KB is the height of the centre of buoyancy above the keel, I is the second moment of area of the waterplane about the rotation axis in m⁴, and V is the volume of displacement in m³. KM is the distance from the keel to the metacentre. Stable floating objects have a natural rolling frequency, just like a weight on a spring, where the frequency is increased as the spring gets stiffer. In a boat, the equivalent of the spring stiffness is the distance called "GM" or "metacentric height", being the distance between two points: "G", the centre of gravity of the boat, and "M", which is a point called the metacentre. The position of the metacentre is determined by the ratio between the inertia resistance of the boat and the volume of the boat. (The inertia resistance is a quantified description of how the waterline width of the boat resists overturning.) Wide and shallow hulls have high transverse metacentres, whilst narrow and deep hulls have low metacentres. Ignoring the ballast, wide and shallow means that the ship is very quick to roll, and narrow and deep means that the ship is very hard to overturn and is stiff. "GM", the stiffness parameter of a boat, can be increased by lowering the centre of gravity, by changing the hull form (and thus changing the volume displaced and second moment of area of the waterplane), or both. An ideal boat strikes a balance. Very tender boats with very slow roll periods are at risk of overturning, but are comfortable for passengers. However, vessels with a higher metacentric height are "excessively stable", with a short roll period resulting in high accelerations at the deck level. Sailing yachts, especially racing yachts, are designed to be stiff, meaning the distance between the centre of mass and the metacentre is very large in order to resist the heeling effect of the wind on the sails. In such vessels, the rolling motion is not uncomfortable because of the moment of inertia of the tall mast and the aerodynamic damping of the sails. Righting arm The metacentric height is an approximation for the vessel stability at small angles (0–15 degrees) of heel. Beyond that range, the stability of the vessel is dominated by what is known as the righting moment. Depending on the geometry of the hull, naval architects must iteratively calculate the centre of buoyancy at increasing angles of heel. They then calculate the righting moment at this angle, which is determined using the equation $RM = GZ \cdot \Delta$, where RM is the righting moment, GZ is the righting arm, and $\Delta$ is the displacement. Because the vessel displacement is constant, common practice is to simply graph the righting arm versus the angle of heel. The righting arm (also known as GZ — see diagram) is the horizontal distance between the lines of buoyancy and gravity; at small angles of heel it is given by $GZ = GM \sin\varphi$. 
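As a worked illustration of $KM = KB + I/V$ and $GM = KM - KG$, here is a minimal Python sketch for a box-shaped barge; the hull dimensions and KG are invented for illustration.

```python
# Minimal sketch of KM = KB + I/V and GM = KM - KG for a box-shaped
# barge floating upright.  The dimensions and KG are illustrative.

def box_barge_gm(length: float, beam: float, draft: float, kg: float) -> float:
    """Metacentric height (m) of a rectangular barge."""
    volume = length * beam * draft            # displaced volume V, m^3
    i_waterplane = length * beam ** 3 / 12.0  # transverse second moment I, m^4
    kb = draft / 2.0                          # centre of buoyancy of a box hull
    bm = i_waterplane / volume                # metacentric radius BM = I/V
    km = kb + bm                              # keel to metacentre
    return km - kg                            # GM = KM - KG

print(box_barge_gm(length=50.0, beam=10.0, draft=2.0, kg=3.0))  # ~2.17 m
```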
There are several important factors that must be determined with regard to the righting arm/moment. These are known as the maximum righting arm/moment, the point of deck immersion, the downflooding angle, and the point of vanishing stability. The maximum righting moment is the maximum moment that could be applied to the vessel without causing it to capsize. The point of deck immersion is the angle at which the main deck will first encounter the sea. Similarly, the downflooding angle is the angle at which water will be able to flood deeper into the vessel. Finally, the point of vanishing stability is a point of unstable equilibrium. Any heel less than this angle will allow the vessel to right itself, while any heel greater than this angle will cause a negative righting moment (or heeling moment) and force the vessel to continue to roll over. When a vessel reaches a heel equal to its point of vanishing stability, any external force will cause the vessel to capsize. Sailing vessels are designed to operate with a higher degree of heel than motorized vessels, and the righting moment at extreme angles is of high importance. Monohulled sailing vessels should be designed to have a positive righting arm (the limit of positive stability) to at least 120° of heel, although many sailing yachts have stability limits down to 90° (mast parallel to the water surface). As the displacement of the hull at any particular degree of list is not proportional to the angle of heel, calculations can be difficult, and the concept was not introduced formally into naval architecture until about 1970. Stability GM and rolling period The metacentric height has a direct relationship with a ship's rolling period. A ship with a small GM will be "tender" and have a long roll period. An excessively low or negative GM increases the risk of a ship capsizing in rough weather, as with HMS Captain or the Vasa. It also puts the vessel at risk of potentially large angles of heel if the cargo or ballast shifts, as with the Cougar Ace. A ship with low GM is less safe if damaged and partially flooded because the lower metacentric height leaves less safety margin. For this reason, maritime regulatory agencies such as the International Maritime Organization specify minimum safety margins for seagoing vessels. A larger metacentric height, on the other hand, can cause a vessel to be too "stiff"; excessive stability is uncomfortable for passengers and crew. This is because the stiff vessel quickly responds to the sea as it attempts to assume the slope of the wave. An overly stiff vessel rolls with a short period and high amplitude, which results in high angular acceleration. This increases the risk of damage to the ship and to cargo, and may cause excessive roll in special circumstances where the wave encounter period coincides with the natural roll period of the ship. Roll damping by bilge keels of sufficient size will reduce the hazard. Criteria for this dynamic stability effect remain to be developed. In contrast, a "tender" ship lags behind the motion of the waves and tends to roll at lesser amplitudes. A passenger ship will typically have a long rolling period for comfort, perhaps 12 seconds, while a tanker or freighter might have a rolling period of 6 to 8 seconds. The period of roll can be estimated from the equation $T = \frac{2\pi \sqrt{k^2 + a_{44}^2}}{\sqrt{g \, \overline{GM}}}$, where g is the gravitational acceleration, $a_{44}$ is the added radius of gyration, k is the radius of gyration about the longitudinal axis through the centre of gravity, and $\overline{GM}$ is the stability index (the metacentric height). 
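A small Python sketch of the roll-period estimate as reconstructed above; the radii of gyration and GM values are illustrative, not data for a real ship.

```python
# Sketch of the roll-period estimate reconstructed above:
# T = 2*pi*sqrt(k**2 + a44**2) / sqrt(g*GM).  A larger GM gives a
# stiffer ship and a shorter roll period.  Values are illustrative.
import math

def roll_period(k: float, a44: float, gm: float, g: float = 9.81) -> float:
    """Estimated natural roll period (s) of a vessel."""
    return 2.0 * math.pi * math.sqrt(k ** 2 + a44 ** 2) / math.sqrt(g * gm)

# A stiff ship (large GM) vs a tender ship (small GM):
print(roll_period(k=6.0, a44=1.5, gm=2.0))   # ~8.8 s
print(roll_period(k=6.0, a44=1.5, gm=0.5))   # ~17.5 s
```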
Damaged stability If a ship floods, the loss of stability is caused by the increase in KB, the height of the centre of buoyancy, and the loss of waterplane area (and thus a loss of the waterplane moment of inertia), which decreases the metacentric height. This additional mass will also reduce freeboard (distance from water to the deck) and the ship's downflooding angle (minimum angle of heel at which water will be able to flow into the hull). The range of positive stability will be reduced to the angle of downflooding, resulting in a reduced righting lever. When the vessel is inclined, the fluid in the flooded volume will move to the lower side, shifting its centre of gravity toward the list, further increasing the heeling moment. This is known as the free surface effect. Free surface effect In tanks or spaces that are partially filled with a fluid or semi-fluid (fish, ice, or grain, for example), as the tank is inclined the surface of the liquid, or semi-fluid, stays level. This results in a displacement of the centre of gravity of the tank or space relative to the overall centre of gravity. The effect is similar to that of carrying a large flat tray of water. When an edge is tipped, the water rushes to that side, which exacerbates the tip even further. The significance of this effect is proportional to the cube of the width of the tank or compartment, so two baffles separating the area into thirds will reduce the displacement of the centre of gravity of the fluid by a factor of 9. This is of significance in ship fuel tanks or ballast tanks, tanker cargo tanks, and in flooded or partially flooded compartments of damaged ships. Another worrying feature of the free surface effect is that a positive feedback loop can be established, in which the period of the roll is equal or almost equal to the period of the motion of the centre of gravity in the fluid, resulting in each roll increasing in magnitude until the loop is broken or the ship capsizes. This has been significant in historic capsizes, most notably those of the MS Herald of Free Enterprise and the MS Estonia. Transverse and longitudinal metacentric heights There is also a similar consideration in the movement of the metacentre forward and aft as a ship pitches. Metacentres are usually separately calculated for transverse (side to side) rolling motion and for lengthwise longitudinal pitching motion. These are variously known as the transverse and longitudinal metacentric heights, GM(t) and GM(l), or sometimes GMt and GMl. Technically, there are different metacentric heights for any combination of pitch and roll motion, depending on the moment of inertia of the waterplane area of the ship around the axis of rotation under consideration, but they are normally only calculated and stated as specific values for the limiting pure pitch and roll motion. Measurement The metacentric height is normally estimated during the design of a ship but can be determined by an inclining test once it has been built. This can also be done when a ship or offshore floating platform is in service. It can be calculated by theoretical formulas based on the shape of the structure. The angle(s) obtained during the inclining experiment are directly related to GM. By means of the inclining experiment, the 'as-built' centre of gravity can be found: obtaining GM and KM by experimental measurement (by means of pendulum swing measurements and draft readings), the height of the centre of gravity KG can be found. So KM and GM become the known variables during inclining, and KG is the calculated variable (KG = KM − GM).
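The inclining experiment can be illustrated with the standard relation $GM = \frac{w \cdot d}{\Delta \tan\theta}$, where a known weight w shifted transversely by a distance d heels the ship by an angle θ; the numbers in the following Python sketch are invented for illustration.

```python
# Sketch of the inclining-test relation GM = (w*d) / (displacement*tan(theta)):
# a known weight w moved transversely by d heels the ship by theta.
# All numbers are invented for illustration.
import math

def gm_from_inclining_test(weight_t: float, shift_m: float,
                           displacement_t: float, heel_deg: float) -> float:
    """Metacentric height (m) inferred from an inclining experiment."""
    return (weight_t * shift_m) / (displacement_t * math.tan(math.radians(heel_deg)))

gm = gm_from_inclining_test(weight_t=10.0, shift_m=8.0,
                            displacement_t=5000.0, heel_deg=1.0)
print(gm)  # ~0.92 m
# With KM known from the hull geometry, KG then follows as KG = KM - GM.
```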
Physical sciences
Fluid mechanics
Physics
364380
https://en.wikipedia.org/wiki/Quantum%20foam
Quantum foam
Quantum foam (or spacetime foam, or spacetime bubble) is a theoretical quantum fluctuation of spacetime on very small scales due to quantum mechanics. The theory predicts that at this small scale, particles of matter and antimatter are constantly created and destroyed. These subatomic objects are called virtual particles. The idea was devised by John Wheeler in 1955. Background With an incomplete theory of quantum gravity, it is impossible to be certain what spacetime looks like at small scales. However, there is no definitive reason that spacetime needs to be fundamentally smooth. It is possible that instead, in a quantum theory of gravity, spacetime would consist of many small, ever-changing regions in which space and time are not definite, but fluctuate in a foam-like manner. Wheeler suggested that the uncertainty principle might imply that over sufficiently small distances and sufficiently brief intervals of time, the "very geometry of spacetime fluctuates". These fluctuations could be large enough to cause significant departures from the smooth spacetime seen at macroscopic scales, giving spacetime a "foamy" character. Experimental results The experimental verification of the Casimir effect, which can be explained in terms of virtual particles, is strong evidence for their existence. The g−2 experiments, which measure the anomalous magnetic moments of muons and electrons, also support their existence. In 2005, during observations of gamma-ray photons arriving from the blazar Markarian 501, MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes detected that some of the photons at different energy levels arrived at different times, suggesting that some of the photons had moved more slowly and thus were in violation of special relativity's notion that the speed of light is constant, a discrepancy which could be explained by the irregularity of quantum foam. Subsequent experiments were, however, unable to confirm the supposed variation in the speed of light due to graininess of space. Other experiments involving the polarization of light from distant gamma ray bursts have also produced contradictory results. More Earth-based experiments are ongoing or proposed. Constraints on the size of quantum fluctuations The fluctuations characteristic of a spacetime foam would be expected to occur on a length scale on the order of the Planck length ($\approx 10^{-35}$ m), but some models of quantum gravity predict much larger fluctuations. Photons should be slowed by quantum foam, with the rate depending on the wavelength of the photons. This would violate Lorentz invariance. But observations of radiation from nearby quasars by Floyd Stecker of NASA's Goddard Space Flight Center failed to find evidence of violation of Lorentz invariance. A foamy spacetime also sets limits on the accuracy with which distances can be measured, because photons should diffuse randomly through a spacetime foam, similar to light diffusing by passing through fog. This should cause the image quality of very distant objects observed through telescopes to degrade.
X-ray and gamma-ray observations of quasars using NASA's Chandra X-ray Observatory, the Fermi Gamma-ray Space Telescope and ground-based gamma-ray observations from the Very Energetic Radiation Imaging Telescope Array (VERITAS) showed no detectable degradation at the farthest observed distances, implying that spacetime is smooth at least down to distances 1000 times smaller than the nucleus of a hydrogen atom, setting a bound on the size of quantum fluctuations of spacetime. Relation to other theories The vacuum fluctuations provide the vacuum with a non-zero energy known as vacuum energy. Spin foam theory is a modern attempt to make Wheeler's idea quantitative.
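As a quick check on the scale quoted above, the Planck length follows from the standard constants via ℓP = √(ħG/c³); this is ordinary dimensional bookkeeping, not a claim about any particular model of quantum gravity.

```python
import math

# CODATA values in SI units
hbar = 1.054571817e-34  # reduced Planck constant (J s)
G = 6.67430e-11         # Newtonian constant of gravitation (m^3 kg^-1 s^-2)
c = 299792458.0         # speed of light (m/s)

# Planck length: the scale on which spacetime-foam fluctuations are expected
l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.3e} m")  # ~1.616e-35 m, i.e. the ~10^-35 m quoted above
```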
Physical sciences
Quantum mechanics
Physics
364478
https://en.wikipedia.org/wiki/Absorption%20spectroscopy
Absorption spectroscopy
Absorption spectroscopy is spectroscopy that involves techniques that measure the absorption of electromagnetic radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e., photons, from the radiating field. The intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum. Absorption spectroscopy is performed across the electromagnetic spectrum. Absorption spectroscopy is employed as an analytical chemistry tool to determine the presence of a particular substance in a sample and, in many cases, to quantify the amount of the substance present. Infrared and ultraviolet–visible spectroscopy are particularly common in analytical applications. Absorption spectroscopy is also employed in studies of molecular and atomic physics, astronomical spectroscopy and remote sensing. There is a wide range of experimental approaches for measuring absorption spectra. The most common arrangement is to direct a generated beam of radiation at a sample and detect the intensity of the radiation that passes through it. The transmitted energy can be used to calculate the absorption. The source, sample arrangement and detection technique vary significantly depending on the frequency range and the purpose of the experiment. The major types of absorption spectroscopy are distinguished by the region of the electromagnetic spectrum employed. Absorption spectrum A material's absorption spectrum is the fraction of incident radiation absorbed by the material over a range of frequencies of electromagnetic radiation. The absorption spectrum is primarily determined by the atomic and molecular composition of the material. Radiation is more likely to be absorbed at frequencies that match the energy difference between two quantum mechanical states of the molecules. The absorption that occurs due to a transition between two states is referred to as an absorption line and a spectrum is typically composed of many lines. The frequencies at which absorption lines occur, as well as their relative intensities, primarily depend on the electronic and molecular structure of the sample. The frequencies will also depend on the interactions between molecules in the sample, the crystal structure in solids, and on several environmental factors (e.g., temperature, pressure, electric field, magnetic field). The lines will also have a width and shape that are primarily determined by the spectral density or the density of states of the system. Theory Absorption lines are typically classified by the nature of the quantum mechanical change induced in the molecule or atom. Rotational lines, for instance, occur when the rotational state of a molecule is changed. Rotational lines are typically found in the microwave spectral region. Vibrational lines correspond to changes in the vibrational state of the molecule and are typically found in the infrared region. Electronic lines correspond to a change in the electronic state of an atom or molecule and are typically found in the visible and ultraviolet region. X-ray absorptions are associated with the excitation of inner shell electrons in atoms. These changes can also be combined (e.g. rotation–vibration transitions), leading to new absorption lines at the combined energy of the two changes. The energy associated with the quantum mechanical change primarily determines the frequency of the absorption line but the frequency can be shifted by several types of interactions.
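The assignment of rotational, vibrational, electronic and X-ray lines to spectral regions follows from the photon-energy relation E = hc/λ. A minimal sketch (the wavelengths below are typical illustrative values, not sharp boundaries):

```python
H = 6.62607015e-34    # Planck constant (J s)
C = 299792458.0       # speed of light (m/s)
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c/lambda, in electronvolts."""
    return H * C / wavelength_m / EV

# Typical wavelengths for the transition types discussed above
for name, lam in [("rotational (microwave)", 1e-2),
                  ("vibrational (infrared)", 5e-6),
                  ("electronic (visible/UV)", 4e-7),
                  ("inner-shell (X-ray)", 1e-10)]:
    print(f"{name}: {photon_energy_ev(lam):.2e} eV")
```

The printed energies span roughly ten orders of magnitude, which is why each class of transition is probed in a different spectral region.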
Electric and magnetic fields can cause a shift. Interactions with neighboring molecules can cause shifts. For instance, absorption lines of the gas phase molecule can shift significantly when that molecule is in a liquid or solid phase and interacting more strongly with neighboring molecules. The width and shape of absorption lines are determined by the instrument used for the observation, the material absorbing the radiation and the physical environment of that material. It is common for lines to have the shape of a Gaussian or Lorentzian distribution. It is also common for a line to be described solely by its intensity and width instead of the entire shape being characterized. The integrated intensity—obtained by integrating the area under the absorption line—is proportional to the amount of the absorbing substance present. The intensity is also related to the temperature of the substance and the quantum mechanical interaction between the radiation and the absorber. This interaction is quantified by the transition moment and depends on the particular lower state the transition starts from, and the upper state it is connected to. The width of absorption lines may be determined by the spectrometer used to record it. A spectrometer has an inherent limit on how narrow a line it can resolve and so the observed width may be at this limit. If the width is larger than the resolution limit, then it is primarily determined by the environment of the absorber. A liquid or solid absorber, in which neighboring molecules strongly interact with one another, tends to have broader absorption lines than a gas. Increasing the temperature or pressure of the absorbing material will also tend to increase the line width. It is also common for several neighboring transitions to be close enough to one another that their lines overlap and the resulting overall line is therefore broader yet. Relation to transmission spectrum Absorption and transmission spectra represent equivalent information and one can be calculated from the other through a mathematical transformation. A transmission spectrum will have its maximum intensities at wavelengths where the absorption is weakest because more light is transmitted through the sample. An absorption spectrum will have its maximum intensities at wavelengths where the absorption is strongest. Relation to emission spectrum Emission is a process by which a substance releases energy in the form of electromagnetic radiation. Emission can occur at any frequency at which absorption can occur, and this allows the absorption lines to be determined from an emission spectrum. The emission spectrum will typically have a quite different intensity pattern from the absorption spectrum, though, so the two are not equivalent. The absorption spectrum can be calculated from the emission spectrum using Einstein coefficients. Relation to scattering and reflection spectra The scattering and reflection spectra of a material are influenced by both its refractive index and its absorption spectrum. In an optical context, the absorption spectrum is typically quantified by the extinction coefficient, and the extinction and index coefficients are quantitatively related through the Kramers–Kronig relations. Therefore, the absorption spectrum can be derived from a scattering or reflection spectrum. This typically requires simplifying assumptions or models, and so the derived absorption spectrum is an approximation. 
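As a concrete illustration of the absorption/transmission equivalence noted above: for decadic absorbance the transformation is simply A = -log10 T, inverted by T = 10^(-A). A minimal sketch:

```python
import math

def absorbance_from_transmittance(t):
    """Decadic absorbance A = -log10(T), with T = I/I0 between 0 and 1."""
    return -math.log10(t)

def transmittance_from_absorbance(a):
    """Inverse transformation: T = 10**(-A)."""
    return 10.0 ** (-a)

print(absorbance_from_transmittance(0.10))  # 1.0: 10% transmission = 1 absorbance unit
print(transmittance_from_absorbance(2.0))   # 0.01: A = 2 means 1% transmitted
```

The sign flip in the logarithm is exactly why transmission maxima sit at absorption minima and vice versa.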
Applications Absorption spectroscopy is useful in chemical analysis because of its specificity and its quantitative nature. The specificity of absorption spectra allows compounds to be distinguished from one another in a mixture, making absorption spectroscopy useful in a wide variety of applications. For instance, infrared gas analyzers can be used to identify the presence of pollutants in the air, distinguishing the pollutant from nitrogen, oxygen, water, and other expected constituents. The specificity also allows unknown samples to be identified by comparing a measured spectrum with a library of reference spectra. In many cases, it is possible to determine qualitative information about a sample even if it is not in a library. Infrared spectra, for instance, have characteristic absorption bands that indicate if carbon-hydrogen or carbon-oxygen bonds are present. An absorption spectrum can be quantitatively related to the amount of material present using the Beer–Lambert law. Determining the absolute concentration of a compound requires knowledge of the compound's absorption coefficient. The absorption coefficient for some compounds is available from reference sources, and it can also be determined by measuring the spectrum of a calibration standard with a known concentration of the target. Remote sensing One of the unique advantages of spectroscopy as an analytical technique is that measurements can be made without bringing the instrument and sample into contact. Radiation that travels between a sample and an instrument will contain the spectral information, so the measurement can be made remotely. Remote spectral sensing is valuable in many situations. For example, measurements can be made in toxic or hazardous environments without placing an operator or instrument at risk. Also, sample material does not have to be brought into contact with the instrument—preventing possible cross contamination. Remote spectral measurements present several challenges compared to laboratory measurements. The space in between the sample of interest and the instrument may also have spectral absorptions. These absorptions can mask or confound the absorption spectrum of the sample. These background interferences may also vary over time. The source of radiation in remote measurements is often an environmental source, such as sunlight or the thermal radiation from a warm object, and this makes it necessary to distinguish spectral absorption from changes in the source spectrum. To simplify these challenges, differential optical absorption spectroscopy has gained some popularity, as it focusses on differential absorption features and omits broad-band absorption such as aerosol extinction and extinction due to Rayleigh scattering. This method is applied to ground-based, airborne, and satellite-based measurements. Some ground-based methods provide the possibility to retrieve tropospheric and stratospheric trace gas profiles. Astronomy Astronomical spectroscopy is a particularly significant type of remote spectral sensing. In this case, the objects and samples of interest are so distant from Earth that electromagnetic radiation is the only means available to measure them. Astronomical spectra contain both absorption and emission spectral information. Absorption spectroscopy has been particularly important for understanding interstellar clouds and determining that some of them contain molecules. Absorption spectroscopy is also employed in the study of extrasolar planets.
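A minimal sketch of the Beer–Lambert quantitation step described above, inverting A = ε·l·c for the concentration; the molar absorptivity in the example is a made-up illustrative value, not a tabulated constant:

```python
def concentration_from_absorbance(absorbance, epsilon, path_cm):
    """Invert the Beer-Lambert law A = epsilon * l * c for the concentration c.

    absorbance -- measured (dimensionless) decadic absorbance A
    epsilon    -- molar absorptivity (L mol^-1 cm^-1), e.g. from a calibration standard
    path_cm    -- optical path length l of the cuvette (cm)
    """
    return absorbance / (epsilon * path_cm)

# Hypothetical example: A = 0.45 in a 1 cm cuvette, epsilon = 15000 L/(mol cm)
print(concentration_from_absorbance(0.45, 15_000, 1.0))  # 3e-05 mol/L
```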
Detection of extrasolar planets by transit photometry also measures their absorption spectrum and allows for the determination of the planet's atmospheric composition, temperature, pressure, and scale height, and hence also of the planet's mass. Atomic and molecular physics Theoretical models, principally quantum mechanical models, allow for the absorption spectra of atoms and molecules to be related to other physical properties such as electronic structure, atomic or molecular mass, and molecular geometry. Therefore, measurements of the absorption spectrum are used to determine these other properties. Microwave spectroscopy, for example, allows for the determination of bond lengths and angles with high precision. In addition, spectral measurements can be used to determine the accuracy of theoretical predictions. For example, the Lamb shift measured in the hydrogen atomic absorption spectrum was not expected to exist at the time it was measured. Its discovery spurred and guided the development of quantum electrodynamics, and measurements of the Lamb shift are now used to determine the fine-structure constant. Experimental methods Basic approach The most straightforward approach to absorption spectroscopy is to generate radiation with a source, measure a reference spectrum of that radiation with a detector and then re-measure the sample spectrum after placing the material of interest in between the source and detector. The two measured spectra can then be combined to determine the material's absorption spectrum. The sample spectrum alone is not sufficient to determine the absorption spectrum because it will be affected by the experimental conditions—the spectrum of the source, the absorption spectra of other materials between the source and detector, and the wavelength dependent characteristics of the detector. The reference spectrum will be affected in the same way, though, by these experimental conditions and therefore the combination yields the absorption spectrum of the material alone. A wide variety of radiation sources are employed in order to cover the electromagnetic spectrum. For spectroscopy, it is generally desirable for a source to cover a broad swath of wavelengths in order to measure a broad region of the absorption spectrum. Some sources inherently emit a broad spectrum. Examples of these include globars or other black body sources in the infrared, mercury lamps in the visible and ultraviolet, and X-ray tubes. One more recently developed source of broad spectrum radiation is synchrotron radiation, which covers all of these spectral regions. Other radiation sources generate a narrow spectrum, but the emission wavelength can be tuned to cover a spectral range. Examples of these include klystrons in the microwave region and lasers across the infrared, visible, and ultraviolet region (though not all lasers have tunable wavelengths). The detector employed to measure the radiation power will also depend on the wavelength range of interest. Most detectors are sensitive to a fairly broad spectral range and the sensor selected will often depend more on the sensitivity and noise requirements of a given measurement. Examples of detectors common in spectroscopy include heterodyne receivers in the microwave, bolometers in the millimeter-wave and infrared, mercury cadmium telluride and other cooled semiconductor detectors in the infrared, and photodiodes and photomultiplier tubes in the visible and ultraviolet.
If both the source and the detector cover a broad spectral region, then it is also necessary to introduce a means of resolving the wavelength of the radiation in order to determine the spectrum. Often a spectrograph is used to spatially separate the wavelengths of radiation so that the power at each wavelength can be measured independently. It is also common to employ interferometry to determine the spectrum—Fourier transform infrared spectroscopy is a widely used implementation of this technique. Two other issues that must be considered in setting up an absorption spectroscopy experiment include the optics used to direct the radiation and the means of holding or containing the sample material (called a cuvette or cell). For most UV, visible, and NIR measurements the use of precision quartz cuvettes is necessary. In both cases, it is important to select materials that have relatively little absorption of their own in the wavelength range of interest. The absorption of other materials could interfere with or mask the absorption from the sample. For instance, in several wavelength ranges it is necessary to measure the sample under vacuum or in a noble gas environment because gases in the atmosphere have interfering absorption features. Specific approaches Astronomical spectroscopy Cavity ring-down spectroscopy (CRDS) Laser absorption spectrometry (LAS) Mössbauer spectroscopy Photoacoustic spectroscopy Photoemission spectroscopy Photothermal optical microscopy Photothermal spectroscopy Reflectance spectroscopy Reflection-absorption infrared spectroscopy (RAIRS) Total absorption spectroscopy (TAS) Tunable diode laser absorption spectroscopy (TDLAS) X-ray absorption fine structure (XAFS) X-ray absorption near edge structure (XANES)
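The basic reference-then-sample procedure described above amounts to a per-wavelength ratio, since the source spectrum and detector response are common to both measurements and cancel. A sketch with synthetic numbers (a real experiment would supply these arrays from the instrument):

```python
import numpy as np

def absorption_spectrum(i_sample, i_reference):
    """Per-wavelength decadic absorbance from sample and reference intensities.

    Source spectrum and detector response affect both measurements equally,
    so they cancel in the ratio i_sample / i_reference.
    """
    i_sample = np.asarray(i_sample, dtype=float)
    i_reference = np.asarray(i_reference, dtype=float)
    return -np.log10(i_sample / i_reference)

# Synthetic example: the sample absorbs strongly in one of four channels
reference = [1000.0, 1000.0, 1000.0, 1000.0]
sample = [980.0, 950.0, 100.0, 990.0]
print(absorption_spectrum(sample, reference))  # peak absorbance of 1.0 in channel 3
```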
Physical sciences
Analytical chemistry
null
364625
https://en.wikipedia.org/wiki/Muscovy%20duck
Muscovy duck
The Muscovy duck (Cairina moschata) is a duck native to the Americas, from the Rio Grande Valley of Texas and Mexico south to Argentina and Uruguay. Feral Muscovy ducks are found in New Zealand, Australia, and in Central and Eastern Europe. Small wild and feral breeding populations have also established themselves in the United States, particularly in Florida, Louisiana, Massachusetts, the Big Island of Hawaii, as well as in many other parts of North America, including southern Canada. It is a large duck; males are substantially larger than females, which reach only roughly half the males' size. The bird is predominantly black and white, with the back feathers being iridescent and glossy in males, while the females are more drab. The amount of white on the neck and head is variable, as is the colour of the bill, which can be yellow, pink, black, or any mixture of these colors. It may have white patches or bars on the wings, which become more noticeable during flight. Both sexes have pink or red wattles around the bill, those of the male being larger and more brightly colored. Although the Muscovy duck is a tropical bird, it adapts well to cooler climates, thriving in cold weather and able to survive even harsher conditions. In general, Barbary duck is the term used for C. moschata in a culinary context. The domestic subspecies, Cairina moschata domestica, is commonly known in Spanish as the pato criollo. They have been bred since pre-Columbian times by Native Americans and are heavier and less able to fly long distances than the wild subspecies. Their plumage color is also more variable. Other names for the domestic breed in Spanish are pato casero ("household duck") and pato mudo ("mute duck"). Description All Muscovy ducks have long claws on their feet and a wide, flat tail. The domestic drake (male) is considerably longer and heavier than the domestic hen (female), and large domesticated birds of both sexes can grow heavier still. The true wild Muscovy duck, from which all domestic Muscovies originated, is blackish, with large white wing patches, and is smaller and lighter than the domestic breed. On the head, the wild male has a short crest on the nape. The bill is black with a speckling of pale pink. A blackish or dark red knob can be seen at the bill base, which is similar in colour to the bare skin of the face. The eyes are yellowish-brown. The legs and webbed feet are blackish. The wild female is similar in plumage, but much smaller, with a feathered face and lacking the prominent knob. The juvenile is duller overall, with little or no white on the upperwing. Domesticated birds may look similar; most are dark brown or black mixed with white, particularly on the head. Other colors, such as lavender or all-white, are also seen. Both sexes have a nude black-and-red or all-red face; the drake also has pronounced caruncles at the base of the bill and a low erectile crest of feathers. C. moschata ducklings are mostly yellow with buff-brown markings on the tail and wings. For a while after hatching, juveniles lack the distinctive wattles associated with adult individuals, and resemble the offspring of various other ducks, such as mallards. Some domesticated ducklings have a dark head and blue eyes, others a light brown crown and dark markings on their nape. They are agile and speedy precocial birds. The drake has a low breathy call, and the hen a quiet trilling coo.
The karyotype of the Muscovy duck is 2n=80, consisting of three pairs of macrochromosomes, 36 pairs of microchromosomes, and a pair of sex chromosomes. The two largest macrochromosome pairs are submetacentric, while all other chromosomes are acrocentric or probably telocentric for the smallest microchromosomes. The submetacentric chromosomes and the Z (female) chromosome show rather little constitutive heterochromatin (C bands), while the W chromosomes are at least two-thirds heterochromatin. Male Muscovy ducks have helical penises that become fully erect within a fraction of a second. Females have vaginas that coil in the opposite direction, which appear to have evolved to limit forced copulation by males. Etymology Common name "Muscovy" "Muscovy" is an old name for the region of Russia surrounding Moscow, but these ducks are neither native there nor were they introduced there before they became known in Western Europe. It is not quite clear how the term came about; it very likely originated between 1550 and 1600, but did not become widespread until somewhat later. In one suggestion, it has been claimed that the Company of Merchant Adventurers to New Lands traded these ducks to Europe occasionally after 1550; this chartered company eventually became known as the "Muscovy Company" or "Muscovite Company", so the ducks might thus have come to be called "Muscovite ducks" or "Muscovy ducks" in keeping with the common practice of attaching the importer's name to the products they sold. But while the Muscovite Company initiated vigorous trade with Russia, they hardly, if at all, traded produce from the Americas; thus, they are unlikely to have traded C. moschata to a significant extent. Alternatively—just as in the "turkey" (which is also from North America, not Turkey) and the "guineafowl" (which are not limited to Guinea)—"Muscovy" might be simply a generic term for an exotic place, in reference to the singular appearance of these birds. This is evidenced by other names suggesting the species came from lands where it is not actually native, but from where much "outlandish" produce was imported at that time (see below). Yet another view—not incompatible with either of those discussed above—connects the species with the Muisca, a Native American nation in today's Colombia. The duck is native to these lands also, and it is likely that it was kept by the Muisca as a domestic animal to some extent. It is conceivable that a term like "Muisca duck", hard to comprehend for the average European of those times, would be corrupted into something more familiar. Likewise, the Miskito Indians of the Miskito Coast in Nicaragua and Honduras heavily relied on it as a domestic species, and the ducks as well may have been named after this region. Species name "moschata" Linnaeus' description of Anas moschata only consists of a curt but entirely unequivocal [Anas] facie nuda papillosa ("A duck with a naked and carunculated face"), and his primary reference is his earlier work Fauna Svecica. But Linnaeus refers also to older sources, wherein much information on the origin of the common name is found. Conrad Gessner is given by Linnaeus as a source, but the Historia animalium mentions the Muscovy duck only in passing. Ulisse Aldrovandi discusses the species in detail, referring to the wild birds and its domestic breeds variously as anas cairina, anas indica or anas libyca – "duck from Cairo", "Indian duck" (in reference to the West Indies) or "Libyan duck".
But his anas indica (based, like Gessner's brief discussion, ultimately on the reports of Christopher Columbus's travels) also seems to have included another species, perhaps a whistling-duck (Dendrocygna). Already, however, the species was tied to some more or less nondescript "exotic" locality – "Libya" could still refer to any place in Northern Africa at that time – where it did not natively occur. Francis Willughby discusses "The Muscovy duck" as anas moschata and expresses his belief that Aldrovandi's and Gessner's anas cairina, anas indica and anas libyca (which he calls "The Guiny duck", adding another mistaken place of origin to the list) refer to the very same species. Finally, John Ray attempts to clear up the confusion by providing an alternative explanation for the name's etymology: In English, it is called The Muscovy-Duck, though this is not transferred from Muscovia [the Neo-Latin name of Muscovy], but from the rather strong musk odour it exudes. Linnaeus came to witness the birds' "gamey" aroma first-hand, as he attests in the Fauna Svecica and again in the travelogue of his 1746 Västergötland excursion. Similarly, the Russian name of this species, muskusnaya utka (Мускусная утка), means "musk duck" – without any reference to Moscow – as do the Bokmål and Danish moskusand, Dutch muskuseend, Finnish myskisorsa, French canard musqué, German Moschusente, Italian anatra muschiata, Spanish pato almizclado and Swedish myskand. In English, however, musk duck refers to the Australian species Biziura lobata. Genus name "Cairina" The currently assigned genus name Cairina, meanwhile, traces its origin to Aldrovandi and the mistaken belief that the birds came from Egypt: translated, the current scientific name of the Muscovy duck means "the musky one from Cairo". Other names In some regions the name "Barbary duck" is used for domestic and "Muscovy duck" for wild birds; in other places, "Barbary duck" refers specifically to the dressed carcass, while "Muscovy duck" applies to living C. moschata, regardless of whether they are wild or domestic. In general, "Barbary duck" is the usual term for C. moschata in a culinary context. Taxonomy and systematics The species was first scientifically described by Carl Linnaeus in his 1758 edition of Systema Naturae as Anas moschata, literally meaning "musk duck". It was later transferred to the genus Cairina, making its current binomial name Cairina moschata. The Muscovy duck was formerly placed in the paraphyletic "perching duck" assemblage, but subsequently moved to the dabbling duck subfamily (Anatinae). Analysis of the mtDNA sequences of the cytochrome b and NADH dehydrogenase subunit 2 genes, however, indicates that it might be closer to the genus Aix and better placed in the shelduck subfamily Tadorninae. In addition, the other species of Cairina, the rare white-winged duck (C. scutulata), seems to belong to a distinct genus (Asarcornis). Ecology This non-migratory species normally inhabits forested swamps, lakes, streams and nearby grassland and farm crops, and often roosts in trees at night. The Muscovy duck's diet consists of plant material (such as the roots, stems, leaves, and seeds of aquatic plants and grasses, as well as terrestrial plants, including agricultural crops) obtained by grazing or dabbling in shallow water, and small fish, amphibians, reptiles, crustaceans, spiders, insects, millipedes, and worms. This is an aggressive duck; males often fight over food, territory or mates. The females fight with each other less often.
Some adults will peck at the ducklings if they are eating at the same food source. The Muscovy duck has benefited from nest boxes in Mexico, but is somewhat uncommon in much of the eastern part of its range due to excessive hunting. It is not considered a globally threatened species by the IUCN, however, as it is widely distributed. Reproduction This species, like the mallard, does not form stable pairs. They will mate on land or in water. Domestic Muscovy ducks can breed up to three times each year. The hen lays a clutch of 8–16 white eggs, usually in a tree hole or hollow, which are incubated for 35 days. The sitting hen will leave the nest once a day for 20 minutes to one and a half hours, during which she will defecate, drink water, eat and sometimes bathe. Once the eggs begin to hatch, it may take 24 hours for all the chicks to break through their shells. When feral chicks are born, they usually stay with their mother for about 10–12 weeks. Their bodies cannot produce all the heat they need, especially in temperate regions, so they will stay close to the mother, especially at night. Often, the drake will stay in close contact with the brood for several weeks. The male will walk with the young during their normal travels in search for food, providing protection. Anecdotal evidence from East Anglia, U.K. suggests that, in response to different environmental conditions, other adults assist in protecting chicks and providing warmth at night. It has been suggested that this is in response to local efforts to cull the eggs, which has led to an atypical distribution of males and females, as well as young and mature birds. For the first few weeks of their lives, Muscovy chicks feed on grains, corn, grass, insects, and almost anything that moves. Their mother instructs them at an early age how to feed. Feral bird Feral Muscovy ducks can breed near urban and suburban lakes and on farms, nesting in tree cavities or on the ground, under shrubs in yards, on apartment balconies, or under roof overhangs. Some feral populations, such as that in southern Florida, have a reputation of becoming pests on occasion. At night they often sleep on the water, if a water source is available, so that they can flee quickly from predators if awakened. Small populations of Muscovy ducks can also be found in Ely, Cambridgeshire, Calstock, Cornwall, and Lincoln, Lincolnshire, U.K. Muscovy ducks have also been spotted in the Walsall Arboretum. There has been a small population in the Pavilion Gardens public park in Buxton, Derbyshire for many years. In the U.S., Muscovy ducks are considered a non-native species. An owner may raise them for food production only (not for hunting). Similarly, if the ducks have no owner, 50 CFR Part 21 (Migratory Bird Permits) allows the removal or destruction of the ducks, their eggs and their nests anywhere in the United States outside of Hidalgo, Starr and Zapata Counties in Texas, where they are considered indigenous. The population in southern Florida, numbering in the several thousands, is considered established enough to be "countable" for bird watchers. Legal methods to restrict breeding include not feeding these ducks, deterring them with noise or chasing them away. Although legislation prohibiting trade of Muscovy ducks was passed in the U.S., the Fish and Wildlife Service intends to revise the regulations and is not currently implementing them, though release of Muscovy ducks to the wild outside their natural range is prohibited.
Domestication Muscovy ducks had been domesticated by various Native American cultures in the Americas when Columbus arrived in the Bahamas. A few were brought onto Columbus's ship, the Santa Maria, and the species reached Europe by the 16th century. The Muscovy duck has been domesticated for centuries, and is widely traded as "Barbary duck". Muscovy breeds are popular because they have stronger-tasting meat—sometimes compared to roast beef—than the usual domestic ducks, which are descendants of the mallard (Anas platyrhynchos). The meat is lean when compared to the fatty meat of mallard-derived ducks, its leanness and tenderness being often compared to veal. Muscovy ducks are also less noisy, and are sometimes marketed as a "quackless" duck; even though they are not completely silent, they do not actually quack (except in cases of extreme stress). Many backyard duck owners report that Muscovy ducks have more personality than mallard-derived ducks, often comparing them to dogs for their tameness and willingness to approach owners for food or stroking. The carcass of a Muscovy duck is also much heavier than most other domestic ducks, which makes it ideal for the dinner table. Domesticated Muscovy ducks often have plumage features differing from those of wild birds. White breeds are preferred for meat production, as darker ones can have much melanin in the skin, which some people find unappealing. The Muscovy duck can be crossed with mallards in captivity to produce hybrids known as mulards ("mule ducks") because, like mules, they are sterile. Muscovy drakes are commercially crossed with mallard-derived hens either naturally or by artificial insemination. The 40–60% of eggs that are fertile result in birds raised only for their meat or for production of foie gras: they grow fast like mallard-derived breeds, but to a large size like Muscovy ducks. Conversely, though crossing mallard-derived drakes with Muscovy hens is possible, the offspring are desirable neither for meat nor for egg production. In addition, Muscovy ducks are reportedly crossbred in Israel with mallards to produce kosher duck products. The kashrut status of the Muscovy duck has been a matter of rabbinic discussion for over 150 years. A study examining birds in northwestern Colombia for blood parasites found the Muscovy duck to be more frequently infected with Haemoproteus and malaria (Plasmodium) parasites than chickens, domestic pigeons, domestic turkeys and, in fact, almost all wild bird species also studied. It was noted that in other parts of the world, chickens were more susceptible to such infections than in the study area, but it may well be that Muscovy ducks are generally more often infected with such parasites (which might not cause pronounced disease, though, and are harmless to humans).
Biology and health sciences
Anseriformes
Animals
364774
https://en.wikipedia.org/wiki/Conformal%20field%20theory
Conformal field theory
A conformal field theory (CFT) is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations, and conformal field theories can sometimes be exactly solved or classified. Conformal field theory has important applications to condensed matter physics, statistical mechanics, quantum statistical mechanics, and string theory. Statistical and condensed matter systems are indeed often conformally invariant at their thermodynamic or quantum critical points. Scale invariance vs conformal invariance In quantum field theory, scale invariance is a common and natural symmetry, because any fixed point of the renormalization group is by definition scale invariant. Conformal symmetry is stronger than scale invariance, and one needs additional assumptions to argue that it should appear in nature. The basic idea behind its plausibility is that local scale invariant theories have their currents given by $j_\mu = \epsilon^\nu T_{\mu\nu}$, where $\epsilon^\nu$ is a Killing vector and $T_{\mu\nu}$ is a conserved operator (the stress-tensor) of dimension exactly $d$. For the associated symmetries to include scale but not conformal transformations, the trace $T^\mu_{\ \mu}$ has to be a non-zero total derivative, implying that there is a non-conserved operator of dimension exactly $d-1$. Under some assumptions it is possible to completely rule out this type of non-renormalization and hence prove that scale invariance implies conformal invariance in a quantum field theory, for example in unitary compact conformal field theories in two dimensions. While it is possible for a quantum field theory to be scale invariant but not conformally invariant, examples are rare. For this reason, the terms are often used interchangeably in the context of quantum field theory. Two dimensions vs higher dimensions The number of independent conformal transformations is infinite in two dimensions, and finite in higher dimensions. This makes conformal symmetry much more constraining in two dimensions. All conformal field theories share the ideas and techniques of the conformal bootstrap. But the resulting equations are more powerful in two dimensions, where they are sometimes exactly solvable (for example in the case of minimal models), in contrast to higher dimensions, where numerical approaches dominate. The development of conformal field theory has been earlier and deeper in the two-dimensional case, in particular after the 1983 article by Belavin, Polyakov and Zamolodchikov. The term conformal field theory has sometimes been used with the meaning of two-dimensional conformal field theory, as in the title of a 1997 textbook. Higher-dimensional conformal field theories have become more popular with the AdS/CFT correspondence in the late 1990s, and the development of numerical conformal bootstrap techniques in the 2000s. Global vs local conformal symmetry in two dimensions The global conformal group of the Riemann sphere is the group of Möbius transformations $PSL(2,\mathbb{C})$, which is finite-dimensional. On the other hand, infinitesimal conformal transformations form the infinite-dimensional Witt algebra: the conformal Killing equations in two dimensions reduce to just the Cauchy-Riemann equations, $\partial_{\bar z}\epsilon(z) = 0$, and the infinity of modes of arbitrary analytic coordinate transformations $z \to f(z)$ yield the infinity of Killing vector fields $\ell_n = -z^{n+1}\partial_z$. Strictly speaking, it is possible for a two-dimensional conformal field theory to be local (in the sense of possessing a stress-tensor) while still only exhibiting invariance under the global $PSL(2,\mathbb{C})$.
This turns out to be unique to non-unitary theories; an example is the biharmonic scalar. This property should be viewed as even more special than scale without conformal invariance, as it requires the trace $T^\mu_{\ \mu}$ to be a total second derivative, $T^\mu_{\ \mu} = \partial_\mu \partial_\nu L^{\mu\nu}$. Global conformal symmetry in two dimensions is a special case of conformal symmetry in higher dimensions, and is studied with the same techniques. This is done not only in theories that have global but not local conformal symmetry, but also in theories that do have local conformal symmetry, for the purpose of testing techniques or ideas from higher-dimensional CFT. In particular, numerical bootstrap techniques can be tested by applying them to minimal models, and comparing the results with the known analytic results that follow from local conformal symmetry. Conformal field theories with a Virasoro symmetry algebra In a conformally invariant two-dimensional quantum theory, the Witt algebra of infinitesimal conformal transformations has to be centrally extended. The quantum symmetry algebra is therefore the Virasoro algebra, which depends on a number $c$ called the central charge. This central extension can also be understood in terms of a conformal anomaly. It was shown by Alexander Zamolodchikov that there exists a function which decreases monotonically under the renormalization group flow of a two-dimensional quantum field theory, and is equal to the central charge for a two-dimensional conformal field theory. This is known as the Zamolodchikov C-theorem, and tells us that renormalization group flow in two dimensions is irreversible. In addition to being centrally extended, the symmetry algebra of a conformally invariant quantum theory has to be complexified, resulting in two copies of the Virasoro algebra. In Euclidean CFT, these copies are called holomorphic and antiholomorphic. In Lorentzian CFT, they are called left-moving and right-moving. Both copies have the same central charge. The space of states of a theory is a representation of the product of the two Virasoro algebras. This space is a Hilbert space if the theory is unitary. This space may contain a vacuum state, or in statistical mechanics, a thermal state. Unless the central charge vanishes, there cannot exist a state that leaves the entire infinite dimensional conformal symmetry unbroken. The best we can have is a state that is invariant under the generators $L_{-1}, L_0, L_1$ (and $\bar L_{-1}, \bar L_0, \bar L_1$) of the Virasoro algebra. This subalgebra contains the generators of the global conformal transformations. The rest of the conformal group is spontaneously broken. Conformal symmetry Definition and Jacobian For a given spacetime and metric, a conformal transformation is a transformation that preserves angles. We will focus on conformal transformations of the flat $d$-dimensional Euclidean space $\mathbb{R}^d$ or of the Minkowski space $\mathbb{R}^{1,d-1}$. If $x \to f(x)$ is a conformal transformation, the Jacobian $J^\mu_{\ \nu}(x) = \frac{\partial f^\mu}{\partial x^\nu}$ is of the form $J^\mu_{\ \nu}(x) = \Omega(x)\, R^\mu_{\ \nu}(x)$, where $\Omega(x)$ is the scale factor, and $R^\mu_{\ \nu}(x)$ is a rotation (i.e. an orthogonal matrix) or Lorentz transformation. Conformal group The conformal group is locally isomorphic to $SO(1,d+1)$ (Euclidean) or $SO(2,d)$ (Minkowski). This includes translations, rotations (Euclidean) or Lorentz transformations (Minkowski), and dilations i.e. scale transformations $x \to \lambda x$. This also includes special conformal transformations. For any translation $T_b(x) = x + b$, there is a special conformal transformation $S_b = I \circ T_b \circ I$, where $I$ is the inversion such that $I(x) = \frac{x}{x^2}$. In the sphere $\mathbb{R}^d \cup \{\infty\}$, the inversion exchanges $0$ with $\infty$. Translations leave $\infty$ fixed, while special conformal transformations leave $0$ fixed.
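The factorized form of the Jacobian stated above, a scale factor times an orthogonal matrix, can be checked numerically for the inversion I(x) = x/x², whose Jacobian is known in closed form. A small NumPy sketch:

```python
import numpy as np

def inversion_jacobian(x):
    """Jacobian of the inversion I(x) = x / x^2 at the point x:
    J = (identity - 2 * x x^T / x^2) / x^2, i.e. Omega(x) = 1/x^2 times
    a reflection (an improper rotation)."""
    x2 = np.dot(x, x)
    return (np.eye(len(x)) - 2.0 * np.outer(x, x) / x2) / x2

x = np.random.randn(4)                   # a random point in Euclidean R^4
J = inversion_jacobian(x)
omega = 1.0 / np.dot(x, x)               # the scale factor Omega(x)
R = J / omega                            # the candidate rotation/reflection
print(np.allclose(R @ R.T, np.eye(4)))   # True: R is orthogonal, as required
```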
Conformal algebra The commutation relations of the corresponding Lie algebra are $[D, P_\mu] = P_\mu$, $[D, K_\mu] = -K_\mu$, $[K_\mu, P_\nu] = 2\eta_{\mu\nu} D - 2M_{\mu\nu}$, $[M_{\mu\nu}, P_\rho] = \eta_{\nu\rho} P_\mu - \eta_{\mu\rho} P_\nu$, $[M_{\mu\nu}, K_\rho] = \eta_{\nu\rho} K_\mu - \eta_{\mu\rho} K_\nu$, where $P_\mu$ generate translations, $D$ generates dilations, $K_\mu$ generate special conformal transformations, and $M_{\mu\nu}$ generate rotations or Lorentz transformations. The tensor $\eta_{\mu\nu}$ is the flat metric. Global issues in Minkowski space In Minkowski space, the conformal group does not preserve causality. Observables such as correlation functions are invariant under the conformal algebra, but not under the conformal group. As shown by Lüscher and Mack, it is possible to restore the invariance under the conformal group by extending the flat Minkowski space into a Lorentzian cylinder. The original Minkowski space is conformally equivalent to a region of the cylinder called a Poincaré patch. In the cylinder, global conformal transformations do not violate causality: instead, they can move points outside the Poincaré patch. Correlation functions and conformal bootstrap In the conformal bootstrap approach, a conformal field theory is a set of correlation functions that obey a number of axioms. The $n$-point correlation function $\langle O_1(x_1) \cdots O_n(x_n)\rangle$ is a function of the positions $x_i$ and other parameters of the fields $O_1, \dots, O_n$. In the bootstrap approach, the fields themselves make sense only in the context of correlation functions, and may be viewed as efficient notations for writing axioms for correlation functions. Correlation functions depend linearly on fields, in particular $\langle \partial_{x_1} O_1(x_1) \cdots \rangle = \partial_{x_1} \langle O_1(x_1) \cdots \rangle$. We focus on CFT on the Euclidean space $\mathbb{R}^d$. In this case, correlation functions are Schwinger functions. They are defined for $x_i \neq x_j$, and do not depend on the order of the fields. In Minkowski space, correlation functions are Wightman functions. They can depend on the order of the fields, as fields commute only if they are spacelike separated. A Euclidean CFT can be related to a Minkowskian CFT by Wick rotation, for example thanks to the Osterwalder-Schrader theorem. In such cases, Minkowskian correlation functions are obtained from Euclidean correlation functions by an analytic continuation that depends on the order of the fields. Behaviour under conformal transformations Any conformal transformation $x \to f(x)$ acts linearly on fields $O(x) \to \pi_f(O)(x)$, such that $f \mapsto \pi_f$ is a representation of the conformal group, and correlation functions are invariant: $\langle \pi_f(O_1)(x_1) \cdots \pi_f(O_n)(x_n)\rangle = \langle O_1(x_1) \cdots O_n(x_n)\rangle$. Primary fields are fields that transform into themselves via $\pi_f$. The behaviour of a primary field is characterized by a number $\Delta$ called its conformal dimension, and a representation $\rho$ of the rotation or Lorentz group. For a primary field, we then have $\pi_f(O)(x) = \Omega(x')^{-\Delta} \rho(R(x'))\, O(x')$, where $x = f(x')$. Here $\Omega(x')$ and $R(x')$ are the scale factor and rotation that are associated to the conformal transformation $f$. The representation $\rho$ is trivial in the case of scalar fields, which transform as $\pi_f(O)(x) = \Omega(x')^{-\Delta} O(x')$. For vector fields, the representation is the fundamental representation, and we would have $\pi_f(O_\mu)(x) = \Omega(x')^{-\Delta} R_\mu^{\ \nu}(x') O_\nu(x')$. A primary field that is characterized by the conformal dimension $\Delta$ and representation $\rho$ behaves as a highest-weight vector in an induced representation of the conformal group from the subgroup generated by dilations and rotations. In particular, the conformal dimension $\Delta$ characterizes a representation of the subgroup of dilations. In two dimensions, the fact that this induced representation is a Verma module appears throughout the literature. For higher-dimensional CFTs (in which the maximally compact subalgebra is larger than the Cartan subalgebra), it has recently been appreciated that this representation is a parabolic or generalized Verma module. Derivatives (of any order) of primary fields are called descendant fields. Their behaviour under conformal transformations is more complicated.
For example, if $O$ is a primary field, then $\pi_f(\partial_\mu O)(x)$ is a linear combination of $\partial_\mu O$ and $O$. Correlation functions of descendant fields can be deduced from correlation functions of primary fields. However, even in the common case where all fields are either primaries or descendants thereof, descendant fields play an important role, because conformal blocks and operator product expansions involve sums over all descendant fields. The collection of all primary fields $O_p$, characterized by their scaling dimensions $\Delta_p$ and the representations $\rho_p$, is called the spectrum of the theory. Dependence on field positions The invariance of correlation functions under conformal transformations severely constrains their dependence on field positions. In the case of two- and three-point functions, that dependence is determined up to finitely many constant coefficients. Higher-point functions have more freedom, and are only determined up to functions of conformally invariant combinations of the positions. The two-point function of two primary fields vanishes if their conformal dimensions differ. If the dilation operator is diagonalizable (i.e. if the theory is not logarithmic), there exists a basis of primary fields such that two-point functions are diagonal, i.e. $\langle O_i(x_1) O_j(x_2)\rangle \propto \delta_{ij}$. In this case, the two-point function of a scalar primary field is $\langle O(x_1) O(x_2)\rangle = \frac{1}{|x_1 - x_2|^{2\Delta}}$, where we choose the normalization of the field such that the constant coefficient, which is not determined by conformal symmetry, is one. Similarly, two-point functions of non-scalar primary fields are determined up to a coefficient, which can be set to one. In the case of a symmetric traceless tensor of rank $\ell$, the two-point function is $\langle O_{\mu_1\cdots\mu_\ell}(x_1) O_{\nu_1\cdots\nu_\ell}(x_2)\rangle = \frac{\prod_{i=1}^\ell I_{\mu_i\nu_i}(x_1 - x_2) - \text{traces}}{|x_1 - x_2|^{2\Delta}}$, where the tensor $I_{\mu\nu}(x)$ is defined as $I_{\mu\nu}(x) = \eta_{\mu\nu} - \frac{2 x_\mu x_\nu}{x^2}$. The three-point function of three scalar primary fields is $\langle O_1(x_1)\, O_2(x_2)\, O_3(x_3)\rangle = \frac{C_{123}}{|x_{12}|^{\Delta_1+\Delta_2-\Delta_3}\, |x_{13}|^{\Delta_1+\Delta_3-\Delta_2}\, |x_{23}|^{\Delta_2+\Delta_3-\Delta_1}}$, where $x_{ij} = x_i - x_j$, and $C_{123}$ is a three-point structure constant. With primary fields that are not necessarily scalars, conformal symmetry allows a finite number of tensor structures, and there is a structure constant for each tensor structure. In the case of two scalar fields and a symmetric traceless tensor of rank $\ell$, there is only one tensor structure, and the three-point function is $\langle O_1(x_1)\, O_2(x_2)\, O_{\mu_1\cdots\mu_\ell}(x_3)\rangle = \frac{C_{123}\,(V_{\mu_1}\cdots V_{\mu_\ell} - \text{traces})}{|x_{12}|^{\Delta_1+\Delta_2-\Delta_3+\ell}\, |x_{13}|^{\Delta_1+\Delta_3-\Delta_2-\ell}\, |x_{23}|^{\Delta_2+\Delta_3-\Delta_1-\ell}}$, where we introduce the vector $V_\mu = \frac{(x_{13})_\mu}{x_{13}^2} - \frac{(x_{23})_\mu}{x_{23}^2}$. Four-point functions of scalar primary fields are determined up to arbitrary functions of the two cross-ratios $u = \frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2}$ and $v = \frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2}$. The four-point function is then $\langle \textstyle\prod_{i=1}^4 O_i(x_i)\rangle = \left(\frac{x_{24}^2}{x_{14}^2}\right)^{\frac{\Delta_1-\Delta_2}{2}} \left(\frac{x_{14}^2}{x_{13}^2}\right)^{\frac{\Delta_3-\Delta_4}{2}} \frac{g(u,v)}{(x_{12}^2)^{\frac{\Delta_1+\Delta_2}{2}} (x_{34}^2)^{\frac{\Delta_3+\Delta_4}{2}}}$, where $g(u,v)$ is not determined by conformal symmetry. Operator product expansion The operator product expansion (OPE) is more powerful in conformal field theory than in more general quantum field theories. This is because in conformal field theory, the operator product expansion's radius of convergence is finite (i.e. it is not zero). Provided the positions of two fields are close enough, the operator product expansion rewrites the product of these two fields as a linear combination of fields at a given point, which can be chosen as $x_2$ for technical convenience. The operator product expansion of two fields takes the form $O_1(x_1)\, O_2(x_2) = \sum_k c_{12k}(x_1 - x_2)\, O_k(x_2)$, where $c_{12k}(x_1 - x_2)$ is some coefficient function, and the sum in principle runs over all fields in the theory. (Equivalently, by the state-field correspondence, the sum runs over all states in the space of states.) Some fields may actually be absent, in particular due to constraints from symmetry: conformal symmetry, or extra symmetries. If all fields are primary or descendant, the sum over fields can be reduced to a sum over primaries, by rewriting the contributions of any descendant in terms of the contribution of the corresponding primary: $O_1(x_1)\, O_2(x_2) = \sum_p C_{12p}\, P_p(x_1 - x_2, \partial_{x_2})\, O_p(x_2)$, where the fields $O_p$ are all primary, and $C_{12p}$ is the three-point structure constant (which for this reason is also called OPE coefficient).
The differential operator $P_p(x_1 - x_2, \partial_{x_2})$ is an infinite series in derivatives, which is determined by conformal symmetry and therefore in principle known. Viewing the OPE as a relation between correlation functions shows that the OPE must be associative. Furthermore, if the space is Euclidean, the OPE must be commutative, because correlation functions do not depend on the order of the fields, i.e. $O_1(x_1)\, O_2(x_2) = O_2(x_2)\, O_1(x_1)$. The existence of the operator product expansion is a fundamental axiom of the conformal bootstrap. However, it is generally not necessary to compute operator product expansions and in particular the differential operators $P_p(x_1 - x_2, \partial_{x_2})$. Rather, it is the decomposition of correlation functions into structure constants and conformal blocks that is needed. The OPE can in principle be used for computing conformal blocks, but in practice there are more efficient methods. Conformal blocks and crossing symmetry Using the OPE $O_1(x_1)\, O_2(x_2)$, a four-point function can be written as a combination of three-point structure constants and s-channel conformal blocks, $\langle O_1(x_1) O_2(x_2) O_3(x_3) O_4(x_4)\rangle = \sum_p C_{12p} C_{34p}\, G^{(s)}_p(x_i)$. The conformal block $G^{(s)}_p(x_i)$ is the sum of the contributions of the primary field $O_p$ and its descendants. It depends on the fields and their positions. If the three-point functions $\langle O_1 O_2 O_p\rangle$ or $\langle O_3 O_4 O_p\rangle$ involve several independent tensor structures, the structure constants and conformal blocks depend on these tensor structures, and the primary field $O_p$ contributes several independent blocks. Conformal blocks are determined by conformal symmetry, and known in principle. To compute them, there are recursion relations and integrable techniques. Using the OPE $O_1(x_1)\, O_4(x_4)$ or $O_1(x_1)\, O_3(x_3)$, the same four-point function is written in terms of t-channel conformal blocks or u-channel conformal blocks, $\langle O_1 O_2 O_3 O_4\rangle = \sum_p C_{14p} C_{23p}\, G^{(t)}_p(x_i) = \sum_p C_{13p} C_{24p}\, G^{(u)}_p(x_i)$. The equality of the s-, t- and u-channel decompositions is called crossing symmetry: a constraint on the spectrum of primary fields, and on the three-point structure constants. Conformal blocks obey the same conformal symmetry constraints as four-point functions. In particular, s-channel conformal blocks can be written in terms of functions $g_p(u,v)$ of the cross-ratios. While the OPE $O_1(x_1)\, O_2(x_2)$ only converges when $x_1$ and $x_2$ are sufficiently close to one another, conformal blocks can be analytically continued to all (non pairwise coinciding) values of the positions. In Euclidean space, conformal blocks are single-valued real-analytic functions of the positions except when the four points $x_i$ lie on a circle but in a singly-transposed cyclic order [1324], and only in these exceptional cases does the decomposition into conformal blocks not converge. A conformal field theory in flat Euclidean space $\mathbb{R}^d$ is thus defined by its spectrum $\{(\Delta_p, \rho_p)\}$ and OPE coefficients (or three-point structure constants) $\{C_{pp'p''}\}$, satisfying the constraint that all four-point functions are crossing-symmetric. From the spectrum and OPE coefficients (collectively referred to as the CFT data), correlation functions of arbitrary order can be computed. Features Unitarity A conformal field theory is unitary if its space of states has a positive definite scalar product such that the dilation operator is self-adjoint. Then the scalar product endows the space of states with the structure of a Hilbert space. In Euclidean conformal field theories, unitarity is equivalent to reflection positivity of correlation functions: one of the Osterwalder-Schrader axioms. Unitarity implies that the conformal dimensions of primary fields are real and bounded from below. The lower bound depends on the spacetime dimension $d$, and on the representation of the rotation or Lorentz group in which the primary field transforms.
For scalar fields, the unitarity bound is $\Delta \geq \frac{d-2}{2}$. In a unitary theory, three-point structure constants must be real, which in turn implies that four-point functions obey certain inequalities. Powerful numerical bootstrap methods are based on exploiting these inequalities. Compactness A conformal field theory is compact if it obeys three conditions: All conformal dimensions are real. For any $\Delta^* \in \mathbb{R}$ there are finitely many states whose dimensions are less than $\Delta^*$. There is a unique state with the dimension $\Delta = 0$, and it is the vacuum state, i.e. the corresponding field is the identity field. (The identity field is the field whose insertion into correlation functions does not modify them, i.e. $\langle I(x)\, O_1(x_1) \cdots \rangle = \langle O_1(x_1) \cdots \rangle$.) The name comes from the fact that if a 2D conformal field theory is also a sigma model, it will satisfy these conditions if and only if its target space is compact. It is believed that all unitary conformal field theories are compact in dimension $d > 2$. Without unitarity, on the other hand, it is possible to find CFTs in dimension four and in dimension $4 - \epsilon$ that have a continuous spectrum. And in dimension two, Liouville theory is unitary but not compact. Extra symmetries A conformal field theory may have extra symmetries in addition to conformal symmetry. For example, the Ising model has a $\mathbb{Z}_2$ symmetry, and superconformal field theories have supersymmetry. Examples Mean field theory A generalized free field is a field whose correlation functions are deduced from its two-point function by Wick's theorem. For instance, if $\phi$ is a scalar primary field of dimension $\Delta$, its four-point function reads $\langle \phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4)\rangle = \frac{1}{|x_{12}|^{2\Delta}|x_{34}|^{2\Delta}} + \frac{1}{|x_{13}|^{2\Delta}|x_{24}|^{2\Delta}} + \frac{1}{|x_{14}|^{2\Delta}|x_{23}|^{2\Delta}}$. For instance, if $\phi_1, \phi_2$ are two scalar primary fields such that $\langle \phi_1 \phi_2 \rangle = 0$ (which is the case in particular if $\Delta_1 \neq \Delta_2$), we have the four-point function $\langle \phi_1(x_1)\phi_1(x_2)\phi_2(x_3)\phi_2(x_4)\rangle = \frac{1}{|x_{12}|^{2\Delta_1}|x_{34}|^{2\Delta_2}}$. Mean field theory is a generic name for conformal field theories that are built from generalized free fields. For example, a mean field theory can be built from one scalar primary field $\phi$. Then this theory contains $\phi$, its descendant fields, and the fields that appear in the OPE $\phi \times \phi$. The primary fields that appear in $\phi \times \phi$ can be determined by decomposing the four-point function $\langle \phi\phi\phi\phi\rangle$ in conformal blocks: their conformal dimensions belong to $2\Delta + 2\mathbb{N}$: in mean field theory, the conformal dimension is conserved modulo integers. Structure constants can be computed exactly in terms of the Gamma function. Similarly, it is possible to construct mean field theories starting from a field with non-trivial Lorentz spin. For example, the 4d Maxwell theory (in the absence of charged matter fields) is a mean field theory built out of an antisymmetric tensor field $F_{\mu\nu}$ with scaling dimension $\Delta = 2$. Mean field theories have a Lagrangian description in terms of a quadratic action involving the Laplacian raised to an arbitrary real power (which determines the scaling dimension of the field). For a generic scaling dimension, the power of the Laplacian is non-integer. The corresponding mean field theory is then non-local (e.g. it does not have a conserved stress tensor operator). Critical Ising model The critical Ising model is the critical point of the Ising model on a hypercubic lattice in two or three dimensions. It has a global $\mathbb{Z}_2$ symmetry, corresponding to flipping all spins. The two-dimensional critical Ising model includes the Virasoro minimal model $M(4,3)$, which can be solved exactly. There is no Ising CFT in $d \geq 4$ dimensions. Critical Potts model The critical Potts model with $q$ colors is a unitary CFT that is invariant under the permutation group $S_q$. It is a generalization of the critical Ising model, which corresponds to $q = 2$.
The critical Potts model exists in a range of dimensions depending on q. The critical Potts model may be constructed as the continuum limit of the Potts model on a d-dimensional hypercubic lattice. In the Fortuin–Kasteleyn reformulation in terms of clusters, the Potts model can be defined for continuous q, but it is not unitary if q is not an integer. Critical O(N) model The critical O(N) model is a CFT invariant under the orthogonal group O(N). For any integer N, it exists as an interacting, unitary and compact CFT in d = 3 dimensions (and for N = 1, 2 also in two dimensions). It is a generalization of the critical Ising model, which corresponds to the O(N) CFT at N = 1. The O(N) CFT can be constructed as the continuum limit of a lattice model with spins that are N-vectors, called the n-vector model. Alternatively, the critical model can be constructed as the ε → 1 limit of the Wilson–Fisher fixed point in dimension d = 4 − ε. At ε = 0, the Wilson–Fisher fixed point becomes the tensor product of N free scalars with dimension Δ = 1. For ε < 0 the model in question is non-unitary. When N is large, the O(N) model can be solved perturbatively in a 1/N expansion by means of the Hubbard–Stratonovich transformation. In particular, the N → ∞ limit of the critical O(N) model is well-understood. The conformal data of the critical O(N) model are functions of N and of the dimension, on which many results are known. Conformal gauge theories Some conformal field theories in three and four dimensions admit a Lagrangian description in the form of a gauge theory, either abelian or non-abelian. Examples of such CFTs are conformal QED with sufficiently many charged fields in d = 3, or the Banks–Zaks fixed point in d = 4. Applications Continuous phase transitions Continuous phase transitions (critical points) of classical statistical physics systems with D spatial dimensions are often described by Euclidean conformal field theories. A necessary condition for this to happen is that the critical point should be invariant under spatial rotations and translations. However this condition is not sufficient: some exceptional critical points are described by scale invariant but not conformally invariant theories. If the classical statistical physics system is reflection positive, the corresponding Euclidean CFT describing its critical point will be unitary. Continuous quantum phase transitions in condensed matter systems with D spatial dimensions may be described by Lorentzian D+1 dimensional conformal field theories (related by Wick rotation to Euclidean CFTs in D+1 dimensions). Apart from translation and rotation invariance, an additional necessary condition for this to happen is that the dynamical critical exponent z should be equal to 1. CFTs describing such quantum phase transitions (in absence of quenched disorder) are always unitary. String theory The world-sheet description of string theory involves a two-dimensional CFT coupled to dynamical two-dimensional quantum gravity (or supergravity, in the case of superstring theory). Consistency of string theory models imposes constraints on the central charge of this CFT, which should be c = 26 in bosonic string theory and c = 15 in superstring theory. Coordinates of the spacetime in which string theory lives correspond to bosonic fields of this CFT. AdS/CFT correspondence Conformal field theories play a prominent role in the AdS/CFT correspondence, in which a gravitational theory in anti-de Sitter space (AdS) is equivalent to a conformal field theory on the AdS boundary.
Notable examples are d = 4, N = 4 supersymmetric Yang–Mills theory, which is dual to Type IIB string theory on AdS5 × S5, and d = 3, N = 6 super-Chern–Simons theory, which is dual to M-theory on AdS4 × S7. (The prefix "super" denotes supersymmetry, N denotes the degree of extended supersymmetry possessed by the theory, and d the number of space-time dimensions on the boundary.)
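To make the crossing-symmetry constraint discussed above concrete, here is a standard way of writing it for four identical scalars of dimension Δφ (a sketch in a common normalization, with the identity operator's contribution separated out; conventions vary between references):

```latex
% Crossing sum rule for four identical scalars, in cross-ratio variables
%   u = x_{12}^2 x_{34}^2 / (x_{13}^2 x_{24}^2),
%   v = x_{14}^2 x_{23}^2 / (x_{13}^2 x_{24}^2):
\sum_{\mathcal{O} \neq \mathbb{1}} \lambda_{\mathcal{O}}^2
  \left[ v^{\Delta_\phi}\, g_{\mathcal{O}}(u,v)
       - u^{\Delta_\phi}\, g_{\mathcal{O}}(v,u) \right]
  = u^{\Delta_\phi} - v^{\Delta_\phi}.
% Each primary O exchanged in the OPE contributes a conformal block g_O
% weighted by the square of a real (in a unitary theory) OPE coefficient.
% Numerical bootstrap methods derive bounds on the CFT data from this equation.
```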
Physical sciences
Quantum mechanics
Physics
8978774
https://en.wikipedia.org/wiki/Quantum%20nonlocality
Quantum nonlocality
In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not admit an interpretation in terms of local realism. Quantum nonlocality has been experimentally verified under a variety of physical assumptions. Quantum nonlocality does not allow for faster-than-light communication, and hence is compatible with special relativity and its universal speed limit. Thus, quantum theory is local in the strict sense defined by special relativity and, as such, the term "quantum nonlocality" is sometimes considered a misnomer. Still, it prompts many of the foundational discussions concerning quantum theory. History Einstein, Podolsky and Rosen In the 1935 EPR paper, Albert Einstein, Boris Podolsky and Nathan Rosen described "two spatially separated particles which have both perfectly correlated positions and momenta" as a direct consequence of quantum theory. They intended to use the classical principle of locality to challenge the idea that the quantum wavefunction was a complete description of reality, but instead they sparked a debate on the nature of reality. Afterwards, Einstein presented a variant of these ideas in a letter to Erwin Schrödinger, which is the version that is presented here. The state and notation used here are more modern, and akin to David Bohm's take on EPR. The quantum state of the two particles prior to measurement can be written as the spin singlet ψ_AB = (1/√2)(|↑⟩_A ⊗ |↓⟩_B − |↓⟩_A ⊗ |↑⟩_B), where |↑⟩ and |↓⟩ denote the states of spin up and spin down along the z-direction. Here, subscripts "A" and "B" distinguish the two particles, though it is more convenient and usual to refer to these particles as being in the possession of two experimentalists called Alice and Bob. The rules of quantum theory give predictions for the outcomes of measurements performed by the experimentalists. Alice, for example, will measure her particle to be spin-up in an average of fifty percent of measurements. However, according to the Copenhagen interpretation, Alice's measurement causes the state of the two particles to collapse, so that if Alice performs a measurement of spin in the z-direction, that is with respect to the basis {|↑⟩, |↓⟩}, then Bob's system will be left in one of the states {|↓⟩, |↑⟩}. Likewise, if Alice performs a measurement of spin in the x-direction, that is, with respect to the basis {|+⟩, |−⟩} with |±⟩ = (1/√2)(|↑⟩ ± |↓⟩), then Bob's system will be left in one of the states {|−⟩, |+⟩}. Schrödinger referred to this phenomenon as "steering". This steering occurs in such a way that no signal can be sent by performing such a state update; quantum nonlocality cannot be used to send messages instantaneously and is therefore not in direct conflict with causality concerns in special relativity. In the Copenhagen view of this experiment, Alice's measurement (and particularly her measurement choice) has a direct effect on Bob's state. However, under the assumption of locality, actions on Alice's system do not affect the "true", or "ontic" state of Bob's system. We see that the ontic state of Bob's system must be compatible with one of the quantum states |↑⟩ or |↓⟩, since Alice can make a measurement that concludes with one of those states being the quantum description of his system. At the same time, it must also be compatible with one of the quantum states |+⟩ or |−⟩ for the same reason. Therefore, the ontic state of Bob's system must be compatible with at least two quantum states; the quantum state is therefore not a complete descriptor of his system.
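The steering argument can be made explicit with a short calculation. The following identity (a standard one, using the basis notation introduced above) shows that the same singlet state decomposes into z-basis or x-basis product terms, so Alice's choice of measurement basis determines which family of states Bob's system is steered to:

```latex
% The singlet takes the same anticorrelated form in both bases,
% with |+-> = (|up> +- |down>)/sqrt(2):
\psi_{AB}
  = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_A |{\downarrow}\rangle_B
      - |{\downarrow}\rangle_A |{\uparrow}\rangle_B\bigr)
  = \tfrac{1}{\sqrt{2}}\bigl(|{-}\rangle_A |{+}\rangle_B
      - |{+}\rangle_A |{-}\rangle_B\bigr).
% A z-measurement by Alice leaves Bob in |up> or |down>;
% an x-measurement leaves him in |+> or |->, always anticorrelated with Alice.
```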
Einstein, Podolsky and Rosen saw this as evidence of the incompleteness of the Copenhagen interpretation of quantum theory, since the wavefunction is explicitly not a complete description of a quantum system under this assumption of locality. Their paper concluded that the wavefunction does not provide a complete description of physical reality, while leaving open the question of whether such a description exists. Although various authors (most notably Niels Bohr) criticised the ambiguous terminology of the EPR paper, the thought experiment nevertheless generated a great deal of interest. Their notion of a "complete description" was later formalised by the suggestion of hidden variables that determine the statistics of measurement results, but to which an observer does not have access. Bohmian mechanics provides such a completion of quantum mechanics, with the introduction of hidden variables; however the theory is explicitly nonlocal. The interpretation therefore does not give an answer to Einstein's question, which was whether or not a complete description of quantum mechanics could be given in terms of local hidden variables in keeping with the "Principle of Local Action". Bell inequality In 1964 John Bell answered Einstein's question by showing that such local hidden variables can never reproduce the full range of statistical outcomes predicted by quantum theory. Bell showed that a local hidden variable hypothesis leads to restrictions on the strength of correlations of measurement results. If the Bell inequalities are violated experimentally as predicted by quantum mechanics, then reality cannot be described by local hidden variables and the mystery of quantum nonlocal causation remains. However, Bell noted that this result does not apply to non-local hidden variable models, such as Bohm's, which are explicitly of a different character. Clauser, Horne, Shimony and Holt (CHSH) reformulated these inequalities in a manner that was more conducive to experimental testing (see CHSH inequality). In the scenario proposed by Bell (a Bell scenario), two experimentalists, Alice and Bob, conduct experiments in separate labs. At each run, Alice (Bob) conducts an experiment x (y) in her (his) lab, obtaining outcome a (b). If Alice and Bob repeat their experiments several times, then they can estimate the probabilities P(a,b|x,y), namely, the probability that Alice and Bob respectively observe the results a, b when they respectively conduct the experiments x, y. In the following, each such set of probabilities will be denoted by just P(a,b|x,y). In the jargon of quantum nonlocality, P(a,b|x,y) is termed a box. Bell formalized the idea of a hidden variable by introducing the parameter λ to locally characterize measurement results on each system: "It is a matter of indifference ... whether λ denotes a single variable or a set ... and whether the variables are discrete or continuous". However, it is equivalent (and more intuitive) to think of λ as a local "strategy" or "message" that occurs with some probability when Alice and Bob reboot their experimental setup. Bell's assumption of local causality then stipulates that each local strategy defines the distributions of independent outcomes if Alice conducts experiment x and Bob conducts experiment y: P(a,b|x,y,λ_A,λ_B) = P_A(a|x,λ_A) P_B(b|y,λ_B). Here P_A(a|x,λ_A) (P_B(b|y,λ_B)) denotes the probability that Alice (Bob) obtains the result a (b) when she (he) conducts experiment x (y) and the local variable describing her (his) experiment has value λ_A (λ_B). Suppose that λ_A, λ_B can take values from some set Λ.
If each pair of values (λ_A, λ_B) has an associated probability q(λ_A, λ_B) of being selected (shared randomness is allowed, i.e., λ_A and λ_B can be correlated), then one can average over this distribution to obtain a formula for the joint probability of each measurement result: P(a,b|x,y) = Σ_{λ_A,λ_B} q(λ_A, λ_B) P_A(a|x,λ_A) P_B(b|y,λ_B). A box admitting such a decomposition is called a Bell local or a classical box. Fixing the number of possible values which a, b, x, y can each take, one can represent each box as a finite vector with entries P(a,b|x,y). In that representation, the set of all classical boxes forms a convex polytope. In the Bell scenario studied by CHSH, where a, b take values in {−1, +1} and x, y in {0, 1}, any Bell local box must satisfy the CHSH inequality |E(0,0) + E(0,1) + E(1,0) − E(1,1)| ≤ 2, where E(x,y) = Σ_{a,b} a·b·P(a,b|x,y). The above considerations apply to model a quantum experiment. Consider two parties conducting local polarization measurements on a bipartite photonic state. The measurement result for the polarization of a photon can take one of two values (informally, whether the photon is polarized in that direction, or in the orthogonal direction). If each party is allowed to choose between just two different polarization directions, the experiment fits within the CHSH scenario. As noted by CHSH, there exist a quantum state and polarization directions which generate a box with a CHSH value equal to 2√2. This demonstrates an explicit way in which a theory with ontological states that are local, with local measurements and only local actions cannot match the probabilistic predictions of quantum theory, disproving Einstein's hypothesis. Experimentalists such as Alain Aspect have verified the quantum violation of the CHSH inequality as well as other formulations of Bell's inequality, to invalidate the local hidden variables hypothesis and confirm that reality is indeed nonlocal in the EPR sense. Possibilistic nonlocality Bell's demonstration is probabilistic in the sense that it shows that the precise probabilities predicted by quantum mechanics for some entangled scenarios cannot be met by a local hidden variable theory. (For short, here and henceforth "local theory" means "local hidden variables theory".) However, quantum mechanics permits an even stronger violation of local theories: a possibilistic one, in which local theories cannot even agree with quantum mechanics on which events are possible or impossible in an entangled scenario. The first proof of this kind was due to Daniel Greenberger, Michael Horne, and Anton Zeilinger in 1989. The state involved is often called the GHZ state. In 1993, Lucien Hardy demonstrated a logical proof of quantum nonlocality that, like the GHZ proof, is a possibilistic proof. It starts with the observation that the state defined below can be written in a few suggestive ways: |ψ⟩ = (1/√3)(|00⟩ + |01⟩ + |10⟩) = (1/√3)(√2 |0⟩|+⟩ + |1⟩|0⟩) = (1/√3)(√2 |+⟩|0⟩ + |0⟩|1⟩), where, as above, |±⟩ = (1/√2)(|0⟩ ± |1⟩). The experiment consists of this entangled state being shared between two experimenters, each of whom has the ability to measure either with respect to the basis {|0⟩, |1⟩} or {|+⟩, |−⟩}. We see that if they each measure with respect to {|0⟩, |1⟩}, then they never see the outcome (1,1). If one measures with respect to {|0⟩, |1⟩} and the other with respect to {|+⟩, |−⟩}, they never see the outcomes (0,−) or (−,0), where the − outcome is on the side measuring in the {|+⟩, |−⟩} basis. However, sometimes they see the outcome (−,−) when measuring with respect to {|+⟩, |−⟩} on both sides, since ⟨−|⟨−|ψ⟩ = −1/(2√3) ≠ 0. This leads to the paradox: having the outcome (−,−), we conclude that if one of the experimenters had measured with respect to the basis {|0⟩, |1⟩} instead, the outcome must have been (−,1) or (1,−), since (−,0) and (0,−) are impossible. But then, if they had both measured with respect to the basis {|0⟩, |1⟩}, by locality the result must have been (1,1), which is also impossible. Nonlocal hidden variable models with a finite propagation speed The work of Bancal et al.
generalizes Bell's result by proving that correlations achievable in quantum theory are also incompatible with a large class of superluminal hidden variable models. In this framework, faster-than-light signaling is precluded. However, the choice of settings of one party can influence hidden variables at another party's distant location, if there is enough time for a superluminal influence (of finite, but otherwise unknown speed) to propagate from one point to the other. In this scenario, any bipartite experiment revealing Bell nonlocality can just provide lower bounds on the hidden influence's propagation speed. Quantum experiments with three or more parties can, nonetheless, disprove all such non-local hidden variable models. Analogs of Bell's theorem in more complicated causal structures The random variables measured in a general experiment can depend on each other in complicated ways. In the field of causal inference, such dependencies are represented via Bayesian networks: directed acyclic graphs where each node represents a variable and an edge from one variable to another signifies that the former influences the latter and not otherwise. In a standard bipartite Bell experiment, Alice's (Bob's) setting x (y), together with her (his) local variable λ_A (λ_B), influence her (his) local outcome a (b). Bell's theorem can thus be interpreted as a separation between the quantum and classical predictions in a type of causal structure with just one hidden node λ. Similar separations have been established in other types of causal structures. The characterization of the boundaries for classical correlations in such extended Bell scenarios is challenging, but there exist complete practical computational methods to achieve it. Entanglement and nonlocality Quantum nonlocality is sometimes understood as being equivalent to entanglement. However, this is not the case. Quantum entanglement can be defined only within the formalism of quantum mechanics, i.e., it is a model-dependent property. In contrast, nonlocality refers to the impossibility of a description of observed statistics in terms of a local hidden variable model, so it is independent of the physical model used to describe the experiment. It is true that for any pure entangled state there exists a choice of measurements that produce Bell nonlocal correlations, but the situation is more complex for mixed states. While any Bell nonlocal state must be entangled, there exist (mixed) entangled states which do not produce Bell nonlocal correlations (although, operating on several copies of some of such states, or carrying out local post-selections, it is possible to witness nonlocal effects). Moreover, while there are catalysts for entanglement, there are none for nonlocality. Finally, reasonably simple examples of Bell inequalities have been found for which the quantum state giving the largest violation is never a maximally entangled state, showing that entanglement is, in some sense, not even proportional to nonlocality. Quantum correlations As shown, the statistics achievable by two or more parties conducting experiments in a classical system are constrained in a non-trivial way. Analogously, the statistics achievable by separate observers in a quantum theory also happen to be restricted. The first derivation of a non-trivial statistical limit on the set of quantum correlations, due to B. Tsirelson, is known as Tsirelson's bound.
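The quantum CHSH violation and its Tsirelson limit are easy to verify numerically. Below is a minimal sketch (not part of the original article; the standard optimal measurement angles are assumed) that computes the four singlet-state correlators and checks that |S| = 2√2 ≈ 2.828:

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlator(theta_a, theta_b):
    """E(x,y) = <psi| A(x) (x) B(y) |psi>; equals -cos(theta_a - theta_b)."""
    op = np.kron(spin(theta_a), spin(theta_b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard optimal settings: Alice at 0 and pi/2, Bob at pi/4 and -pi/4
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = (correlator(a0, b0) + correlator(a0, b1)
     + correlator(a1, b0) - correlator(a1, b1))
print(abs(S), 2 * np.sqrt(2))  # both ~2.8284: beats the local bound of 2
```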
Consider the CHSH Bell scenario detailed before, but this time assume that, in their experiments, Alice and Bob are preparing and measuring quantum systems. In that case, the CHSH parameter can be shown to be bounded by 2√2. The sets of quantum correlations and Tsirelson's problem Mathematically, a box P(a,b|x,y) admits a quantum realization if and only if there exists a pair of Hilbert spaces H_A, H_B, a normalized vector |ψ⟩ ∈ H_A ⊗ H_B and projection operators E_x^a on H_A and F_y^b on H_B such that: For all x, y, the sets {E_x^a}_a, {F_y^b}_b represent complete measurements. Namely, Σ_a E_x^a = I_A and Σ_b F_y^b = I_B. P(a,b|x,y) = ⟨ψ| E_x^a ⊗ F_y^b |ψ⟩, for all a, b, x, y. In the following, the set of such boxes will be called Q. Contrary to the classical set of correlations, when viewed in probability space, Q is not a polytope. On the contrary, it contains both straight and curved boundaries. In addition, Q is not closed: this means that there exist boxes which can be arbitrarily well approximated by quantum systems but are themselves not quantum. In the above definition, the space-like separation of the two parties conducting the Bell experiment was modeled by imposing that their associated operator algebras act on different factors of the overall Hilbert space describing the experiment. Alternatively, one could model space-like separation by imposing that these two algebras commute. This leads to a different definition: P(a,b|x,y) admits a field quantum realization if and only if there exists a Hilbert space H, a normalized vector |ψ⟩ ∈ H and projection operators E_x^a, F_y^b acting on H such that: For all x, y, the sets {E_x^a}_a, {F_y^b}_b represent complete measurements. Namely, Σ_a E_x^a = I and Σ_b F_y^b = I. P(a,b|x,y) = ⟨ψ| E_x^a F_y^b |ψ⟩, for all a, b, x, y. [E_x^a, F_y^b] = 0, for all a, b, x, y. Call the set of all such correlations Q_c. How does this new set relate to the more conventional Q defined above? It can be proven that Q_c is closed. Moreover, Q̄ ⊆ Q_c, where Q̄ denotes the closure of Q. Tsirelson's problem consists in deciding whether this inclusion relation is strict, i.e., whether or not Q̄ = Q_c. This problem only appears in infinite dimensions: when the Hilbert space in the definition of Q is constrained to be finite-dimensional, the closure of the corresponding set equals Q̄. In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed a result in quantum complexity theory that would imply that Q̄ ≠ Q_c, thus solving Tsirelson's problem. Tsirelson's problem can be shown equivalent to Connes' embedding problem, a famous conjecture in the theory of operator algebras. Characterization of quantum correlations Since the dimensions of H_A and H_B are, in principle, unbounded, determining whether a given box P(a,b|x,y) admits a quantum realization is a complicated problem. In fact, the dual problem of establishing whether a quantum box can have a perfect score at a non-local game is known to be undecidable. Moreover, the problem of deciding whether P(a,b|x,y) can be approximated by a quantum system with precision ε is NP-hard. Characterizing quantum boxes is equivalent to characterizing the cone of completely positive semidefinite matrices under a set of linear constraints. For small fixed dimensions d_A, d_B, one can explore, using variational methods, whether P(a,b|x,y) can be realized in a bipartite quantum system H_A ⊗ H_B with dim(H_A) = d_A, dim(H_B) = d_B. That method, however, can just be used to prove the realizability of P(a,b|x,y), and not its unrealizability with quantum systems. To prove unrealizability, the best-known method is the Navascués–Pironio–Acín (NPA) hierarchy. This is an infinite decreasing sequence of sets of correlations Q^1 ⊇ Q^2 ⊇ Q^3 ⊇ ... with the properties: If P(a,b|x,y) ∈ Q_c, then P(a,b|x,y) ∈ Q^k for all k. If P(a,b|x,y) ∉ Q_c, then there exists k such that P(a,b|x,y) ∉ Q^k. For any k, deciding whether P(a,b|x,y) ∈ Q^k can be cast as a semidefinite program. The NPA hierarchy thus provides a computational characterization, not of Q̄, but of Q_c.
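For intuition on where the 2√2 above comes from, here is the standard operator-algebra sketch (textbook material, with notation chosen for this example): take dichotomic observables A_0, A_1, B_0, B_1 squaring to the identity, with the A's acting on Alice's factor and the B's on Bob's. One checks the identity below; since each commutator has operator norm at most 2, the CHSH operator has norm at most 2√2.

```latex
% CHSH operator and the identity behind Tsirelson's bound:
\mathcal{B} = A_0 \otimes B_0 + A_0 \otimes B_1 + A_1 \otimes B_0 - A_1 \otimes B_1,
\qquad
\mathcal{B}^2 = 4\,\mathbb{1} - [A_0, A_1] \otimes [B_0, B_1].
% Since ||[A_0,A_1]|| <= 2 and ||[B_0,B_1]|| <= 2, we get ||B^2|| <= 8,
% hence |<B>| <= 2*sqrt(2) for any quantum state.
```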
If Q̄ ≠ Q_c (as claimed by Ji, Natarajan, Vidick, Wright, and Yuen), then a new method to detect the non-realizability of correlations in Q is needed. If Tsirelson's problem were solved in the affirmative, namely, Q̄ = Q_c, then the above two methods would provide a practical characterization of Q̄. The physics of supra-quantum correlations The works listed above describe what the quantum set of correlations looks like, but they do not explain why. Are quantum correlations unavoidable, even in post-quantum physical theories, or on the contrary, could there exist correlations outside Q̄ which nonetheless do not lead to any unphysical operational behavior? In their seminal 1994 paper, Popescu and Rohrlich explore whether quantum correlations can be explained by appealing to relativistic causality alone. Namely, whether any hypothetical box outside Q̄ would allow building a device capable of transmitting information faster than the speed of light. At the level of correlations between two parties, Einstein's causality translates into the requirement that Alice's measurement choice should not affect Bob's statistics, and vice versa. Otherwise, Alice (Bob) could signal Bob (Alice) instantaneously by choosing her (his) measurement setting appropriately. Mathematically, Popescu and Rohrlich's no-signalling conditions are: Σ_a P(a,b|x,y) = Σ_a P(a,b|x′,y) for all b, y, x, x′, and Σ_b P(a,b|x,y) = Σ_b P(a,b|x,y′) for all a, x, y, y′. Like the set of classical boxes, when represented in probability space, the set of no-signalling boxes forms a polytope. Popescu and Rohrlich identified a box that, while complying with the no-signalling conditions, violates Tsirelson's bound, and is thus unrealizable in quantum physics. Dubbed the PR-box, it can be written as: P(a,b|x,y) = 1/2 if a ⊕ b = x·y, and P(a,b|x,y) = 0 otherwise. Here a, b, x, y take values in {0, 1}, and ⊕ denotes the sum modulo two. It can be verified that the CHSH value of this box is 4 (as opposed to the Tsirelson bound of 2√2). This box had been identified earlier, by Rastall and Khalfin and Tsirelson. In view of this mismatch, Popescu and Rohrlich pose the problem of identifying a physical principle, stronger than the no-signalling conditions, that allows deriving the set of quantum correlations. Several proposals followed: Non-trivial communication complexity (NTCC). This principle stipulates that nonlocal correlations should not be so strong as to allow two parties to solve all 1-way communication problems with some probability greater than 1/2 using just one bit of communication. It can be proven that any box violating Tsirelson's bound by more than a certain amount is incompatible with NTCC. No Advantage for Nonlocal Computation (NANLC). The following scenario is considered: given a Boolean function f, two parties are distributed the strings of bits z_A, z_B and asked to output the bits a, b so that a ⊕ b is a good guess for f(z_A ⊕ z_B). The principle of NANLC states that non-local boxes should not give the two parties any advantage to play this game. It is proven that any box violating Tsirelson's bound would provide such an advantage. Information Causality (IC). The starting point is a bipartite communication scenario where one of the parts (Alice) is handed a random string of bits (x_1, ..., x_n). The second part, Bob, receives a random number k ∈ {1, ..., n}. Their goal is to transmit Bob the bit x_k, for which purpose Alice is allowed to transmit Bob m bits. The principle of IC states that the sum over k of the mutual information between Alice's bit x_k and Bob's guess cannot exceed the number m of bits transmitted by Alice. It is shown that any box violating Tsirelson's bound would allow two parties to violate IC. Macroscopic Locality (ML).
In the considered setup, two separate parties conduct extensive low-resolution measurements over a large number of independently prepared pairs of correlated particles. ML states that any such "macroscopic" experiment must admit a local hidden variable model. It is proven that any microscopic experiment capable of violating Tsirelson's bound would also violate standard Bell nonlocality when brought to the macroscopic scale. Besides Tsirelson's bound, the principle of ML fully recovers the set of all two-point quantum correlators. Local Orthogonality (LO). This principle applies to multipartite Bell scenarios, where n parties respectively conduct the experiments x_1, ..., x_n in their local labs. They respectively obtain the outcomes a_1, ..., a_n. The pair of vectors ((a_1, ..., a_n), (x_1, ..., x_n)) is called an event. Two events are said to be locally orthogonal if there exists k such that the two events have the same setting x_k but different outcomes a_k. The principle of LO states that, for any multipartite box, the sum of the probabilities of any set of pair-wise locally orthogonal events cannot exceed 1. It is proven that any bipartite box violating Tsirelson's bound by more than a certain amount violates LO. All these principles can be experimentally falsified under the assumption that we can decide if two or more events are space-like separated. This sets this research program aside from the axiomatic reconstruction of quantum mechanics via Generalized Probabilistic Theories. The works above rely on the implicit assumption that any physical set of correlations must be closed under wirings. This means that any effective box built by combining the inputs and outputs of a number of boxes within the considered set must also belong to the set. Closure under wirings does not seem to enforce any limit on the maximum value of CHSH. However, it is not a void principle: on the contrary, it has been shown that many simple, intuitive families of sets of correlations in probability space happen to violate it. Originally, it was unknown whether any of these principles (or a subset thereof) was strong enough to derive all the constraints defining Q̄. This state of affairs continued for some years until the construction of the almost quantum set Q̃. Q̃ is a set of correlations that is closed under wirings and can be characterized via semidefinite programming. It contains all correlations in Q̄, but also some non-quantum boxes. Remarkably, all boxes within the almost quantum set are shown to be compatible with the principles of NTCC, NANLC, ML and LO. There is also numerical evidence that almost-quantum boxes comply with IC. It seems, therefore, that, even when the above principles are taken together, they do not suffice to single out the quantum set in the simplest Bell scenario of two parties, two inputs and two outputs. Device independent protocols Nonlocality can be exploited to conduct quantum information tasks which do not rely on the knowledge of the inner workings of the prepare-and-measure apparatuses involved in the experiment. The security or reliability of any such protocol just depends on the strength of the experimentally measured correlations P(a,b|x,y). These protocols are termed device-independent. Device-independent quantum key distribution The first device-independent protocol proposed was device-independent quantum key distribution (QKD). In this primitive, two distant parties, Alice and Bob, are distributed an entangled quantum state, which they probe, thus obtaining the statistics P(a,b|x,y).
Based on how non-local the box P(a,b|x,y) happens to be, Alice and Bob estimate how much knowledge an external quantum adversary Eve (the eavesdropper) could possess on the value of Alice and Bob's outputs. This estimation allows them to devise a reconciliation protocol at the end of which Alice and Bob share a perfectly correlated one-time pad of which Eve has no information whatsoever. The one-time pad can then be used to transmit a secret message through a public channel. Although the first security analyses on device-independent QKD relied on Eve carrying out a specific family of attacks, all such protocols have recently been proven unconditionally secure. Device-independent randomness certification, expansion and amplification Nonlocality can be used to certify that the outcomes of one of the parties in a Bell experiment are partially unknown to an external adversary. By feeding a partially random seed to several non-local boxes and processing the outputs, one can end up with a longer (potentially unbounded) string of comparable randomness, or with a shorter but more random string. This last primitive can be proven impossible in a classical setting. Device-independent (DI) randomness certification, expansion, and amplification are techniques used to generate high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate them. These techniques have critical applications in cryptography, where high-quality random numbers are essential for ensuring the security of cryptographic protocols. Randomness certification is the process of verifying that the output of a random number generator is truly random and has not been tampered with by an adversary. DI randomness certification performs this verification without making assumptions about the underlying devices that generate the random numbers. Instead, randomness is certified by observing correlations between the outputs of different devices that are generated using the same physical process. Recent research has demonstrated the feasibility of DI randomness certification using entangled quantum systems, such as photons or electrons. Randomness expansion is the process of taking a small initial random seed and expanding it into a much larger sequence of random numbers. In DI randomness expansion, the expansion is done using measurements of quantum systems that are prepared in a highly entangled state. The security of the expansion is guaranteed by the laws of quantum mechanics, which make it impossible for an adversary to predict the expansion output. Recent research has shown that DI randomness expansion can be achieved using entangled photon pairs and measurement devices that violate a Bell inequality. Randomness amplification is the process of taking a small initial random seed and increasing its randomness by using a cryptographic algorithm. In DI randomness amplification, this process is done using entanglement properties and quantum mechanics. The security of the amplification is guaranteed by the fact that any attempt by an adversary to manipulate the algorithm's output will inevitably introduce errors that can be detected and corrected. Recent research has demonstrated the feasibility of DI randomness amplification using quantum entanglement and the violation of a Bell inequality.
DI randomness certification, expansion, and amplification are powerful techniques for generating high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate them. These techniques have critical applications in cryptography and are likely to become increasingly important as quantum computing technology advances. In addition, a milder approach called semi-DI exists, where random numbers can be generated under some assumptions on the working principle of the devices, environment, dimension, energy, etc.; it benefits from ease of implementation and a high generation rate. Self-testing Sometimes, the box P(a,b|x,y) shared by Alice and Bob is such that it only admits a unique quantum realization. This means that there exist measurement operators and a quantum state giving rise to P(a,b|x,y) such that any other physical realization of P(a,b|x,y) is connected to this canonical realization via local unitary transformations. This phenomenon, which can be interpreted as an instance of device-independent quantum tomography, was first pointed out by Tsirelson and named self-testing by Mayers and Yao. Self-testing is known to be robust against systematic noise, i.e., if the experimentally measured statistics are close enough to a self-testing box, one can still determine the underlying state and measurement operators up to error bars. Dimension witnesses The degree of non-locality of a quantum box P(a,b|x,y) can also provide lower bounds on the Hilbert space dimension of the local systems accessible to Alice and Bob. This problem is equivalent to deciding the existence of a matrix with low completely positive semidefinite rank. Finding lower bounds on the Hilbert space dimension based on statistics happens to be a hard task, and current general methods only provide very low estimates. However, a Bell scenario with five inputs and three outputs suffices to provide arbitrarily high lower bounds on the underlying Hilbert space dimension. Quantum communication protocols which assume a knowledge of the local dimension of Alice and Bob's systems, but otherwise do not make claims on the mathematical description of the preparation and measuring devices involved, are termed semi-device independent protocols. Currently, there exist semi-device independent protocols for quantum key distribution and randomness expansion.
Physical sciences
Quantum mechanics
Physics
4293361
https://en.wikipedia.org/wiki/Astronomical%20interferometer
Astronomical interferometer
An astronomical interferometer or telescope array is a set of separate telescopes, mirror segments, or radio telescope antennas that work together as a single telescope to provide higher resolution images of astronomical objects such as stars, nebulas and galaxies by means of interferometry. The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation, called baseline, between the component telescopes. The main drawback is that it does not collect as much light as the complete instrument's mirror. Thus it is mainly useful for fine resolution of more luminous astronomical objects, such as close binary stars. Another drawback is that the maximum angular size of a detectable emission source is limited by the minimum gap between detectors in the collector array. Interferometry is most widely used in radio astronomy, in which signals from separate radio telescopes are combined. A mathematical signal processing technique called aperture synthesis is used to combine the separate signals to create high-resolution images. In Very Long Baseline Interferometry (VLBI) radio telescopes separated by thousands of kilometers are combined to form a radio interferometer with a resolution which would be given by a hypothetical single dish with an aperture thousands of kilometers in diameter. At the shorter wavelengths used in infrared astronomy and optical astronomy it is more difficult to combine the light from separate telescopes, because the light must be kept coherent within a fraction of a wavelength over long optical paths, requiring very precise optics. Practical infrared and optical astronomical interferometers have only recently been developed, and are at the cutting edge of astronomical research. At optical wavelengths, aperture synthesis allows the atmospheric seeing resolution limit to be overcome, allowing the angular resolution to reach the diffraction limit of the optics. Astronomical interferometers can produce higher resolution astronomical images than any other type of telescope. At radio wavelengths, image resolutions of a few micro-arcseconds have been obtained, and image resolutions of a fractional milliarcsecond have been achieved at visible and infrared wavelengths. One simple layout of an astronomical interferometer is a parabolic arrangement of mirror pieces, giving a partially complete reflecting telescope but with a "sparse" or "dilute" aperture. In fact, the parabolic arrangement of the mirrors is not important, as long as the optical path lengths from the astronomical object to the beam combiner (focus) are the same as would be given by the complete mirror case. Instead, most existing arrays use a planar geometry, and Labeyrie's hypertelescope will use a spherical geometry. History One of the first uses of optical interferometry was applied by the Michelson stellar interferometer on the Mount Wilson Observatory's reflector telescope to measure the diameters of stars. The red giant star Betelgeuse was the first to have its diameter determined in this way on December 13, 1920. In the 1940s radio interferometry was used to perform the first high resolution radio astronomy observations. For the next three decades astronomical interferometry research was dominated by research at radio wavelengths, leading to the development of large instruments such as the Very Large Array and the Atacama Large Millimeter Array. 
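The headline property described above — angular resolution set by the baseline rather than by the individual apertures — follows from the diffraction relation θ ≈ λ/B. A quick illustrative calculation (example numbers chosen for this sketch, not taken from the article):

```python
import math

def resolution_mas(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution lambda/B, in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600 * 1000  # radians -> mas

# A single 8 m telescope versus a 130 m optical baseline at 2.2 microns,
# and a VLBI-style 8000 km radio baseline at 1.3 mm (illustrative values):
print(resolution_mas(2.2e-6, 8.0))     # ~57 mas   : single telescope
print(resolution_mas(2.2e-6, 130.0))   # ~3.5 mas  : optical interferometer
print(resolution_mas(1.3e-3, 8.0e6))   # ~0.034 mas: mm-wavelength VLBI
```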
Optical/infrared interferometry was extended to measurements using separated telescopes by Johnson, Betz and Townes (1974) in the infrared and by Labeyrie (1975) in the visible. In the late 1970s improvements in computer processing allowed for the first "fringe-tracking" interferometer, which operates fast enough to follow the blurring effects of astronomical seeing, leading to the Mk I, II and III series of interferometers. Similar techniques have now been applied at other astronomical telescope arrays, including the Keck Interferometer and the Palomar Testbed Interferometer. In the 1980s the aperture synthesis interferometric imaging technique was extended to visible light and infrared astronomy by the Cavendish Astrophysics Group, providing the first very high resolution images of nearby stars. In 1995 this technique was demonstrated on an array of separate optical telescopes for the first time, allowing a further improvement in resolution, and allowing even higher resolution imaging of stellar surfaces. Software packages such as BSMEM or MIRA are used to convert the measured visibility amplitudes and closure phases into astronomical images. The same techniques have now been applied at a number of other astronomical telescope arrays, including the Navy Precision Optical Interferometer, the Infrared Spatial Interferometer and the IOTA array. A number of other interferometers have made closure phase measurements and are expected to produce their first images soon, including the VLTI, the CHARA array and Le Coroller and Dejonghe's Hypertelescope prototype. If completed, the MRO Interferometer with up to ten movable telescopes will produce some of the first higher-fidelity images from a long baseline interferometer. The Navy Optical Interferometer took the first step in this direction in 1996, achieving a 3-way synthesis of an image of Mizar; then a first-ever six-way synthesis of Eta Virginis in 2002; and most recently "closure phase" measurements as a step toward the first synthesized images of geostationary satellites. Modern astronomical interferometry Astronomical interferometry is principally conducted using Michelson (and sometimes other types of) interferometers. The principal operational interferometric observatories which use this type of instrumentation include VLTI, NPOI, and CHARA. Current projects will use interferometers to search for extrasolar planets, either by astrometric measurements of the reciprocal motion of the star (as used by the Palomar Testbed Interferometer and the VLTI), through the use of nulling (as will be used by the Keck Interferometer and Darwin), or through direct imaging (as proposed for Labeyrie's Hypertelescope). Engineers at the European Southern Observatory (ESO) designed the Very Large Telescope (VLT) so that it can also be used as an interferometer. Along with the four unit telescopes, four mobile 1.8-metre auxiliary telescopes (ATs) were included in the overall VLT concept to form the Very Large Telescope Interferometer (VLTI). The ATs can move between 30 different stations, and at present, the telescopes can form groups of two or three for interferometry. When using interferometry, a complex system of mirrors brings the light from the different telescopes to the astronomical instruments where it is combined and processed. This is technically demanding as the light paths must be kept equal to within 1/1000 mm (the same order as the wavelength of light) over distances of a few hundred metres.
For the Unit Telescopes, this gives an equivalent mirror diameter of up to 130 metres, and when combining the auxiliary telescopes, equivalent mirror diameters of up to 200 metres can be achieved. This is up to 25 times better than the resolution of a single VLT unit telescope. The VLTI gives astronomers the ability to study celestial objects in unprecedented detail. It is possible to see details on the surfaces of stars and even to study the environment close to a black hole. With a spatial resolution of 4 milliarcseconds, the VLTI has allowed astronomers to obtain one of the sharpest images ever of a star. This is equivalent to resolving the head of a screw at a distance of hundreds of kilometres. Notable 1990s results included the Mark III measurement of diameters of 100 stars and many accurate stellar positions, COAST and NPOI producing many very high resolution images, and the Infrared Spatial Interferometer measuring stars in the mid-infrared for the first time. Additional results include direct measurements of the sizes of, and distances to, Cepheid variable stars and young stellar objects. High on the Chajnantor plateau in the Chilean Andes, the European Southern Observatory (ESO), together with its international partners, is building ALMA, which will gather radiation from some of the coldest objects in the Universe. ALMA will be a single telescope of a new design, composed initially of 66 high-precision antennas and operating at wavelengths of 0.3 to 9.6 mm. Its main 12-metre array will have fifty antennas, 12 metres in diameter, acting together as a single telescope – an interferometer. An additional compact array of four 12-metre and twelve 7-metre antennas will complement this. The antennas can be spread across the desert plateau over distances from 150 metres to 16 kilometres, which will give ALMA a powerful variable "zoom". It will be able to probe the Universe at millimetre and submillimetre wavelengths with unprecedented sensitivity and resolution, with a resolution up to ten times greater than the Hubble Space Telescope, and complementing images made with the VLT interferometer. Optical interferometers are mostly seen by astronomers as very specialized instruments, capable of a very limited range of observations. It is often said that an interferometer achieves the effect of a telescope the size of the distance between the apertures; this is only true in the limited sense of angular resolution. The amount of light gathered, and hence the dimmest object that can be seen, depends on the real aperture size, so an interferometer would offer little improvement as the image is dim (the thinned-array curse). The combined effects of limited aperture area and atmospheric turbulence generally limit interferometers to observations of comparatively bright stars and active galactic nuclei. However, they have proven useful for making very high precision measurements of simple stellar parameters such as size and position (astrometry), for imaging the nearest giant stars and for probing the cores of nearby active galaxies. For details of individual instruments, see the list of astronomical interferometers at visible and infrared wavelengths. At radio wavelengths, interferometers such as the Very Large Array and MERLIN have been in operation for many years. The distances between telescopes in these arrays range up to a couple of hundred kilometres, while arrays with much longer baselines utilize the techniques of Very Long Baseline Interferometry. In the (sub)-millimetre, existing arrays include the Submillimeter Array and the IRAM Plateau de Bure facility.
The Atacama Large Millimeter Array has been fully operational since March 2013. Max Tegmark and Matias Zaldarriaga have proposed the Fast Fourier Transform Telescope, which would rely on extensive computer power rather than standard lenses and mirrors. If Moore's law continues, such designs may become practical and cheap in a few years. Advances in quantum computing might eventually allow more extensive use of interferometry, as newer proposals suggest.
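Aperture synthesis, invoked throughout this article, reconstructs an image from samples of the sky's Fourier transform measured on the array's baselines. Here is a toy sketch of the idea (all details are illustrative, not a description of any real pipeline):

```python
import numpy as np

n = 128
# Toy sky: two point sources of different brightness
sky = np.zeros((n, n))
sky[60, 60] = 1.0
sky[70, 75] = 0.7

# Each telescope-pair baseline samples one spatial frequency (u, v) of the sky.
vis = np.fft.fftshift(np.fft.fft2(sky))          # "true" visibilities

# A sparse array measures only some (u, v) points: mask out the rest.
rng = np.random.default_rng(0)
mask = rng.random((n, n)) < 0.15                 # ~15% (u,v) coverage
sampled_vis = np.where(mask, vis, 0)

# Inverse transform of the sampled visibilities gives the "dirty image",
# which deconvolution algorithms such as CLEAN then sharpen.
dirty_image = np.abs(np.fft.ifft2(np.fft.ifftshift(sampled_vis)))
print(dirty_image.max())
```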
Technology
Telescope
null
3152275
https://en.wikipedia.org/wiki/Yam%20%28vegetable%29
Yam (vegetable)
Yam is the common name for some plant species in the genus Dioscorea (family Dioscoreaceae) that form edible tubers (some other species in the genus being toxic). Yams are perennial herbaceous vines native to Africa, Asia, and the Americas and cultivated for the consumption of their starchy tubers in many temperate and tropical regions. The tubers themselves, also called "yams", come in a variety of forms owing to numerous cultivars and related species. Description A monocot related to lilies and grasses, yams are vigorous herbaceous, perennially growing vines from a tuber. Some 870 species of yams are known, a few of which are widely grown for their edible tuber but others of which are toxic (such as D. communis). Yam plants can grow up to 15 m (49 ft) in length. The tuber may grow more than a metre deep into the soil. The plant disperses by seed. The edible tuber has a rough skin that is difficult to peel but readily softened by cooking. The skins vary in color from dark brown to light pink. The majority, or meat, of the vegetable is composed of a much softer substance ranging in color from white or yellow to purple or pink in mature yams. Etymology The name "yam" appears to derive from Portuguese inhame or Canarian Spanish ñame, which in turn derived, during the period of trade, from Fula, one of the West African languages. However, in Portuguese, this name commonly refers to the taro plant (Colocasia esculenta) from the genus Colocasia, as opposed to Dioscorea. The main derivations borrow from verbs meaning "to eat". True yams have various common names across multiple world regions. In some places, other (unrelated) root vegetables are sometimes referred to as "yams", including: In the United States, sweet potatoes (Ipomoea batatas), especially those with orange flesh, are often referred to as "yams". In Australia, the tubers of Microseris lanceolata, or yam daisy, were a staple food of Aboriginal Australians in some regions. In New Zealand, oca (Oxalis tuberosa) is typically referred to as "yam". In Malaysia and Singapore, taro (Colocasia esculenta) is referred to as "yam". In Africa, South and Southeast Asia, as well as the tropical Pacific islands, Amorphophallus paeoniifolius is grown and known as "elephant foot yam". Distribution and habitat Yams are native to Africa, Asia, and the Americas. Three species were independently domesticated on those continents: D. rotundata (Africa), D. alata (Asia), and D. trifida (South America). Ecology Some yams are invasive plants, often considered a noxious weed outside cultivated areas. Cultivation Yams are cultivated for the consumption of their starchy tubers in many temperate and tropical regions, especially in West Africa, South America and the Caribbean, Asia, and Oceania. About 95% of yam crops are grown in Africa. A yam crop begins when whole seed tubers or tuber portions are planted into mounds or ridges at the beginning of the rainy season. The crop yield depends on how and where the sets are planted, sizes of mounds, interplant spacing, provision of stakes for the resultant plants, yam species, and tuber sizes desired at harvest. Small-scale farmers in West and Central Africa often intercrop yams with cereals and vegetables. The seed yams are perishable and bulky to transport. Farmers who do not buy new seed yams usually set aside up to 30% of their harvest for planting the next year. Yam crops face pressure from a range of insect pests and fungal and viral diseases, as well as nematodes.
Their growth and dormant phases correspond respectively to the wet season and the dry season. For maximum yield, the yams require a humid tropical environment, with an annual rainfall of over 1,500 mm distributed uniformly throughout the growing season. White, yellow, and water yams typically produce a single large tuber per year, generally weighing 5 to 10 kg (11 to 22 lb). Yams suffer from relatively few pests and diseases. There is an anthracnose caused by Colletotrichum gloeosporioides which is widely distributed around the world's growing regions. Winch et al. (1984) find that C. gloeosporioides afflicts a large number of Dioscorea spp. Despite the high labor requirements and production costs, consumer demand for yam is high in certain subregions of Africa, making yam cultivation quite profitable to certain farmers. Major cultivated species Many cultivated species of Dioscorea yams are found throughout the humid tropics. The most economically important are discussed below. Non-Dioscorea tubers that were historically important in Africa include Plectranthus rotundifolius (the Hausa potato) and P. esculentus (the Livingstone potato); these two tuber crops have now been largely displaced by the introduction of cassava. D. rotundata and D. cayenensis D. rotundata, the white yam, and D. cayenensis, the yellow yam, are native to Africa. They are the most important cultivated yams. In the past, they were considered two separate species, but most taxonomists now regard them as the same species. Over 200 varieties between them are cultivated. The white yam tuber is roughly cylindrical in shape, the skin is smooth and brown, and the flesh is usually white and firm. Yellow yam has yellow flesh, caused by the presence of carotenoids. It looks similar to the white yam in outer appearance; its tuber skin is usually a bit firmer and less extensively grooved. The yellow yam has a longer period of vegetation and a shorter dormancy than white yam. The Kokoro variety is important in making dried yam chips. They are large plants; the vines can be as long as 10 to 12 m (33 to 39 ft). The tubers most often weigh about 2.5 to 5 kg (6 to 11 lb) each, but can weigh as much as 25 kg (55 lb). After 7 to 12 months' growth, the tubers are harvested. In Africa, most are pounded into a paste to make the traditional dish of "pounded yam", known as iyan. D. alata D. alata, called purple yam (not to be confused with the Okinawan purple "yam", which is a sweet potato), greater yam, winged yam, water yam, and (ambiguously) white yam, was first cultivated in Southeast Asia. Although not grown in the same quantities as the African yams, it has the largest distribution worldwide of any cultivated yam, being grown in Asia, the Pacific islands, Africa, and the West Indies. Even in Africa, the popularity of water yam is second only to white yam. The tuber shape is generally cylindrical, but can vary. The tuber flesh is white and watery in texture. D. alata and D. esculenta (lesser yam) were important staple crops to the seafaring Austronesian cultures. They were carried along with the Austronesian migrations as canoe plants, from Island Southeast Asia to as far as Madagascar and Polynesia. D. polystachya D. polystachya, Chinese yam, is native to China. The Chinese yam plant is somewhat smaller than the African species, with vines about 3 m (10 ft) long. It is tolerant of frost and can be grown in much cooler conditions than other yams. It is also grown in Korea and Japan. It was introduced to Europe in the 19th century, when the potato crop there was falling victim to disease, and is still grown in France for the Asian food market.
The tubers are harvested after about 6 months of growth. Some are eaten right after harvesting and some are used as ingredients for other dishes, including noodles, and for traditional medicines. D. bulbifera D. bulbifera, the air potato, is found in both Africa and Asia, with slight differences between those found in each place. It is a large vine, 6 m (20 ft) or more in length. It produces tubers, but the bulbils which grow at the base of its leaves are the more important food product. They are about the size of potatoes (hence the name "air potato"), weighing from 0.5 to 2 kg (1 to 4.5 lb). Some varieties can be eaten raw, while some require soaking or boiling for detoxification before eating. It is not grown much commercially since the flavor of other yams is preferred by most people. However, it is popular in home vegetable gardens because it produces a crop after only four months of growth and continues producing for the life of the vine, as long as two years. Also, the bulbils are easy to harvest and cook. In 1905, the air potato was introduced to Florida and has since become an invasive species in much of the state. Its rapid growth crowds out native vegetation and it is very difficult to remove, since it can grow back from the tubers, and new vines can grow from the bulbils even after being cut down or burned. D. esculenta D. esculenta, the lesser yam, was one of the first yam species cultivated. It is native to Southeast Asia and is the third-most commonly cultivated species there, although it is cultivated very little in other parts of the world. Its vines seldom reach more than 3 m (10 ft) in length and the tubers are fairly small in most varieties. The tubers are eaten baked, boiled, or fried much like potatoes. Because of the small size of the tubers, mechanical cultivation is possible; this, along with its easy preparation and good flavor, could help the lesser yam to become more popular in the future. D. dumetorum D. dumetorum, the bitter yam, is popular as a vegetable in parts of West Africa, in part because its cultivation requires less labor than other yams. The wild forms are very toxic and are sometimes used to poison animals when mixed with bait. It is said that they have also been used for criminal purposes. D. trifida D. trifida, the cush-cush yam, is native to the Guyana region of South America and is the most important cultivated New World yam. Since they originated in tropical rainforest conditions, their growth cycle is less related to seasonal changes than other yams. Because of their relative ease of cultivation and their good flavor, they are considered to have a great potential for increased production. Wild taxa D. hirtiflora subsp. pedicellata D. hirtiflora subsp. pedicellata, lusala, busala or lwidi, is native to tropical Africa. It is widely harvested and eaten in Southern Zambia, where it grows in open forest areas. There, it is an important addition to the diets of almost all rural households from March to September, and a source of income for over half of them. Research on propagation of this subspecies to alleviate the threat from wild harvest has been successful. D. japonica D. japonica, known as East Asian mountain yam, yamaimo, or Japanese mountain yam, is a type of yam (Dioscorea) native to Japan. Its other common names include cham ma, Chinese yam, dang ma, glutinous yam, jinenjo, rìběn shǔyù (pinyin), shan yao, Taiwanese yam, and wild yam. Varieties include D. japonica Thunb. var. pseudojaponica Yamamoto, D. japonica Thunb. var. pseudojaponica (Hayata) Yamam., D. japonica var. japonica, D. japonica var.
oldhamii and D. japonica var. pilifera. It is widely cultivated as a food crop in Japan, Korea, China, and neighbouring islands. Jinenjo is a related variety of Japanese yam that is used as an ingredient in soba noodles. Harvesting Yams in West Africa are typically harvested by hand, using sticks, spades, or diggers. Wood-based tools are preferred to metallic tools as they are less likely to damage the fragile tubers; however, wood tools need frequent replacement. Yam harvesting is labor-intensive and physically demanding. Tuber harvesting involves standing, bending, squatting, and sometimes sitting on the ground, depending on the size of the mound, the size of the tuber, and the depth of tuber penetration. Care must be taken to avoid damage to the tuber, because damaged tubers do not store well and spoil rapidly. Some farmers use staking and mixed cropping, a practice that complicates harvesting in some cases. In forested areas, tubers grow where other tree roots are present. Harvesting the tuber then involves the additional step of freeing it from other roots. This often causes tuber damage. Aerial tubers or bulbils are harvested by manual plucking from the vine. Yields might improve, and the cost of yam production might be lower, if mechanization were to be developed and adopted. However, current crop production practices and the species used pose considerable hurdles to successful mechanization of yam production, particularly for small-scale rural farmers. Extensive changes in traditional cultivation practices, such as mixed cropping, may be required. Modification of current tuber harvesting equipment is necessary given yam tuber architecture and its different physical properties. Production In 2020, world production of yams was 75 million tonnes, led by Nigeria with 67% of the total (table). Toxicity Unlike cassava, most varieties of edible, mature, cultivated yam do not contain toxic compounds. However, there are exceptions. Bitter compounds tend to accumulate in immature tuber tissues of white and yellow yams. These may be polyphenols or tannin-like compounds. Wild forms of bitter yams (D. dumetorum) do contain some toxins, such as dihydrodioscorine, that taste bitter; hence they are referred to as bitter yam. Bitter yams are not normally eaten except at times of desperation in poor countries and in times of local food scarcity. They are usually detoxified by soaking in a vessel of salt water, in cold or hot fresh water, or in a stream. The bitter compounds in these yams are water-soluble alkaloids which, on ingestion, produce severe and distressing symptoms. Severe cases of alkaloid intoxication may prove fatal. Aerial or potato yams (D. bulbifera) have antinutritional factors. In Asia, detoxification methods, involving water extraction, fermentation, and roasting of the grated tuber, are used for bitter cultivars of this yam. The bitter compounds in these yams, also known locally as air potato, include diosbulbin and possibly saponins, such as diosgenin. In Indonesia, an extract of air potato is used in the preparation of arrow poison. Uses Nutrition Raw yam has only moderate nutrient density, with appreciable content (10% or more of the Daily Value, DV) limited to potassium, vitamin B6, manganese, thiamin, dietary fiber, and vitamin C (table). But raw yam has the highest potassium levels amongst the 10 major staple foods of the world (see nutritional chart). Yam supplies 118 calories per 100 grams.
Yam generally has a lower glycemic index than potato products, about 54% that of glucose per 150-gram serving. The protein content and quality of roots and tubers is lower than that of other food staples, with the content of yam and potato being around 2% on a fresh-weight basis. Yams, with cassava, provide a much greater proportion of the protein intake in Africa, ranging from 5.9% in East and South Africa to about 15.9% in humid West Africa. As a relatively low-protein food, yam is not a good source of essential amino acids. Experts emphasize the need to supplement a yam-dominant diet with more protein-rich foods to support healthy growth in children.

Yam is an important dietary element for Nigerian and West African people. It contributes more than 200 calories per person per day for more than 150 million people in West Africa, and is an important source of income. Yam is an attractive crop on poor farms with limited resources. It is rich in starch, and can be prepared in many ways. It is available all year round, unlike other, less reliable, seasonal crops. These characteristics make yam a preferred food and a culturally important food-security crop in some sub-Saharan African countries.

Comparison to other staple foods
The following table shows the nutrient content of yam and major staple foods in raw harvested form on a dry-weight basis, to account for their different water contents. Raw forms, however, are not edible and cannot be digested; they must be sprouted, or prepared and cooked, for human consumption. In sprouted or cooked form, the relative nutritional and antinutritional contents of each of these staples are remarkably different from those of the raw forms.

Storage
Roots and tubers such as yam are living organisms. When stored, they continue to respire, which results in the oxidation of the starch (a polymer of glucose) contained in the cells of the tuber, converting it into water, carbon dioxide, and heat energy (see the schematic equation below). During this transformation of the starch, the dry matter of the tuber is reduced. Amongst the major roots and tubers, properly stored yam is considered to be the least perishable. Successful storage of yams requires:
- initial selection of sound and healthy yams
- proper curing, if possible combined with fungicide treatment
- adequate ventilation to remove the heat generated by respiration of the tubers
- regular inspection during storage, and removal of rotting tubers and any sprouts that develop
- protection from direct sunlight and rain

Storing yam at low temperature reduces its respiration rate. However, temperatures below cause damage through chilling, leading to a breakdown of internal tissues, increased water loss, and greater susceptibility to decay. The symptoms of chilling injury are not always obvious while the tubers are still in cold storage; the injury becomes noticeable as soon as the tubers are restored to ambient temperatures. The best temperature to store yams is between , with high-technology-controlled humidity and climatic conditions, after a process of curing. Most countries that grow yams as a staple food are too poor to afford high-technology storage systems. Sprouting rapidly increases a tuber's respiration rate and accelerates the rate at which its food value decreases. Certain cultivars of yams store better than others. The yams easiest to store are those adapted to arid climates, where they tend to stay in a dormant, low-respiration state much longer than yam breeds adapted to humid tropical lands, where they do not need dormancy.
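In outline, the respiration described above is the slow oxidation of the glucose units of starch; this is standard plant biochemistry rather than anything specific to the sources cited here. Written per glucose unit of the starch polymer:

$$ (\mathrm{C_6H_{10}O_5})_n + 6n\,\mathrm{O_2} \longrightarrow 6n\,\mathrm{CO_2} + 5n\,\mathrm{H_2O} + \text{heat} $$

Every glucose unit respired is dry matter the tuber can no longer supply as food, which is why cool temperatures (slowing respiration) and good ventilation (removing its heat) extend storage life.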
Yellow yam and cush-cush yam, by nature, have much shorter dormancy periods than water yam, white yam, or lesser yam. Storage losses for yams are very high in Africa, with bacteria, insects, nematodes, and mammals being the most common storage pests.

Consumption
Yams are consumed in a variety of preparations, such as flour or whole vegetable pieces, across their range of distribution in Asia, Africa, North America, Central America, the Caribbean, South America, and Oceania.

Africa
Yams of African species must be cooked to be safely eaten, because various natural substances in yams can cause illness if consumed raw. The most common cooking methods in Western and Central Africa are boiling, frying, or roasting. Among the Akan of Ghana, boiled yam can be mashed with palm oil into eto, in a similar manner to the plantain dish matoke, and is served with eggs. The boiled yam can also be pounded with a traditional mortar and pestle to create a thick, starchy paste known as iyan (pounded yam), which is eaten with traditional sauces such as egusi and palm nut soup. Another method of consumption is to leave the raw yam pieces to dry in the sun. When dry, the pieces turn a dark brown color. These are then milled to create a brown powder known in Nigeria as elubo. The powder can be mixed with boiling water to create a thick, starchy paste, a kind of pudding known as amala, which is then eaten with local soups and sauces.

Yams are a staple agricultural commodity of cultural significance in West Africa, where over 95% of the world's yam crop is harvested. Yams are still important for survival in these regions. Some varieties of these tubers can be stored for up to six months without refrigeration, which makes them a valuable resource for the yearly period of food scarcity at the beginning of the wet season. Yam cultivars are also grown in other humid tropical countries.

Yam is the main staple crop of the Igbo people of south-eastern Nigeria, where for centuries it has played a dominant role in both their agricultural and cultural life. It is celebrated with annual yam festivals.

Brazil
Yams are particularly consumed in the coastal area of the Northeast region, although they can be found in other parts of the country. In Pernambuco state, yam is usually boiled and served cut in slices at breakfast, along with cheese spread or molasses.

Colombia
In Colombia, yam production has been concentrated in the Caribbean region, where it has become a key product in the diet of the local population. In 2010, Colombia was among the 12 countries with the highest yam production worldwide, and ranked first in yield in tons per hectare planted. Although its main use is as food, several studies have shown its usefulness in the pharmaceutical industry and the manufacture of bioplastics. In Colombia itself, however, there is no evidence of the product being used for anything other than food.

Philippines
In the Philippines, the purple ube species of yam (D. alata) is eaten as a sweetened dessert called ube halaya, and is also used as an ingredient in another Filipino dessert, halo-halo. It is also a popular ingredient in ice cream.

Vietnam
In Vietnam, yams are used to prepare dishes such as canh khoai mỡ and canh khoai từ. This involves mashing the yam and cooking it until very well done. The yam root was traditionally used by peasants in Vietnam to dye cotton clothes throughout the Red River and Mekong delta regions as late as the mid-20th century, and is still used by some in the Sapa region of northern Vietnam.
Indonesia
In Indonesia, the same purple yam is used for preparing desserts. This involves mashing the yam and mixing it with coconut milk and sugar. White- and off-white-fleshed yams are cut into cubes, cooked, lightly fermented, and eaten as afternoon snacks.

Japan
An exception to the cooking rule is the mountain yam (Dioscorea polystachya), known as nagaimo, which can be further classified as ichōimo (lit. 'ginkgo-leaf yam'; kanji: 銀杏芋) or yamatoimo (lit. 'Yamato yam'; kanji: 大和芋), depending on the root shape. Mountain yam is eaten raw and grated, after only relatively minimal preparation: the whole tubers are briefly soaked in a vinegar-water solution to neutralize the irritant oxalate crystals found in their skin. The raw vegetable is starchy and bland, mucilaginous when grated, and may be eaten plain as a side dish or added to noodles. Another variety of yam, jinenjo, is used in Japan as an ingredient in soba noodles. In Okinawa, purple yams (Dioscorea alata) are grown. This purple yam is popular lightly deep-fried as tempura, as well as grilled or boiled. Additionally, the purple yam is a common ingredient of yam ice cream, with its signature purple color. Purple yam is also used in other types of traditional wagashi sweets, cakes, and candy.

India
In central parts of India, the yam is prepared by being finely sliced, seasoned with spices, and deep-fried. In Southern India, the vegetable is a popular accompaniment to rice dishes and curry. The purple yam, D. alata, is also eaten in India, where it is also called the violet yam. Species may be called by the regional name "taradi", which can refer to D. belophylla, Dioscorea deltoidea, and D. bulbifera. Digging and selling taradi is a major source of income in the region of Palampur.

Nepal
Dioscorea root is traditionally eaten on Māgh Sankrānti (a midwinter festival) in Nepal. It is usually steamed and then cooked with spices.

Fiji Islands
Yam is, along with cassava and taro, a staple food, and is consumed boiled, roasted in a lovo, or steamed with fish or meat in curry sauce or coconut milk, and served with rice. The cost of yam is higher due to the difficulty of farming it and the relatively low volume of production.

Jamaica
Because of their abundance and importance to survival, yams were highly regarded in Jamaican ceremonies, and they constitute part of many traditional West African ceremonies. Yam powder is available in the West from grocers specializing in African products, and may be used in a similar manner to instant mashed potato powder, although preparation is a little more difficult because of the tendency of the yam powder to form lumps. The powder is sprinkled onto a pan containing a small amount of boiling water and stirred vigorously. The resulting mixture is served with a heated sauce, such as tomato and chili, poured onto it. Skinned and cut frozen yams may also be available from specialty grocers.

Phytochemicals and use in medicine
The tubers of certain wild yams, including a variant of 'Kokoro' yam and other species of Dioscorea, such as Dioscorea nipponica, are a source for the extraction of diosgenin, a sapogenin steroid. The extracted diosgenin is used for the commercial synthesis of cortisone, pregnenolone, progesterone, and other steroid products. Such preparations were used in early combined oral contraceptive pills. The unmodified steroid has estrogenic activity.

In culture
Historical records of yams in West Africa, and of African yams in Europe, date back to the 16th century.
Yams were taken to the Americas through precolonial Portuguese and Spanish settlements on the borders of Brazil and Guyana, followed by dispersion through the Caribbean.

Yams are used in Papua New Guinea, where they are called kaukau. Their cultivation and harvesting are accompanied by complex rituals and taboos. The coming of the yams (one of numerous versions from Maré) is described in Pene Nengone (Loyalty Islands of New Caledonia).

Nigeria and Ghana
A yam festival is usually held at the beginning of August, at the end of the rainy season. People offer yams to gods and ancestors first, before distributing them to the villagers; this is their way of giving thanks to the spirits above them. The New Yam Festival celebrates the main agricultural crop of the Igbos, Idomas, and Tivs. The New Yam Festival, known as Orureshi in Owukpa in the Idoma west and as Ima-Ji, Iri-Ji or Iwa Ji in Igbo land, is a celebration depicting the prominence of yam in social and cultural life. The festival is prominent in the southeastern states and among major tribes in Benue State, and is held mainly around August. The Igbo people accord special respect to yam, to the extent that no one eats newly harvested yam until the New Yam feast, called Iri ji ọhụrụ, has been celebrated. People return to their various communities for the celebrations.
Biology and health sciences
Monocots
null
3152386
https://en.wikipedia.org/wiki/Australian%20feral%20camel
Australian feral camel
Australian feral camels are introduced populations of the dromedary, or one-humped, camel (Camelus dromedarius), a species native to the Middle East, North Africa, and the Indian Subcontinent. Imported into Australia from British India and Afghanistan during the 19th century as valuable beasts of burden (for transport and sustenance during the exploration and colonisation of the Red Centre), many were casually released into the wild after motorised transport supplanted camels in the early 20th century. This resulted in a fast-growing feral population with numerous ecological, agricultural, and social impacts.

By 2008, it was feared that Central Australia's feral camel population had grown to roughly one million animals and was projected to double every 8 to 10 years. Camels are known to cause serious degradation of local environmental and cultural sites, particularly during dry conditions. They directly compete with endemic animals, such as kangaroos and other marsupials, by eating much of the available plant matter; camels may further thrive because they can digest many plant species that are unpalatable to other mammals. Camels are known for their ability to survive without water, using fat reserves stored in the hump; however, when a source of hydration is available, even a small herd can consume much of the available water and soil it in the process (making it unsafe for drinking by other animals, and creating a pathogen-fostering environment). The feral camels in Australia are also known to be aggressive when they encounter herds of domestic livestock, such as cattle, sheep, and goats; they can also be dangerously territorial towards people, especially females with newborn calves and males in rut. The mating season is generally a hazardous time to be close to camels of either sex.

Pastoralists, representatives of the Central Land Council, and Aboriginal landholders in the affected areas were among the earliest complainants. An AU$19 million culling program was funded in 2009, and by 2013 a total of 160,000 camels had been slaughtered, leaving an estimated feral population of around 300,000. A post-cull analysis revised the original population estimate down to around 600,000, a correction larger than the total number of animals culled.

History
Camels had been used successfully in desert exploration in other parts of the world. The first suggestion of importing camels into Australia was made in 1822 by the Danish-French geographer and journalist Conrad Malte-Brun, whose Universal Geography contains the following:

For such an expedition, men of science and courage ought to be selected. They ought to be provided with all sorts of implements and stores, and with different animals, from the powers and instincts of which they may derive assistance. They should have oxen from Buenos Aires, or from the English settlements, mules from Senegal, and dromedaries from Africa or Arabia. The oxen would traverse the woods and the thickets; the mules would walk securely among rugged rocks and hilly countries; the dromedaries would cross the sandy deserts. Thus the expedition would be prepared for any kind of territory that the interior might present. Dogs also should be taken to raise game, and to discover springs of water; and it has even been proposed to take pigs, for the sake of finding out esculent roots in the soil. When no kangaroos and game are to be found the party would subsist on the flesh of their own flocks.
They should be provided with a balloon for spying at a distance any serious obstacle to their progress in particular directions, and for extending the range of observations which the eye would take of such level lands as are too wide to allow any heights beyond them to come within the compass of their view.

In 1839, Lieutenant-Colonel George Gawler, second Governor of South Australia, suggested that camels should be imported to work in the semi-arid regions of Australia. The first camel arrived in Australia in 1840, ordered from the Canary Islands by the Phillips brothers of Adelaide (Henry Weston Phillips (1818–1898), George Phillips (1820–1900), and G. M. Phillips (dates unknown)). The Apolline, under Captain William Deane, docked at Port Adelaide in South Australia on 12 October 1840, but all but one of the camels had died on the voyage. The surviving camel was named Harry.

Harry was used for inland exploration by the pastoralist and explorer John Ainsworth Horrocks on his ill-fated 1846 expedition into the arid South Australian interior near Lake Torrens, in search of new agricultural land. Horrocks became known as the "man who was shot by his own camel". On 1 September, Horrocks was preparing to shoot a bird on the shores of Lake Dutton. His kneeling camel moved while Horrocks was reloading his gun, causing the gun to fire and injuring the middle fingers of his right hand and a row of teeth. Horrocks died of his wounds on 23 September in Penwortham, after requesting that the camel be shot.

"Afghan" cameleers
Australia's first major inland expedition to use camels as a main form of transport was the Burke and Wills expedition in 1860. The Victorian Government imported 24 camels for the expedition. The first cameleers arrived on 9 June 1860 at Port Melbourne from Karachi (then spelled Kurrachee, and at the time part of British India) on the ship Chinsurah, to participate in the expedition. As the Victorian Exploration Expedition Committee described it, "the camels would be comparatively useless unless accompanied by their native drivers". The cameleers on the expedition included 45-year-old Dost Mahomed, who was bitten by a bull camel, leading to the permanent loss of the use of his right arm, and Esa (Hassam) Khan from Kalat, who fell ill near Swan Hill. They cared for the camels, loaded and unloaded equipment and provisions, and located water on the expedition.

From the 1860s onward, small groups of mainly Muslim cameleers were shipped in and out of Australia at three-year intervals, to service South Australia's inland pastoral industry. Carting goods and transporting wool bales by camel was a lucrative livelihood for them. As their knowledge of the Australian outback and economy increased, the cameleers began their own businesses, importing and running camel trains. By 1890 the camel business was dominated by mostly Muslim merchants and brokers, commonly referred to as "Afghans" or "Ghans", despite their origins often lying in British India, as well as in Afghanistan, Egypt, and Turkey. They belonged to four main groups: Pashtuns, Baluchis, Punjabis, and Sindhis. At least 15,000 camels with their handlers are estimated to have come to Australia between 1870 and 1900. Most of these camels were dromedaries, especially from India, including the Bikaneri war camel from Rajasthan, riding camels sourced from the Dervish wars in British Somaliland, and lowland Indian camels for heavy work. Other dromedaries included the Bishari riding camel of Somalia and Arabia.
A bull camel could be expected to carry up to , and camel strings could cover more than per day. Camel studs were set up in 1866 by Sir Thomas Elder and Samuel Stuckey, at Beltana and Umberatana Stations in South Australia. There was also a government stud camel farm at Londonderry, near Coolgardie in Western Australia, established in 1894. These studs operated for about 50 years and provided high-class breeding stock for the Australian camel trade.

Camels continued to be used for inland exploration by Peter Warburton in 1873, William Christie Gosse in 1873, Ernest Giles in 1875–76, David Lindsay in 1885–1886, the Elder Scientific Exploring Expedition in 1891–1892, the Calvert Expedition in 1896–97, and Cecil Madigan in 1939. They were also used in the construction of the Overland Telegraph Line, and carried pipe sections for the Goldfields Water Supply Scheme. The introduction of the Immigration Restriction Act 1901 and the White Australia policy made it more difficult for cameleers to enter Australia.

Camels go feral
With the departure of many cameleers in the early 20th century, and the introduction of motorised transportation in the 1920s and 1930s, some cameleers released their camels into the wild. Well suited to the arid conditions of Central Australia, these camels became the source of the large feral population that still exists today.

Camels and the Aboriginal people
As the Afghan cameleers increasingly travelled through the inland, they encountered many Aboriginal groups, and an exchange of skills, knowledge, and goods soon developed. Some cameleers assisted Aboriginal people by carrying traditional exchange goods, including red ochre and the narcotic plant pituri, along ancient trade routes such as the Birdsville Track. The cameleers also brought new commodities such as sugar, tea, tobacco, clothing, and metal tools to remote Aboriginal groups. Aboriginal people incorporated camel hair into their traditional string artefacts, and provided information on desert waters and plant resources. Some cameleers employed Aboriginal men and women to assist them on their long desert treks. This resulted in some enduring partnerships, and several marriages.

From 1928 to 1933, the missionary Ernest Kramer undertook camel safaris in Central Australia with the aim of spreading the gospel. On most journeys he employed the Arrernte man Mickey Dow Dow as cameleer, guide, and translator, and sometimes a man called Barney. The first of Kramer's trips was to the Musgrave Ranges and Mann Ranges, and was sponsored by the Aborigines Friends Association, which sought a report on Indigenous living conditions. According to Kramer's biography, as the men travelled through the desert and encountered local people, they handed out boiled sweets, tea, and sugar, and played Jesus Loves Me on the gramophone. At night, using a "magic lantern projector", Kramer showed slides of Christmas and the life of Christ. For many people, this was their first experience of Christmas, and the event picturesquely established "an association between camels, gifts, and Christianity that was not merely symbolic but had material reality."

By the 1930s, as the cameleers were displaced by motor transport, an opportunity arose for Aboriginal people. They learnt camel-handling skills and acquired their own animals, extending their mobility and independence in a rapidly changing frontier society. This continued until at least the late 1960s.
A documentary film, Camels and the Pitjantjara, made by Roger Sandall, shot in 1968 and released in 1969, follows a group of Pitjantjara men who travel out from their base at Areyonga Settlement to capture a wild camel, tame it, and add it to their domestic herds. They then use camels to help transport a large group of people from Areyonga to Papunya, a three-day walk. Camels appear in Indigenous Australian art, with examples held in the collections of the National Museum of Australia and Museums Victoria. Various Australian Aboriginal languages have adopted a word for the camel, including Eastern Arrernte (kamule), Pitjantjatjara (kamula), and Alyawarr (kamwerl).

Impact
Australia has the largest population of feral camels in the world, and the only herd of dromedary (one-humped) camels exhibiting wild behaviour. In 2008, the number of feral camels was estimated to be more than one million, with the capability of doubling in number every 8 to 10 years. In 2013, this estimate was revised to a population of 600,000 prior to culling operations and around 300,000 camels after culling, with annual growth of 10% per year (see the worked sketch below).

Impact on the environment
Although their impact on the environment is not as severe as that of some other pests introduced in Australia, camels ingest more than 80% of the plant species available. Degradation of the environment occurs when densities exceed two animals per square kilometre, which is presently the case throughout much of their range in the Northern Territory, where they are confined to two main regions: the Simpson Desert, and the western desert area of the Central Ranges, Great Sandy Desert, and Tanami Desert. Some traditional food plants harvested by Aboriginal people in these areas are seriously affected by camel browsing. While camels' soft-padded feet make soil erosion less likely, they do destabilise dune crests, which can contribute to erosion. Feral camels have a noticeable impact on salt lake ecosystems, and have been found to foul waterholes. The National Feral Camel Action Plan (see below) cited the following environmental impacts: "broad landscape damage including damage to vegetation through foraging behaviour and trampling, suppression of recruitment of some plant species, selective browsing on rare and threatened flora, damage to wetlands through fouling, trampling and sedimentation, competition with native animals for food and shelter and loss of sequestered carbon in vegetation".

Some researchers think feral camels may actually benefit their ecosystem in various ways. They may fill ecological niches left vacant by extinct Australian megafauna such as Diprotodon and Palorchestes (a theory similar to the concepts of Pleistocene rewilding), and may contribute to a decline in wildfires. Additionally, camels can be an effective counter against introduced weeds.

Impact on infrastructure
Camels can do significant damage to infrastructure such as taps, pumps, and toilets in their efforts to obtain water, particularly in times of severe drought. They can smell water at a distance of up to five kilometres, and are even attracted by moisture condensed by air conditioners. They also damage stock fences and cattle watering points. These effects are felt particularly in Aboriginal and other remote communities, where the costs of repairs are prohibitive. The decaying bodies of camels trampled by their herd in the search for water near human settlements can cause further problems.
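As a rough cross-check of the growth figures quoted above (doubling every 8 to 10 years for the pre-cull herd, and 10% annual growth after the culls), the standard compound-growth arithmetic can be sketched as below. The rates and counts are the article's estimates; the calculation itself is only illustrative.

```python
import math

def doubling_time(annual_rate: float) -> float:
    """Years for a population growing at a fixed annual rate to double."""
    return math.log(2) / math.log(1 + annual_rate)

def project(pop0: float, annual_rate: float, years: float) -> float:
    """Population after compound growth at a fixed annual rate."""
    return pop0 * (1 + annual_rate) ** years

# 10% annual growth implies doubling roughly every 7.3 years, broadly
# consistent with the 8-10 year doubling cited for the pre-cull herd
# (a doubling time of 8-10 years corresponds to ~7-9% annual growth).
print(doubling_time(0.10))                    # ~7.3 years
print(2 ** (1 / 8) - 1, 2 ** (1 / 10) - 1)    # ~0.090, ~0.072

# From ~300,000 animals after the 2009-2013 culls, 10% annual growth
# would return the herd to ~600,000 in about seven years.
print(round(project(300_000, 0.10, 7)))       # ~585,000
```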
Economic impact
The National Feral Camel Action Plan (see below) cited the following economic impacts: "direct control and management costs, damage to infrastructure (fences, yards, grazing lands, water sources), competition with cattle for food and water, cattle escapes due to fencing damage, destruction of bush tucker resources".

Social impact
The National Feral Camel Action Plan (see below) cited the following social impacts: "damage to culturally significant sites including religious sites, burial sites, ceremonial grounds, water places (e.g. water holes, rockholes, soaks, springs), places of birth, places (including trees) where spirits of dead people are said to dwell and resource points (food, ochre, flints), destruction of bush tucker resources, changes in patterns of exploitation and customary use of country and loss of opportunities to teach younger generations, reduction of people's enjoyment of natural areas, interference with native animals or hunting of native animals, creation of dangerous driving conditions, cause of general nuisance in residential areas, cause of safety concerns to do with feral camels on airstrips, damage to outstations, damage to community infrastructure, community costs associated with traffic accidents".

Culls
Northern Territory cull, 2009
Drought conditions in Australia during the first decade of the 21st century (the "Millennium drought") were particularly harsh, leading to thousands of camels dying of thirst in the outback. The problem of invading camels searching for water became great enough for the Northern Territory Government to plan to kill as many as 6,000 camels that had become a nuisance in the community of Docker River, 500 km south-west of Alice Springs in the Northern Territory, where the camels were causing severe damage in their search for food and water. The planned cull was reported internationally and drew a strong reaction.

National Feral Camel Action Plan, 2009–2013
The Australian Feral Camel Management Project (AFCMP) was established in 2009 and ran until 2013. It was managed by Ninti One Limited in Alice Springs, funded with from the Australian Government. It aimed to work with landholders to build their capacity to manage feral camels while reducing impacts at key environmental and cultural sites. The project was expected to be completed by June 2013 but received a six-month extension; it was completed A$4 million under budget. It was a collaboration between nineteen key partners: the Governments of Australia, Western Australia, South Australia, the Northern Territory, and Queensland; the Central Land Council, Anangu Pitjantjatjara Yankunytjatjara Lands, Ngaanyatjarra Council Inc., Kanyirninpa Jukurrpa, Pila Nguru Aboriginal Corporation, Kimberley Land Council, and Western Desert Lands Aboriginal Corporation; South Australian Arid Lands NRM, Alinytjara Wilurara NRM Board, Natural Resource Management Board NT Inc., and Rangelands NRM WA; the Northern Territory Cattlemen's Association; the Australian Camel Industry Association; the RSPCA; the Australian Wildlife Conservancy; CSIRO; and Flinders University.

In November 2010 the Australian Government Department of Environment released the National Feral Camel Action Plan, a national management plan for what it defined as an "Established Pest of National Significance", in accordance with its Australian Pest Animal Strategy. Ninti One and its partners gained consent for the culling program from the landholders of over square kilometres of land.
Different culling techniques were used in different regions, in deference to concerns from the Aboriginal landholders. At the completion of the project in 2013, the Australian Feral Camel Management Project had reduced the feral camel population by 160,000 camels: over 130,000 removed through aerial culling, 15,000 mustered, and 12,000 ground-culled (shot from vehicles) for pet meat. It estimated that around 300,000 camels remained, with the population increasing by 10% per year. The largest individual aerial cull operation was conducted in mid-2012 in the south-west of the Northern Territory. It employed three R44 helicopter cull platforms in combination with two R22 helicopter spotting/mustering platforms, and removed 11,560 feral camels in 280 operational hours over 12 days, across 45,000 square kilometres, at a cost of around $30 per head (see the arithmetic sketch at the end of this article).

The project faced criticism from some parts of the Australian camel industry, who wanted to see the feral population harvested for meat processing, the pet-meat market, or live export, arguing that this would reduce waste and create jobs. Poor animal condition, the high cost of freight, the lack of infrastructure in remote locations, and difficulty in gaining the necessary permissions on Aboriginal land are some of the challenges faced by the camel industry. No ongoing funding has been committed to the program. Ninti One estimated in 2013 that per year would be required to maintain current population levels.

2020 APY lands cull
As a result of widespread heat, drought, and the 2019–20 Australian bushfire season, feral camels were impinging more on human settlements, especially remote Aboriginal communities. In the APY lands of South Australia, they roamed the streets, damaging buildings and infrastructure in their search for water. They were also destroying native vegetation, contaminating water supplies, and destroying cultural sites. On 8 January 2020 the South Australian Department for Environment and Water began a five-day cull of the camels, the first mass cull of camels in the area. Professional shooters were to kill between 4,000 and 5,000 camels from helicopters, "...in accordance with the highest standards of animal welfare".

Camel industry
Camel meat
Camel meat is consumed in Australia. A multi-species abattoir at Caboolture in Queensland, run by Meramist, regularly processes feral camels, selling meat into Europe, the United States, and Japan. Samex Australian Meat Company in Peterborough, South Australia, also resumed processing feral camels in 2012. It is regularly supplied by an Indigenous camel company run by Ngaanyatjarra Council on the Ngaanyatjarra Lands in Western Australia, and by camels mustered on the Anangu Pitjantjatjara Yankunytjatjara (APY) Lands of South Australia. A small abattoir on Bond Springs Station, just north of Alice Springs, also processes small quantities of camels when operational. Exports to Saudi Arabia, where camel meat is consumed, began in 2002.

Camel meat has also been used in the production of pet food in Australia. In 2011, the RSPCA issued a warning after a study found cases of severe and sometimes fatal liver disease in dogs that had eaten camel meat containing the amino acid indospicine, which is present in some species of the plant genus Indigofera.

Camel milk
Australia's first commercial-scale camel dairy, Australian Wild Camel Corporation, was established in 2015 in Clarendon, Queensland.
There are a number of smaller-scale camel dairies, some growing fast: Summer Land Camels and QCamel in Central Queensland, one in New South Wales' Upper Hunter District, Camel Milk Australia in South Burnett, Queensland, and Australian Camel Dairies near Perth in Western Australia. The Camel Milk Company in northern Victoria has grown from three wild camels in 2014 to over 300 in 2019, and exports mostly to Singapore, with shipments of both fresh and powdered product set to start to Thailand and Malaysia. Production of camel milk in Australia grew from in 2016 to per annum in 2019.

Live exports
Live camels are occasionally exported to Saudi Arabia, the United Arab Emirates, Brunei, and Malaysia, where disease-free wild camels are prized as a delicacy. Australia's camels are also exported as breeding stock for Arab camel-racing stables, and for use in tourist venues in places such as the United States.

Tourism
Camel farms offering rides or treks to tourists include Kings Creek Station near Uluru, Calamunnda Camel Farm in Western Australia, Camels Australia at Stuart Well, south of Alice Springs, and Pyndan Camel Tracks in Alice Springs. Camel rides are offered on the beach at Victor Harbor in South Australia and on Cable Beach in Broome, Western Australia. There are also two popular camel racing events in Central Australia: the Camel Cup in Alice Springs, and the Uluru Camel Cup held by Uluru Camel Tours at Uluru.
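As a back-of-the-envelope check on the 2012 aerial cull described under the National Feral Camel Action Plan above (11,560 camels removed in 280 operational hours over 12 days, at around $30 per head), the implied rates are easy to recompute. The inputs are the article's figures; the arithmetic is only illustrative.

```python
camels = 11_560          # animals removed
hours = 280.0            # operational helicopter hours
days = 12                # duration of the operation
cost_per_head = 30       # approximate cost, Australian dollars

print(camels / hours)           # ~41 camels per operational hour
print(camels / days)            # ~963 camels per day
print(camels * cost_per_head)   # ~A$347,000 implied total cost
```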
Biology and health sciences
Camelidae
Animals
3157567
https://en.wikipedia.org/wiki/Proganochelys
Proganochelys
Proganochelys is a genus of extinct, primitive stem-turtle. It was named by Georg Baur in 1887 as the oldest turtle known at the time. The name Proganochelys combines the Greek word ganos, meaning 'brightness', with the prefix pro-, 'before', and the Greek base chelys, meaning 'turtle'. Proganochelys is believed to have been around 1 meter long and herbivorous.

Proganochelys was regarded as the most primitive stem-turtle for over a century, until the discovery of Odontochelys in 2008. Odontochelys and Proganochelys share unique primitive features that are not found in Casichelydia, such as tooth-like structures on the pterygoid and vomer and a plate-like coracoid. Proganochelys is the oldest stem-turtle species with a complete shell discovered to date, known from fossils found in Germany, Switzerland, Greenland, and Thailand, in strata from the late Triassic dating to approximately 210 million years ago. The locations of these fossils suggest that Proganochelys ranged across the continent of Laurasia. There are only two known species of Proganochelys, both known from limited material owing to a sparse fossil record: all Proganochelys quenstedti fossils were discovered in Germany, while Proganochelys ruchae fossils were found in Thailand. Psammochelys, Stegochelys, and Triassochelys are junior synonyms of Proganochelys. Chelytherium von Meyer, 1863 has been considered a synonym of Proganochelys by some authors, but Joyce (2017) considers it a nomen dubium given the fragmentary nature of the syntype material. Joyce (2017) also considered the North American genus Chinlechelys to be a junior synonym of Proganochelys, though he maintains the type species of the former genus, C. tenertesta, as a distinct species within the genus Proganochelys.

Description and paleobiology
Proganochelys was once considered the oldest known stem-turtle, until the description of Odontochelys and Eorhynchochelys, two slightly earlier genera that lived in the Carnian stage of the Triassic. Proganochelys had a fully developed shell long. The total length of Proganochelys was about . Its overall appearance resembled that of modern turtles in many respects: it lacked teeth on the upper and lower jaws, likely had a beak, and had the characteristic heavily armored shell formed from bony plates and ribs fused together into a solid cage around the internal organs. Proganochelys had a semi-beak-like structure, along with denticles fused to its vomer. The plates comprising the carapace and plastron were already in the modern form, although there were additional plates along the margins of the shell that would have served to protect the legs. Also, unlike any modern species of turtle, its long tail had spikes and terminated in a club, its head could not be retracted under the shell, and its neck may have been protected by small spines. While it had no teeth in its jaws, it did have small denticles on the palate. The beak-like structure suggests that the Triassic stem-turtles evolved from carnivory toward herbivory, as the loss of teeth and the gain of a beak would have benefited the crushing of plants.

Synapomorphies and autapomorphies
Proganochelys possesses several chelonian synapomorphies, including a bony shell containing fused ribs, neural bones with fused thoracic segments, and a carapace and plastron that enclose the pelvic and shoulder girdles.
Proganochelys was also known for its autapomorphies, which included a tail club and a tubercle on the basioccipital. The tail was noticeably long and is hypothesized to have been used as a club for protection against predators. Although the evolution of the shell has been clearly defined, the mechanics of neck movement in Proganochelys have been a subject of debate. It has been hypothesized that Proganochelys was able to retract its neck by tucking its skull under the front of the shell when needed, perhaps in a similar fashion to some modern side-necked turtles.

Shell
The broadened ribs of Proganochelys show "metaplastic ossification of the dermis". The enlarged ribs suggest that the endochondral rib ossifications were joined by a second ossification, rather than the ribs simply being expanded. The 220-million-year-old stem-turtle Odontochelys had only a partially formed shell, and is believed to have possessed only a plastron. The 5-million-year difference that separates Odontochelys from Proganochelys indicates that the evolution of the shell occurred relatively quickly. Proganochelys possessed both a carapace and a plastron, and the shell is believed to have been used for protection against predators. Proganochelys fits well among the turtles, as its shell is in agreement with the shell evolution seen in other stem-turtles.

Skull
The dermal roofing elements of Proganochelys include a large nasal, a fully roofed skull, and a flat squamosal, with the pineal foramen absent. Palatal characteristics include paired vomers and a premaxilla with a dorsal process. An open interpterygoid vacuity, along with a prominent, elongated quadrate, are notable basicranial elements. Overall, Proganochelys is characterized by relatively few chelonian features and a relatively generalized amniote skull. The skull of Proganochelys quenstedti from Trossingen, West Germany, retains a number of well-known amniote features not found in any other turtle; for instance, the lacrimal bone, supratemporal bone, and lacrimal duct are retained. Conversely, some traits present in modern turtles are absent in Proganochelys and therefore must have arisen after the evolution of the shell; clear examples include jaw differentiation, the fusion of the vomer, and the loss of the lacrimal.

Discovery
The earliest fossils of Proganochelys were discovered in Germany, in the rural towns of Halberstadt, Tübingen, and Trossingen. The fossils were found in a formation of shales, sandstones, and some limestone, believed to be between 220 and 205 million years old. Consensus among geologists placed the fossils in the middle of the Norian, around 210 million years ago, although this is largely an estimate. In addition to Proganochelys, the rock formations in Germany have also yielded fossils of the stem-turtle Proterochersis. Fossils have also been found in the Klettgau Formation of Switzerland.

Paleoecology
The specific ecology of the Late Triassic stem-turtles has been a major point of disagreement among scientists for many years. Triassic stem-turtles, including Proganochelys, appear to have been both aquatic and terrestrial. Shell proportions are believed to be correlated with the environment in which a turtle lives, as seen in modern turtles today.
Using this concept, scientists have been able to infer the habitat in which Proganochelys may have lived. A comparison between modern turtles and Proganochelys found it unlikely that stem-turtles had differentiated into specialized ecologies, such as open-water swimmers or solely terrestrial forms, by the Late Triassic period. If this is the case, a freshwater habitat would be the most likely environment for Proganochelys. On the other hand, some researchers believe Proganochelys was solely terrestrial: in one study, shell bone histology compared with that of extant turtles indicated a terrestrial lifestyle for the earliest basal turtles, including Proganochelys. The common ancestor of all living turtles is believed to have been aquatic, while the earliest turtles are believed to have lived in a terrestrial environment.

Environment and forelimbs
Forelimbs are believed to be a physical feature that reflects adaptation to a specific environment, indicating the environment in which a turtle most likely resided. Based on morphological data, Proganochelys is believed to have lived in a semi-aquatic environment, though a 2021 study groups it with tortoises and other terrestrial taxa. Turtles possessing short hands are believed most likely to be terrestrial, while turtles with long limbs are more likely to be aquatic. The majority of testudines are short-handed and terrestrial, while all chelonioids are long-handed and aquatic. A study of its shell anatomy further supports a semi-aquatic mode of life.

Classification
Proganochelys belongs to the group Testudinata, which consists of all extant turtles and several taxa of extinct kin. It is the oldest primitive stem-turtle. The group does not include Odontochelys. The cladogram below follows an analysis by Jérémy Anquetin (2012).

Habitat
Proganochelys is considered to have lived on the supercontinent Laurasia during the Triassic period. The fossil record shows that Proganochelys might have lived anywhere between Thailand and Germany. During the Triassic, Laurasia was primarily dry and warm, with extensive arid areas. Proganochelys shared its environment with a variety of dinosaurs. It lived around small water bodies such as ponds, but was mainly earthbound.
Biology and health sciences
Prehistoric turtles
Animals
3157936
https://en.wikipedia.org/wiki/Australopithecine
Australopithecine
The australopithecines, formally Australopithecina or Hominina, are generally any species in the related genera Australopithecus and Paranthropus. The term may also include members of Kenyanthropus, Ardipithecus, and Praeanthropus, and comes from a former classification of these species as members of a distinct subfamily, the Australopithecinae. They are classified within the Australopithecina subtribe of the tribe Hominini. These related species are sometimes collectively termed australopithecines, australopiths, or homininians. They are the extinct, close relatives of modern humans and, together with the extant genus Homo, comprise the human clade. Members of the human clade, i.e. the Hominini after the split from the chimpanzees, are called Hominina (see Hominidae; the terms "hominids" and "hominins").

While none of the taxa normally assigned directly to this group survive, the australopithecines do not appear to be literally extinct (in the sense of having no living descendants), as the genera Kenyanthropus, Paranthropus, and Homo probably emerged as sisters of a late Australopithecus species such as A. africanus and/or A. sediba.

Members of Australopithecus are sometimes referred to as the "gracile australopithecines", while Paranthropus are called the "robust australopithecines". The australopithecines arose in the Late Miocene sub-epoch and were bipedal; they were dentally similar to humans, but with a brain size not much larger than that of modern non-human apes and lesser encephalization than in the genus Homo. Humans (genus Homo) may have descended from australopithecine ancestors, and the genera Ardipithecus, Orrorin, Sahelanthropus, and Graecopithecus are possible ancestors of the australopithecines.

Classification
Classification of subtribe Australopithecina according to .
- Australopithecina
  - Australopithecus
    - Australopithecus africanus
    - Australopithecus deyiremeda
    - Australopithecus garhi
    - Australopithecus sediba
    - Australopithecus afarensis (=Praeanthropus afarensis)
    - Australopithecus anamensis (=Praeanthropus anamensis)
    - Australopithecus bahrelghazali (=Praeanthropus bahrelghazali)
  - Paranthropus
    - Paranthropus robustus
    - Paranthropus boisei
    - Paranthropus aethiopicus
  - Ardipithecus
    - Ardipithecus ramidus
    - Ardipithecus kadabba
  - Orrorin
    - Orrorin tugenensis
  - Sahelanthropus
    - Sahelanthropus tchadensis

Phylogeny
Phylogeny of Hominina/Australopithecina according to Dembo et al. (2016).

Physical characteristics
The post-cranial remains of australopithecines show that they were adapted to bipedal locomotion, but did not walk identically to humans. They had a forearm-to-upper-arm ratio similar to the golden ratio – greater than in other hominins. They exhibited greater sexual dimorphism than members of Homo or Pan, but less than Gorilla or Pongo. It is thought that they averaged heights of and weighed between . Brain size may have been 350 cc to 600 cc. The postcanines (the teeth behind the canines) were relatively large and had more enamel than those of contemporary apes and humans, whereas the incisors and canines were relatively small, with little difference between the males' and females' canines compared to modern apes.

Relation to Homo
Most scientists maintain that the genus Homo emerged in Africa within the australopithecines around two million years ago.
However, there is no consensus on which species it emerged within. Marc Verhaegen has argued that an australopithecine species could also have been ancestral to the genus Pan (i.e. chimpanzees).

Asian australopithecines
A minority view among palaeoanthropologists is that australopithecines moved outside Africa. One proponent of this theory is Jens Lorenz Franzen, formerly Head of Paleoanthropology at the Research Institute Senckenberg. Franzen argued that robust australopithecines had reached not only Indonesia, as Meganthropus, but also China. In 1957, an Early Pleistocene Chinese fossil tooth of unknown provenance was described as resembling P. robustus. Three fossilized molars from Jianshi, China (Longgudong Cave) were later identified as belonging to an Australopithecus species. However, further examination questioned this interpretation: Zhang (1984) argued that the Jianshi teeth and the unidentified tooth belong to H. erectus. Liu et al. (2010) also dispute the Jianshi–australopithecine link and argue that the Jianshi molars fall within the range of Homo erectus. Nevertheless, Wolpoff (1999) notes that in China "persistent claims of australopithecine or australopithecine-like remains continue".
Biology and health sciences
Australopithecines
Biology
7458393
https://en.wikipedia.org/wiki/Columbia%20River%20Basalt%20Group
Columbia River Basalt Group
The Columbia River Basalt Group (CRBG) is the youngest, smallest, and one of the best-preserved continental flood basalt provinces on Earth, covering over mainly eastern Oregon and Washington, western Idaho, and part of northern Nevada. The basalt group includes the Steens and Picture Gorge basalt formations.

Introduction
During the middle to late Miocene epoch, the Columbia River flood basalts engulfed about of the Pacific Northwest, forming a large igneous province with an estimated volume of . Eruptions were most vigorous 17–14 million years ago, when over 99 percent of the basalt was released. Less extensive eruptions continued 14–6 million years ago. Erosion resulting from the Missoula Floods has extensively exposed these lava flows, laying bare many layers of the basalt flows at Wallula Gap, the lower Palouse River, the Columbia River Gorge, and throughout the Channeled Scablands. The Columbia River Basalt Group is thought to be potentially linked to the Chilcotin Group in south-central British Columbia, Canada. The Latah Formation sediments of Washington and Idaho are interbedded with a number of the Columbia River Basalt Group flows, and outcrop across the region.

Other flood basalts include the Deccan Traps (late Cretaceous period), which cover an area of in west-central India; the Emeishan Traps (Permian), which cover more than 250,000 square kilometers in southwestern China; and the Siberian Traps (late Permian), which cover (800,000 sq mi) in Russia.

Formation
Some time during a 10–15 million-year period, lava flow after lava flow poured out of multiple dikes that trace an old fault line running from south-eastern Oregon through to western British Columbia. The many layers of lava eventually reached a thickness of more than . As the molten rock came to the surface, the Earth's crust gradually sank into the space left by the rising lava. This subsidence of the crust produced a large, slightly depressed lava plain now known as the Columbia Basin or Columbia River Plateau. The northwesterly advancing lava forced the ancient Columbia River into its present course. The lava, as it flowed over the area, first filled the stream valleys, forming dams that in turn caused impoundments or lakes. In these ancient lake beds are found fossil leaf impressions, petrified wood, fossil insects, and bones of vertebrate animals.

In the middle Miocene, 17 to 15 Ma, the Columbia Plateau and the Oregon Basin and Range of the Pacific Northwest were flooded with lava flows. The flows in both regions are similar in composition and age, and have been attributed to a common source, the Yellowstone hotspot. The ultimate cause of the volcanism is still debated, but the most widely accepted idea is that a mantle plume or upwelling (similar to that associated with present-day Hawaii) initiated the widespread and voluminous basaltic volcanism about 17 million years ago. As hot mantle-plume material rises and reaches lower pressures, it melts and interacts with material in the upper mantle, creating magma. Once that magma breaches the surface, it flows as lava and then solidifies into basalt.
Transition to flood volcanism
Prior to 17.5 million years ago, the Western Cascade stratovolcanoes had erupted with periodic regularity for over 20 million years, even as they do today. An abrupt transition to shield-volcano flood volcanism took place in the mid-Miocene. The flows can be divided into four major categories: the Steens Basalt, the Grande Ronde Basalt, the Wanapum Basalt, and the Saddle Mountains Basalt. The various lava flows have been dated by radiometric dating, particularly through measurement of the ratios of isotopes of potassium to argon. The Columbia River flood basalt province comprises more than 300 individual basalt lava flows with an average volume of .

The transition to flood volcanism in the Columbia River Basalt Group (CRBG), as in other large igneous provinces, was also marked by atmospheric loading through the mass exsolution and emission of volatiles via volcanic degassing. Volatile concentrations in source feeder dikes have been compared quantitatively with those in the associated extruded flow units to determine the magnitude of degassing in CRBG eruptions. Of the more than 300 individual flows associated with the CRBG, the Roza flow contains some of the most chemically well-preserved basalts for volatile analysis. Contained within the Wanapum formation, Roza is one of the most extensive members of the CRBG, with an area of 40,300 square kilometres and a volume of 1,300 cubic kilometres. With magmatic volatile values assumed at 1–1.5 percent by weight for the source feeder dikes, the emission of sulfur from the Roza flow is calculated to be on the order of 12 Gt (12,000 million tonnes), at a rate of 1.2 Gt (1,200 million tonnes) annually, in the form of sulfur dioxide (SO2). However, other research based on petrologic analysis has yielded SO2 degassing values of 0.12–0.28% of the total erupted mass of the magma, translating to lower emission estimates of around 9.2 Gt of sulfur dioxide for the Roza flow (a worked version of this mass balance appears at the end of this section). Sulfuric acid, a by-product of the emitted sulfur dioxide interacting with the atmosphere, has been calculated at 1.7 Gt annually for the Roza flow, and 17 Gt in total. Analysis of glass inclusions within phenocrysts of the basaltic deposits has additionally yielded emission estimates on the order of 310 Mt of hydrochloric acid and 1.78 Gt of hydrofluoric acid.

Cause of volcanism
Major hotspots have often been tracked back to flood-basalt events. In this case, the Yellowstone hotspot's initial flood-basalt event occurred near Steens Mountain, when the Imnaha and Steens eruptions began. As the North American Plate moved several centimeters per year westward, the eruptions progressed through the Snake River Plain across Idaho and into Wyoming. Consistent with the hotspot hypothesis, the lava flows are progressively younger as one proceeds east along this path. Before this eruptive period, the Yellowstone hotspot is believed to have created features like Smith Rock in Central Oregon, and perhaps another flood basalt event known as Siletzia, which underlies much of the Pacific Northwest coast, with exposures in the Oregon Coast Range.

There is additional confirmation that Yellowstone is associated with a deep hotspot. Using tomographic images based on seismic waves, relatively narrow, deeply seated, active convective plumes have been detected under Yellowstone and several other hotspots. These plumes are much more focused than the upwelling observed with large-scale plate-tectonic circulation.
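The sulfur dioxide estimates quoted above for the Roza Member follow from a simple mass balance: erupted volume times magma density times volatile weight fraction. A minimal sketch, using an assumed basaltic magma density of about 2,700 kg/m³ (a typical textbook value, not taken from the studies cited):

```python
# Roza Member figures from the text; the density is an assumption
volume_m3 = 1_300 * 1e9   # 1,300 km^3 erupted, in m^3
density = 2_700           # kg/m^3, assumed basaltic magma density
gt = 1e12                 # kg per gigatonne

magma_mass = volume_m3 * density   # ~3.5e15 kg of erupted magma

# Petrologic estimates put degassed SO2 at 0.12-0.28% of erupted mass
print(magma_mass * 0.0012 / gt)    # ~4.2 Gt SO2 (lower bound)
print(magma_mass * 0.0028 / gt)    # ~9.8 Gt SO2, near the ~9.2 Gt
                                   # figure quoted in the text

# The 12 Gt of sulfur released at 1.2 Gt per year implies an
# eruption sustained for roughly a decade.
print(12 / 1.2)                    # ~10 years
```

The exact totals shift with the assumed density and volatile fraction, which is one reason the published estimates span a factor of a few.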
The hotspot hypothesis is not universally accepted, as it leaves several questions unresolved. The Yellowstone volcanism track shows a large apparent bow in the hotspot track that does not correspond to changes in plate motion if the northern CRBG floods are considered. Further, the Yellowstone images show necking of the plume at , which may correspond to phase changes or may reflect still-to-be-understood viscosity effects. Additional data collection and further modeling will be required to reach a consensus on the actual mechanism.

Speed of flood basalt emplacement
The Columbia River Basalt Group flows exhibit essentially uniform chemical properties through the bulk of individual flows, suggesting rapid emplacement. Ho and Cashman (1997) characterized the -long Ginkgo flow of the Frenchman Springs Member, determining that it had been emplaced in roughly a week, based on the melting temperature measured along the flow from its origin to its most distant point, combined with hydraulic considerations. The Ginkgo basalt was examined over its flow path from a Ginkgo flow feeder dike near Kahlotus, Washington, to the flow terminus in the Pacific Ocean at Yaquina Head, Oregon. The basalt had an upper melting temperature of and a lower temperature of ; this indicates that the maximum temperature drop along the Ginkgo flow was 20 °C. The lava must have spread quickly to achieve this uniformity. Analyses indicate that the flow must have remained laminar, as turbulent flow would cool more quickly. This could be accomplished by sheet flow, which can travel at velocities of without turbulence and with minimal cooling, suggesting that the Ginkgo flow was emplaced in less than a week. The cooling/hydraulics analyses are supported by an independent indicator: if longer periods had been required, external water from temporarily dammed rivers would have intruded, resulting in both more dramatic cooling rates and increased volumes of pillow lava. Ho's analysis is consistent with that of Reidel, Tolan, & Beeson (1994), who proposed a maximum Pomona flow emplacement duration of several months, based on the time required for rivers to re-establish themselves in their canyons following a basalt flow interruption.

Dating of the flood basalt flows
Three major tools are used to date the CRBG flows: stratigraphy, radiometric dating, and magnetostratigraphy. These techniques have been key to correlating data from disparate basalt exposures and borehole samples across five states.

Major eruptive pulses of flood basalt lava are laid down stratigraphically. The layers can be distinguished by physical characteristics and chemical composition. Each distinct layer is typically assigned a name, usually based on the area (valley, mountain, or region) where that formation is exposed and available for study. Stratigraphy provides a relative ordering (an ordinal ranking) of the CRBG layers.

Absolute dates, subject to statistical uncertainty, are determined through radiometric dating using isotope ratios, such as 40Ar/39Ar dating, which can be used to identify the date at which the basalt solidified. In the CRBG deposits, 40Ar, which is produced by 40K decay, only accumulates after the melt solidifies (see the age equation below).

Magnetostratigraphy is also used to determine age. This technique compares the pattern of magnetic polarity zones in CRBG layers with the magnetic polarity timescale. Samples are analyzed to determine their characteristic remanent magnetization, acquired from the Earth's magnetic field at the time a stratum was deposited.
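For context on the radiometric method described above: in conventional K–Ar dating, of which the 40Ar/39Ar method is a refinement, the age follows from the radiogenic argon accumulated since solidification. A standard form of the age equation (general geochronology, not specific to the CRBG studies) is

$$ t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\lambda}{\lambda_e}\cdot\frac{^{40}\mathrm{Ar}^{*}}{^{40}\mathrm{K}}\right) $$

where $\lambda \approx 5.54\times10^{-10}\,\mathrm{yr}^{-1}$ is the total decay constant of 40K, $\lambda_e$ is the partial decay constant for the branch that produces 40Ar (about 10.5% of decays), and 40Ar* is the radiogenic argon measured in the sample. Because 40Ar only begins to accumulate once the melt has solidified, $t$ dates the solidification of the basalt.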
This is possible because, as magnetic minerals precipitate in the melt (crystallize), they align themselves with Earth's current magnetic field. The Steens Basalt captured a highly detailed record of the Earth's magnetic reversal that occurred roughly 15 million years ago. Over a 10,000-year period, more than 130 flows solidified – roughly one flow every 75 years. As each flow cooled below about , it captured the magnetic field's orientation: normal, reversed, or one of several intermediate positions. Most of the flows froze with a single magnetic orientation. However, several of the flows, which froze progressively toward the center from both the upper and lower surfaces, captured substantial variations in magnetic field direction as they solidified. The observed change in direction was reported as 50° over 15 days. Major flows Steens Basalt The Steens Basalt flows covered about of the Oregon Plateau in sections up to thick. It contains the earliest identified eruption of the CRBG large igneous province. The type locality for the Steens basalt, which covers a large portion of the Oregon Plateau, is an approximately face of Steens Mountain showing multiple layers of basalt. The oldest of the flows considered part of the Columbia River Basalt Group, the Steens basalt, includes flows geographically separated but roughly concurrent with the Imnaha flows. Older Imnaha basalt north of Steens Mountain overlies the chemically distinct lowermost flows of Steens basalt; hence some flows of the Imnaha are stratigraphically younger than the lowermost Steens basalt. One geomagnetic field reversal occurred during the Steens Basalt eruptions at approximately 16.7 Ma, as dated using 40Ar/39Ar ages and the geomagnetic polarity timescale. Steens Mountain and related sections of Oregon Plateau flood basalts at Catlow Peak and Poker Jim Ridge, to the southeast and west of Steens Mountain, provide the most detailed magnetic field reversal data (reversed-to-normal polarity transition) yet reported in volcanic rocks. The observed trend in feeder dike swarms associated with the Steens Basalt flow is considered atypical of other dike swarm trends associated with the CRBG. These swarms, characterized by a maintained trend of N20°E, trace the northward continuation of the Nevada shear zone and have been attributed to magmatic rise through this zone on a regional scale. Imnaha Basalt Virtually coeval with the oldest of the flows, the Imnaha basalt flows welled up across northeastern Oregon. There were 26 major flows over the period, one roughly every 15,000 years. Although estimates are that this amounts to about 10% of the total flows, they have been buried under more recent flows, and are visible in few locations. They can be seen along the lower benches of the Imnaha River and Snake River in Wallowa County. The Imnaha lavas have been dated using the K–Ar technique, and show a broad range of dates. The oldest is 17.67±0.32 Ma, with younger lava flows ranging to 15.50±0.40 Ma. Although the Imnaha Basalt overlies Lower Steens Basalt, it has been suggested that it is interfingered with Upper Steens Basalt. Grande Ronde Basalt The next oldest of the flows, from 17 million to 15.6 million years ago, make up the Grande Ronde Basalt. Units (flow zones) within the Grande Ronde Basalt include the Meyer Ridge and the Sentinel Bluffs units. Geologists estimate that the Grande Ronde Basalt comprises about 85 percent of the total flow volume.
It is characterized by a number of dikes called the Chief Joseph Dike Swarm, near Joseph, Enterprise, Troy and Walla Walla, through which the lava welled up (estimates run as high as 20,000 such dikes). Many of the dikes were fissures wide and up to in length, allowing huge quantities of magma to well up. Much of the lava flowed north into Washington as well as down the Columbia River channel to the Pacific Ocean; the tremendous flows created the Columbia River Plateau. The weight of this flow (and the emptying of the underlying magma chamber) caused central Washington to sink, creating the broad Columbia Basin in Washington. The type locality for the formation is the canyon of the Grande Ronde River. Grande Ronde basalt flows and dikes can also be seen in the exposed walls of Joseph Canyon along Oregon Route 3. The Grande Ronde basalt flows flooded down the ancestral Columbia River channel to the west of the Cascade Mountains. It can be found exposed along the Clackamas River and at Silver Falls State Park, where the falls plunge over multiple layers of the Grande Ronde basalt. Evidence of eight flows can be found in the Tualatin Mountains on the west side of Portland. Individual flows included large quantities of basalt. The McCoy Canyon flow of the Sentinel Bluffs Member released of basalt in layers of in thickness. The Umtanum flow has been estimated at in layers deep. The Pruitt Draw flow of the Teepee Butte Member released about with layers of basalt up to thick. Wanapum Basalt The Wanapum Basalt is made up of the Eckler Mountain Member (15.6 million years ago), the Frenchman Springs Member (15.5 million years ago), the Roza Member (14.9 million years ago) and the Priest Rapids Member (14.5 million years ago). They originated from vents between Pendleton, Oregon and Hanford, Washington. The Frenchman Springs Member flowed along paths similar to those of the Grande Ronde basalts, but can be identified by different chemical characteristics. It flowed west to the Pacific, and can be found in the Columbia Gorge, along the upper Clackamas River, in the hills south of Oregon City, and as far west as Yaquina Head near Newport, Oregon – a distance of . Saddle Mountains Basalt The Saddle Mountains Basalt, seen prominently at the Saddle Mountains, is made up of the Umatilla Member flows, the Wilbur Creek Member flows, the Asotin Member flows (13 million years ago), the Weissenfels Ridge Member flows, the Esquatzel Member flows, the Elephant Mountain Member flows (10.5 million years ago), the Buford Member flows, the Ice Harbor Member flows (8.5 million years ago) and the Lower Monumental Member flows (6 million years ago). Related geologic structures Oregon High Lava Plains It has been observed that the Oregon High Lava Plains is a complementary system of propagating rhyolite eruptions with the same point of origin. The two phenomena occurred concurrently, with the High Lava Plains propagating westward since ~10 Ma, while the Snake River Plain volcanism propagated eastward.
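As a numerical aside to the radiometric dating described above: the age of a solidified basalt follows from the accumulation of radiogenic 40Ar produced by 40K decay. The sketch below applies the textbook K–Ar age equation with the standard decay constants; the measured Ar/K ratio is a made-up illustrative value, not a CRBG measurement.

```python
import math

# Standard 40K decay constants (Steiger & Jaeger 1977 convention).
LAMBDA_TOTAL = 5.543e-10  # total decay constant of 40K, per year
LAMBDA_EC = 0.581e-10     # partial constant for the branch producing 40Ar

def k_ar_age_years(ar40_over_k40: float) -> float:
    """Age from the radiogenic 40Ar/40K ratio, valid because 40Ar only
    accumulates after the melt solidifies:
    t = (1/lambda) * ln(1 + (lambda/lambda_ec) * 40Ar/40K)."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40
    )

# Illustrative ratio only; real samples are measured by mass spectrometry
# (or dated with the related 40Ar/39Ar method mentioned above).
print(f"{k_ar_age_years(1.0e-3) / 1e6:.1f} Myr")  # ~17.1 Myr
```

The illustrative ratio returns an age of roughly 17 million years, comparable to the oldest CRBG flows.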
Physical sciences
Geologic features
Earth science
7466971
https://en.wikipedia.org/wiki/Parabolic%20partial%20differential%20equation
Parabolic partial differential equation
A parabolic partial differential equation is a type of partial differential equation (PDE). Parabolic PDEs are used to describe a wide variety of time-dependent phenomena in fields including engineering science, quantum mechanics and financial mathematics. Examples include the heat equation, the time-dependent Schrödinger equation and the Black–Scholes equation. Definition To define the simplest kind of parabolic PDE, consider a real-valued function $u(x, y)$ of two independent real variables, $x$ and $y$. A second-order, linear, constant-coefficient PDE for $u$ takes the form $a u_{xx} + 2b u_{xy} + c u_{yy} + d u_x + e u_y + f = 0$, where the subscripts denote the first- and second-order partial derivatives with respect to $x$ and $y$. The PDE is classified as parabolic if the coefficients of the principal part (i.e. the terms containing the second derivatives of $u$) satisfy the condition $b^2 - ac = 0$. Usually $x$ represents one-dimensional position and $y$ represents time, and the PDE is solved subject to prescribed initial and boundary conditions. Equations with $b^2 - ac < 0$ are termed elliptic while those with $b^2 - ac > 0$ are hyperbolic. The name "parabolic" is used because the assumption on the coefficients is the same as the condition for the analytic geometry equation $a x^2 + 2b xy + c y^2 + d x + e y + f = 0$ to define a planar parabola. The basic example of a parabolic PDE is the one-dimensional heat equation $u_t = \alpha u_{xx}$, where $u(x, t)$ is the temperature at position $x$ along a thin rod at time $t$ and $\alpha$ is a positive constant called the thermal diffusivity. The heat equation says, roughly, that temperature at a given time and point rises or falls at a rate proportional to the difference between the temperature at that point and the average temperature near that point. The quantity $u_{xx}$ measures how far off the temperature is from satisfying the mean value property of harmonic functions. The concept of a parabolic PDE can be generalized in several ways. For instance, the flow of heat through a material body is governed by the three-dimensional heat equation $u_t = \alpha \Delta u$, where $\Delta u = u_{xx} + u_{yy} + u_{zz}$ denotes the Laplace operator acting on $u$. This equation is the prototype of a multi-dimensional parabolic PDE. Noting that $-\Delta$ is an elliptic operator suggests a broader definition of a parabolic PDE: $u_t = -L u$, where $L$ is a second-order elliptic operator (implying that $L$ must be positive; a case where the sign is reversed is considered below). A system of partial differential equations for a vector $u$ can also be parabolic. For example, such a system is hidden in an equation of the form $\nabla \cdot (a(x) \nabla u(x)) + b(x)^{\mathsf T} \nabla u(x) + c u(x) = f(x)$ if the matrix-valued function $a(x)$ has a kernel of dimension 1. Solution Under broad assumptions, an initial/boundary-value problem for a linear parabolic PDE has a solution for all time. The solution $u(x, t)$, as a function of $x$ for a fixed time $t > 0$, is generally smoother than the initial data $u(x, 0) = u_0(x)$. For a nonlinear parabolic PDE, a solution of an initial/boundary-value problem might explode in a singularity within a finite amount of time. It can be difficult to determine whether a solution exists for all time, or to understand the singularities that do arise. Such interesting questions arise in the solution of the Poincaré conjecture via Ricci flow. Backward parabolic equation One occasionally encounters a so-called backward parabolic PDE, which takes the form $u_t = L u$ (note the absence of a minus sign). An initial-value problem for the backward heat equation, $u_t = -\Delta u$ for $0 \le t \le T$ with $u(x, 0) = u_0(x)$, is equivalent to a final-value problem for the ordinary heat equation, $v_t = \Delta v$ for $0 \le t \le T$ with $v(x, T) = u_0(x)$, via the substitution $v(x, t) = u(x, T - t)$. Similarly to a final-value problem for a parabolic PDE, an initial-value problem for a backward parabolic PDE is usually not well-posed (solutions often grow unbounded in finite time, or even fail to exist).
Nonetheless, these problems are important for the study of the reflection of singularities of solutions to various other PDEs.
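To make the prototype concrete, here is a minimal sketch of the classic explicit finite-difference (FTCS) scheme for the one-dimensional heat equation $u_t = \alpha u_{xx}$ on a rod with fixed zero-temperature ends; the grid sizes and initial profile are arbitrary illustrative choices.

```python
import math

# Explicit (FTCS) scheme for u_t = alpha * u_xx on 0 <= x <= 1 with u = 0
# at both ends. The scheme is stable only if r = alpha*dt/dx**2 <= 1/2.
alpha, nx, steps = 1.0, 51, 500
dx, dt = 1.0 / (nx - 1), 1e-4
r = alpha * dt / dx**2
assert r <= 0.5, "explicit scheme unstable for this dt/dx combination"

# Illustrative initial condition: a single sine bump.
u = [math.sin(math.pi * i * dx) for i in range(nx)]

for _ in range(steps):
    u = [0.0] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx - 1)
    ] + [0.0]

# For this initial condition the exact solution is exp(-alpha*pi^2*t)*sin(pi*x),
# so the numeric peak can be checked directly.
t = steps * dt
print(f"numeric peak {max(u):.4f} vs exact {math.exp(-alpha * math.pi**2 * t):.4f}")
```

Running the sketch shows the initial bump decaying smoothly, a small illustration of the smoothing property of parabolic equations noted above.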
Mathematics
Differential equations
null
9674107
https://en.wikipedia.org/wiki/Pharmacokinetics
Pharmacokinetics
Pharmacokinetics (from Ancient Greek pharmakon "drug" and kinetikos "moving, putting in motion"; see chemical kinetics), sometimes abbreviated as PK, is a branch of pharmacology dedicated to describing how the body affects a specific substance after administration. The substances of interest include any chemical xenobiotic such as pharmaceutical drugs, pesticides, food additives, cosmetics, etc. It attempts to analyze chemical metabolism and to discover the fate of a chemical from the moment that it is administered up to the point at which it is completely eliminated from the body. Pharmacokinetics is based on mathematical modeling that places great emphasis on the relationship between drug plasma concentration and the time elapsed since the drug's administration. Pharmacokinetics is the study of how an organism affects the drug, whereas pharmacodynamics (PD) is the study of how the drug affects the organism. Both together influence dosing, benefit, and adverse effects, as seen in PK/PD models. ADME A number of phases occur once the drug enters into contact with the organism; these are described using the acronym ADME (or LADME if liberation is included as a separate step from absorption): Liberation – the process of the active ingredient separating from its pharmaceutical formulation. Absorption – the process of the substance entering the blood circulation. Distribution – the dispersion or dissemination of the substance throughout the fluids and tissues of the body. Metabolism (or biotransformation) – the irreversible transformation of parent compounds into daughter metabolites. Excretion – the removal of the substance from the body.
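Since the article stresses mathematical modeling of plasma concentration versus time, a minimal sketch of the simplest such model may help: a one-compartment model after an intravenous bolus dose, where concentration decays exponentially. All numbers (dose, volume of distribution, elimination rate constant) are hypothetical illustrative values, not data for any real drug.

```python
import math

def concentration_mg_per_l(t_h: float, dose_mg: float, v_d_l: float, k_e: float) -> float:
    """One-compartment IV bolus model: C(t) = (dose / V_d) * exp(-k_e * t)."""
    return (dose_mg / v_d_l) * math.exp(-k_e * t_h)

# Hypothetical drug: 500 mg IV bolus, 40 L volume of distribution,
# first-order elimination rate constant of 0.2 per hour.
dose, v_d, k_e = 500.0, 40.0, 0.2
half_life_h = math.log(2) / k_e  # elimination half-life, ~3.5 h here

for t in (0, 2, 4, 8, 12):
    print(f"t = {t:2d} h: C = {concentration_mg_per_l(t, dose, v_d, k_e):.2f} mg/L")
print(f"elimination half-life: {half_life_h:.1f} h")
```

Fitting such curves to measured plasma concentrations is the basic exercise on which PK dosing calculations rest.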
Biology and health sciences
Drugs and pharmacology
null
16453539
https://en.wikipedia.org/wiki/Acute%20%28medicine%29
Acute (medicine)
In medicine, describing a disease as acute denotes that it is of recent onset; it occasionally denotes a short duration. The quantification of how much time constitutes "short" and "recent" varies by disease and by context, but the core denotation of "acute" is always qualitatively in contrast with "chronic", which denotes long-lasting disease (for example, in acute leukaemia and chronic leukaemia). In the context of the mass noun "acute disease", it refers to the acute phase (that is, a short course) of any disease entity. For example, in an article on ulcerative enteritis in poultry, the author says, "in acute disease there may be increased mortality without any obvious signs", referring to the acute form or phase of ulcerative enteritis. Meaning variations A mild stubbed toe is an acute injury. Similarly, many acute upper respiratory infections and acute gastroenteritis cases in adults are mild and usually resolve within a few days or weeks. The term "acute" is also included in the definition of several diseases, such as severe acute respiratory syndrome, acute leukaemia, acute myocardial infarction, and acute hepatitis. This is often to distinguish diseases from their chronic forms, such as chronic leukaemia, or to highlight the sudden onset of the disease, such as acute myocardial infarction. Related terminology Related terms include: Acute care Acute care is the early and specialist management of adult patients who have a wide range of medical conditions requiring urgent or emergency care, usually within 48 hours of admission or referral from other specialties. Acute hospitals are those intended for short-term medical and/or surgical treatment and care; such care is the remit of the medical speciality of acute medicine, as primary care is often not positioned to assume this role.
Biology and health sciences
Disease: general classification
Health
5725438
https://en.wikipedia.org/wiki/Early%20Earth
Early Earth
Early Earth, also known as the proto-Earth, is loosely defined as encompassing Earth in its first one billion years, or gigayear (Ga, 10⁹ y), from its initial formation in the young Solar System at about 4.55 Ga to some time in the Archean eon at approximately 3.5 Ga. On the geologic time scale, this comprises all of the Hadean eon (starting with the formation of the Earth about 4.6 billion years ago), the Eoarchean era (starting 4 billion years ago) and part of the Paleoarchean era (starting 3.6 billion years ago) of the Archean eon. This period of Earth's history involved the planet's formation from the solar nebula via a process known as accretion. This time period included intense meteorite bombardment as well as giant impacts, including the Moon-forming impact, which resulted in a series of magma oceans and episodes of core formation. After formation of the core, meteorites or comets may have delivered water and other volatile compounds to the Earth in a "late veneer". Although little crustal material from this period survives, the oldest dated specimen is a zircon mineral of 4.404 ± 0.008 Ga enclosed in a metamorphosed sandstone conglomerate in the Jack Hills of the Narryer Gneiss Terrane of Western Australia. The earliest supracrustals (such as the Isua greenstone belt) date from the latter half of this period, about 3.8 Ga, around the same time as the peak of the Late Heavy Bombardment. History According to evidence from radiometric dating and other sources, Earth formed about 4.54 billion years ago. The current dominant theory of planet formation suggests that planets such as Earth form in about 50 to 100 million years, but more recently proposed alternative processes and timescales have stimulated ongoing debate in the planetary science community. For example, in June 2023, one team of scientists reported evidence that Earth may have formed in just three million years. Nonetheless, within the first billion years of the formation of Earth, life appeared in its oceans and began to affect its atmosphere and surface, promoting the proliferation of aerobic as well as anaerobic organisms. Since then, the combination of Earth's distance from the Sun, its physical properties and its geological history has allowed life to emerge, develop photosynthesis, and, later, evolve further and thrive. The earliest life on Earth arose at least 3.5 billion years ago. Earlier possible evidence of life includes graphite, which may have a biogenic origin, in 3.7-billion-year-old metasedimentary rocks discovered in southwestern Greenland and 4.1-billion-year-old zircon grains in Western Australia. In November 2020, an international team of scientists reported studies suggesting that the primeval atmosphere of the early Earth was very different from the conditions used in the Miller–Urey experiments on the origin of life on Earth.
Physical sciences
Earth science basics: General
Earth science
1065882
https://en.wikipedia.org/wiki/Germ%20layer
Germ layer
A germ layer is a primary layer of cells that forms during embryonic development. The three germ layers in vertebrates are particularly pronounced; however, all eumetazoans (animals that are sister taxa to the sponges) produce two or three primary germ layers. Some animals, like cnidarians, produce two germ layers (the ectoderm and endoderm), making them diploblastic. Other animals such as bilaterians produce a third layer (the mesoderm) between these two layers, making them triploblastic. Germ layers eventually give rise to all of an animal's tissues and organs through the process of organogenesis. History Caspar Friedrich Wolff observed organization of the early embryo in leaf-like layers. In 1817, Heinz Christian Pander discovered three primordial germ layers while studying chick embryos. Between 1850 and 1855, Robert Remak further refined the germ cell layer (Keimblatt) concept, stating that the external, internal and middle layers form respectively the epidermis, the gut, and the intervening musculature and vasculature. The term "mesoderm" was introduced into English by Huxley in 1871, and "ectoderm" and "endoderm" by Lankester in 1873. Evolution Among animals, sponges show the least amount of compartmentalization, having a single germ layer. Although they have differentiated cells (e.g. collar cells), they lack true tissue coordination. Diploblastic animals, Cnidaria and Ctenophora, show an increase in compartmentalization, having two germ layers, the endoderm and ectoderm. Diploblastic animals are organized into recognisable tissues. All bilaterian animals (from flatworms to humans) are triploblastic, possessing a mesoderm in addition to the germ layers found in diploblasts. Triploblastic animals develop recognizable organs. Development Fertilization leads to the formation of a zygote. During the next stage, cleavage, mitotic cell divisions transform the zygote into a hollow ball of cells, a blastula. This early embryonic form undergoes gastrulation, forming a gastrula with either two or three layers (the germ layers). In all vertebrates, these progenitor cells differentiate into all adult tissues and organs. In the human embryo, after about three days, the zygote forms a solid mass of cells by mitotic division, called a morula. This then changes to a blastocyst, consisting of an outer layer called a trophoblast, and an inner cell mass called the embryoblast. Filled with uterine fluid, the blastocyst breaks out of the zona pellucida and undergoes implantation. The inner cell mass initially has two layers: the hypoblast and epiblast. At the end of the second week, a primitive streak appears. The epiblast in this region moves towards the primitive streak, dives down into it, and forms a new layer, called the endoderm, pushing the hypoblast out of the way (this goes on to form the amnion). The epiblast keeps moving and forms a second layer, the mesoderm. The top layer is now called the ectoderm. Gastrulation occurs in reference to the primary body axis. Germ layer formation is linked to the primary body axis as well, though it is less reliant on it than gastrulation is. Hydractinia shows germ layer formation that transpires as a mixed delamination. In mice, germ layer differentiation is controlled by two transcription factors: the Sox2 and Oct4 proteins. These transcription factors cause the pluripotent mouse embryonic stem cells to select a germ layer fate. Sox2 promotes ectodermal differentiation, while Oct4 promotes mesendodermal differentiation.
Each gene inhibits what the other promotes. Amounts of each protein vary between cells, causing individual embryonic stem cells to select their fate. The germ layers Endoderm The endoderm is one of the germ layers formed during animal embryonic development. Cells migrating inward along the archenteron form the inner layer of the gastrula, which develops into the endoderm. The endoderm consists at first of flattened cells, which subsequently become columnar. It forms the epithelial lining of the whole of the digestive tract except part of the mouth and pharynx and the terminal part of the rectum (which are lined by involutions of the ectoderm). It also forms the lining cells of all the glands which open into the digestive tract, including those of the liver and pancreas; the epithelium of the auditory tube and tympanic cavity; the trachea, bronchi, and alveoli of the lungs; the bladder and part of the urethra; and the follicle lining of the thyroid gland and thymus. The endoderm forms: the pharynx, the esophagus, the stomach, the small intestine, the colon, the liver, the pancreas, the bladder, the epithelial parts of the trachea and bronchi, the lungs, the thyroid, and the parathyroid. Mesoderm The mesoderm germ layer forms in the embryos of triploblastic animals. During gastrulation, some of the cells migrating inward contribute to the mesoderm, an additional layer between the endoderm and the ectoderm. The formation of a mesoderm leads to the development of a coelom. Organs formed inside a coelom can freely move, grow, and develop independently of the body wall, while fluid cushions and protects them from shocks. The mesoderm has several components which develop into tissues: intermediate mesoderm, paraxial mesoderm, lateral plate mesoderm, and chorda-mesoderm. The chorda-mesoderm develops into the notochord. The intermediate mesoderm develops into kidneys and gonads. The paraxial mesoderm develops into cartilage, skeletal muscle, and dermis. The lateral plate mesoderm develops into the circulatory system (including the heart and spleen), the wall of the gut, and the wall of the human body. Through cell signaling cascades and interactions with the ectodermal and endodermal cells, the mesodermal cells begin the process of differentiation. The mesoderm forms: muscle (smooth and striated), bone, cartilage, connective tissue, adipose tissue, circulatory system, lymphatic system, dermis, dentine of teeth, genitourinary system, serous membranes, spleen and notochord. Ectoderm The ectoderm generates the outer layer of the embryo, and it forms from the embryo's epiblast. The ectoderm develops into the surface ectoderm, neural crest, and the neural tube. The surface ectoderm develops into: epidermis, hair, nails, lens of the eye, sebaceous glands, cornea, tooth enamel, the epithelium of the mouth and nose. The neural crest of the ectoderm develops into: peripheral nervous system, adrenal medulla, melanocytes, facial cartilage. The neural tube of the ectoderm develops into: brain, spinal cord, posterior pituitary, motor neurons, retina. Note: The anterior pituitary develops from the ectodermal tissue of Rathke's pouch. Neural crest Because of its great importance, the neural crest is sometimes considered a fourth germ layer. It is, however, derived from the ectoderm.
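The Sox2/Oct4 cross-antagonism described above has the structure of a classic mutual-repression toggle switch. The sketch below integrates a generic two-gene toggle model with Euler steps; it is only a qualitative illustration of how cross-inhibition plus a small initial bias commits a cell to one fate, not a quantitative Sox2/Oct4 model from the literature.

```python
# Generic mutual-repression toggle: each factor represses the other's
# production (Hill-type repression) and decays linearly. Parameters are
# illustrative, chosen to put the system in its bistable regime.
ALPHA, N, DT, STEPS = 4.0, 2.0, 0.01, 5000

def simulate(x0: float, y0: float) -> tuple[float, float]:
    x, y = x0, y0
    for _ in range(STEPS):
        dx = ALPHA / (1.0 + y**N) - x  # x production repressed by y
        dy = ALPHA / (1.0 + x**N) - y  # y production repressed by x
        x, y = x + DT * dx, y + DT * dy
    return x, y

# A small initial bias in either factor is amplified into a committed state.
for x0, y0 in [(1.1, 1.0), (1.0, 1.1)]:
    x, y = simulate(x0, y0)
    fate = "x-high" if x > y else "y-high"
    print(f"start ({x0}, {y0}) -> steady state ({x:.2f}, {y:.2f}): {fate}")
```

In the analogy, the x-high and y-high steady states stand for the ectodermal versus mesendodermal biases the two factors promote.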
Biology and health sciences
Animal reproduction
Biology
1066000
https://en.wikipedia.org/wiki/Effelsberg%20100-m%20Radio%20Telescope
Effelsberg 100-m Radio Telescope
The Effelsberg 100-m Radio Telescope is a radio telescope in the Ahr Hills (part of the Eifel) in Bad Münstereifel, Germany. Inaugurated in 1972, for 29 years the Effelsberg Radio Telescope was the largest fully steerable radio telescope on Earth, surpassing the Lovell Telescope in the UK. In 2000, it was surpassed by the Green Bank Observatory's Robert C. Byrd Green Bank Telescope in Green Bank, US, which has a slightly larger elliptical 100 by 110-metre aperture. Geography The telescope is located about 1.3 km northeast of Effelsberg, a southeastern part of the town of Bad Münstereifel in North Rhine-Westphalia. It is less than 300 m west of the 398 m high Hünerberg, which is in neighbouring Rhineland-Palatinate. The state boundary is a stream, the Effelsberger Bach, which runs only a few metres east of the telescope. The Effelsberger Bach is 6.5 km long, flowing from the Effelsberger Wald into the Sahrbach, which in turn flows south and into the Ahr river. A hiking path leads past the telescope; in 2004 part of this was turned into a planet trail with information panels about the Solar System with its planets. The trail ends at the 39 cm model of the Sun next to the visitor centre. Radio telescope The Effelsberg radio telescope is operated by the Max Planck Institute for Radio Astronomy in Bonn, the radio astronomy institute of the Max-Planck-Gesellschaft. It was constructed from 1968 to 1971 and inaugurated on 1 August 1972. A major technical difficulty in building a radio telescope of 100 m diameter was how to deal with the deformation of the mirror due to gravity when it is rotated to point in a different direction. The mirror must have a precise parabolic shape to focus the radio waves, but a conventionally-designed dish of this size would "sag" slightly when rotated, so that the mirror loses its parabolic shape. The Effelsberg telescope uses a novel computer-designed mirror support structure which deforms in such a way that the deformed mirror will always take a parabolic shape. The focus will move during such deformation, and the feed antenna suspended in front of the mirror is moved slightly by the computer control system as the telescope is rotated to keep it at the focus. Tests after completion of the telescope showed that the intended accuracy of the mirror surface of 1 mm had not only been met but significantly exceeded. About 45% of the observing time is available to external astronomers. The Effelsberg 100-m telescope was involved in several surveys, including the one at 408 MHz (73 cm) by Haslam et al.
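The homologous-deformation idea above (the dish may sag, but it sags into another paraboloid with a shifted focus) can be illustrated numerically. The sketch below fits a best-fit paraboloid to sampled surface points and recovers the new focal length; the dish radius, focal length and noise level are synthetic illustrative values, not actual Effelsberg survey data.

```python
import numpy as np

# Synthetic "deformed" dish: a paraboloid z = r**2 / (4*f) whose focal
# length has drifted to 30.2 m (all numbers made up for illustration).
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 50.0, size=200)                     # radial positions (m)
z = r**2 / (4 * 30.2) + 0.001 * rng.normal(size=r.size)  # surface + 1 mm noise

# Linear least squares for z = a*r**2 + z0, from which f = 1 / (4*a).
A = np.column_stack([r**2, np.ones_like(r)])
(a, z0), *_ = np.linalg.lstsq(A, z, rcond=None)
print(f"recovered focal length: {1 / (4 * a):.2f} m")
```

A control system that repositions the feed antenna at the recovered focus is, in essence, what keeps a homologously deforming dish usable as it tilts.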
Technology
Ground-based observatories
null
1067082
https://en.wikipedia.org/wiki/Cell%20death
Cell death
Cell death is the event of a biological cell ceasing to carry out its functions. This may be the result of the natural process of old cells dying and being replaced by new ones, as in programmed cell death, or may result from factors such as diseases, localized injury, or the death of the organism of which the cells are part. Apoptosis, or Type I cell death, and autophagy, or Type II cell death, are both forms of programmed cell death, while necrosis is a non-physiological process that occurs as a result of infection or injury. The term "cell necrobiology" has been used to describe the life processes associated with morphological, biochemical, and molecular changes which predispose, precede, and accompany cell death, as well as the consequences and tissue response to cell death. The word is derived from the Greek νεκρός meaning "death", βίος meaning "life", and λόγος meaning "the study of". The term was initially coined to broadly define investigations of the changes that accompany cell death, detected and measured by multiparameter flow and laser scanning cytometry. It has been used to describe the real-time changes during cell death, detected by flow cytometry. Types Programmed cell death Programmed cell death (PCD) is cell death mediated by an intracellular program. PCD is carried out in a regulated process, which usually confers advantage during an organism's life-cycle. For example, the differentiation of fingers and toes in a developing human embryo occurs because cells between the fingers apoptose; the result is that the digits separate. PCD serves fundamental functions during both plant and metazoan (multicellular animal) tissue development. Apoptosis Apoptosis is the process of programmed cell death (PCD) that may occur in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, and chromosomal DNA fragmentation. It is now thought that, in a developmental context, cells are induced to positively commit suicide, whilst in a homeostatic context the absence of certain survival factors may provide the impetus for suicide. There appears to be some variation in the morphology and indeed the biochemistry of these suicide pathways; some treading the path of "apoptosis", others following a more generalized pathway to deletion, but both usually being genetically and synthetically motivated. There is some evidence that certain symptoms of "apoptosis" such as endonuclease activation can be spuriously induced without engaging a genetic cascade; however, presumably true apoptosis and programmed cell death must be genetically mediated. It is also becoming clear that mitosis and apoptosis are toggled or linked in some way and that the balance achieved depends on signals received from appropriate growth or survival factors. Certain key proteins primarily employed in the repair of DNA damage can also induce apoptosis when DNA damage exceeds the cell's repair capability. These dual-role proteins protect against the proliferation of unstable damaged cells that might lead to cancer. Autophagy Autophagy is cytoplasmic, characterized by the formation of large vacuoles that eat away organelles in a specific sequence prior to the destruction of the nucleus.
Macroautophagy, often referred to as autophagy, is a catabolic process that results in the autophagosomic-lysosomal degradation of bulk cytoplasmic contents, abnormal protein aggregates, and excess or damaged organelles. Autophagy is generally activated by conditions of nutrient deprivation but has also been associated with physiological as well as pathological processes such as development, differentiation, neurodegenerative diseases, stress, infection and cancer. Other variations of PCD Other pathways of programmed cell death have been discovered. Called "non-apoptotic programmed cell-death" (or "caspase-independent programmed cell-death"), these alternative routes to death are as efficient as apoptosis and can function as either backup mechanisms or the main type of PCD. Some such forms of programmed cell death are anoikis, almost identical to apoptosis except in its induction; cornification, a form of cell death exclusive to the skin; excitotoxicity; ferroptosis, an iron-dependent form of cell death; and Wallerian degeneration. Plant cells undergo particular processes of PCD similar to autophagic cell death. However, some common features of PCD are highly conserved in both plants and metazoa. Activation-induced cell death (AICD) is a programmed cell death caused by the interaction of Fas receptor (Fas, CD95) and Fas ligand (FasL, CD95 ligand). It occurs as a result of repeated stimulation of specific T-cell receptors (TCR), and it helps to maintain peripheral immune tolerance. Therefore, an alteration of the process may lead to autoimmune diseases. In other words, AICD is a negative regulator of activated T-lymphocytes. Ischemic cell death, or oncosis, is a form of accidental, or passive, cell death that is often considered a lethal injury. The process is characterized by mitochondrial swelling, cytoplasm vacuolization, and swelling of the nucleus and cytoplasm. Mitotic catastrophe is an oncosuppressive mechanism that can lead to cell death that is due to premature or inappropriate entry of cells into mitosis. It is the most common mode of cell death in cancer cells exposed to ionizing radiation and many other anti-cancer treatments. Immunogenic cell death or immunogenic apoptosis is a form of cell death caused by some cytostatic agents such as anthracyclines, oxaliplatin and bortezomib, or by radiotherapy and photodynamic therapy (PDT). Pyroptosis is a highly inflammatory form of programmed cell death that occurs most frequently upon infection with intracellular pathogens and is likely to form part of the antimicrobial response in myeloid cells. PANoptosis is a unique inflammatory cell death pathway that integrates components from other cell death pathways. The totality of biological effects in PANoptosis cannot be individually accounted for by pyroptosis, apoptosis, or necroptosis alone. PANoptosis is regulated by multifaceted macromolecular complexes termed PANoptosomes. Phagoptosis is cell death resulting from a live cell being phagocytosed (i.e. eaten) by another cell (usually a phagocyte), resulting in death and digestion of the engulfed cell. Phagoptosis can occur to cells that are pathogenic, cancerous, aged, damaged or surplus to requirements. Necrotic cell death Necrosis is cell death where a cell has been badly damaged through external forces such as trauma or infection, and it occurs in several different forms. It is the sum of what happens to cells after their deaths.
In necrosis, a cell undergoes swelling, followed by uncontrolled rupture of the cell membrane with cell contents being expelled. These cell contents often then go on to cause inflammation in nearby cells. A form of programmed necrosis, called necroptosis, has been recognized as an alternative form of programmed cell death. It is hypothesized that necroptosis can serve as a cell-death backup to apoptosis when apoptosis signaling is blocked by endogenous or exogenous factors such as viruses or mutations. Necroptotic pathways are associated with death receptors such as tumor necrosis factor receptor 1. Cell death was previously classified on the basis of morphology, but classification has in recent years shifted to molecular and genetic criteria.
Biology and health sciences
Cell processes
Biology
1067978
https://en.wikipedia.org/wiki/Litopterna
Litopterna
Litopterna (from "smooth heel") is an extinct order of South American native ungulates that lived from the Paleocene to the end of the Pleistocene and early Holocene, around 62.5 million to 12,000 years ago, and were also present in Antarctica during the Eocene. They represent the second most diverse group of South American ungulates after Notoungulata. The order is divided into nine families, with Proterotheriidae and Macraucheniidae being the most diverse and last surviving families. Diversity The body forms of many litopterns, notably in the limb and skull structure, are broadly similar to those of living ungulates, unlike other South American native ungulate groups, which are often strongly divergent from living ungulates. Paleocene and Eocene litopterns generally had small body masses, with Protolipterna (Protolipternidae) estimated to have had a body mass of , though the Eocene sparnotheriodontids were considerably larger, with estimated body masses of around . Most proterotheriids had body masses of around while many macraucheniids had body masses of around . Some of the last macraucheniids like Macrauchenia were considerably larger, with body masses around a ton. Adianthidae generally had small body masses, with members of the genus Adianthus estimated to weigh . Members of the proterotheriid subfamily Megadolodinae are noted for having bunodont (rounded-cusp) molar teeth, which is largely unique to litopterns among South American native ungulates. Litopterns of the mid-late Cenozoic had hinge-like limb joints and hooves similar to those of modern ungulates, with the weight being supported on three toes in macraucheniids and one in proterotheriids, with the proterotheriid Thoatherium developing greater toe reduction than that present in living horses. Macraucheniids had long necks and limbs. Members of the macraucheniid subfamily Macraucheniinae saw the progressive migration of the nasal opening to the top of the skull, which was often historically suggested to indicate the presence of a trunk, though other authors have suggested that a moose-like prehensile lip, or a saiga-like nose to filter dust, are more likely. Ecology Litopterns were likely hindgut fermenters. At least some macraucheniids like Macrauchenia are suggested to have been mixed feeders, feeding on both browse and grass. Sparnotheriodontids are suggested to have been browsers. Some proterotheriids are suggested to have been browsers, while some members of the proterotheriid subfamily Megadolodinae like Megadolodus have been suggested to have been omnivorous, with at least part of their diet consisting of hard fruit. Evolutionary history Litopterna, like other "South American native ungulates", is thought to have originated from groups of archaic "condylarths" that migrated from North America. Sequencing of the collagen proteome and mitochondrial genome of Macrauchenia has revealed that litopterns are true ungulates, sharing a common ancestor with Notoungulata, and with their closest living relatives being Perissodactyla (the group containing living equines, rhinoceroses and tapirs) as part of the clade Panperissodactyla, with the split from Perissodactyla being estimated at around 66 million years ago. The relationship of Litopterna to other South American native ungulate groups is uncertain, though it may be closely related to the "condylarth" group Didolodontidae. The earliest litopterns appeared during the early Paleocene, around 62.5 million years ago.
Aside from South America, sparnotheriodontids are also known from the Eocene-aged La Meseta Formation in the Antarctic Peninsula, representing the only record of litopterns on the Antarctic continent. Litopterns declined during the Pliocene and Pleistocene, likely as a result of climatic change and competition with recently immigrated North American ungulates that arrived as part of the Great American Interchange, following the connection of the previously isolated North and South America via the Isthmus of Panama. Macrauchenia, Xenorhinotherium (Macraucheniidae) and Neolicaphrium (Proterotheriidae) were the last surviving genera of litopterns. All became extinct at the end of the Late Pleistocene around 12,000 years ago as part of the end-Pleistocene extinction event, along with most other large mammals in the Americas, coinciding with the arrival of the first humans to the continent. It is possible that hunting had a causal role in their extinction. Classification Order Litopterna Proacrodon Family Protolipternidae Asmithwoodwardia Miguelsoria Protolipterna Family Indaleciidae Adiantoides Indalecia Family Sparnotheriodontidae Phoradiadius Notiolofos Sparnotheriodon Victorlemoinea Family Amilnedwardsiidae Amilnedwardsia Ernestohaeckelia Rutimeyeria Family Notonychopidae Notonychops Requisia Superfamily Macrauchenioidea Family Adianthidae Proectocion Adianthinae Adianthus Proadiantus Proheptaconus Thadanius Tricoelodus Family Macraucheniidae Llullataruca Subfamily Cramaucheniinae Coniopternium Caliphrium Cramauchenia Phoenixauchenia Polymorphis Pternoconius Theosodon Subfamily Macraucheniinae Cullinia Huayqueriana Macrauchenia Macraucheniopsis Oxyodontherium Paranauchenia Promacrauchenia Scalabrinitherium Windhausenia Xenorhinotherium Superfamily Proterotherioidea Family Proterotheriidae Anisolambda Anisolophus Brachytherium Diadiaphorus Diplasiotherium Eoauchenia Eolicaphrium Epecuenia Epitherium Guilielmofloweria Heteroglyphis Lambdaconus Lambdaconops Mesolicaphrium Neobrachytherium Neodolodus Neolicaphrium Olisanophus Paramacrauchenia Paranisolambda Picturotherium Prolicaphrium Promylophis Proterotherium Protheosodon Pseudobrachytherium Tetramerorhinus Thoatherium Thoatheriopsis Villarroelia Uruguayodon Wainka Xesmodon Megadolodinae Bounodus Megadolodus
Biology and health sciences
Mammals: General
Animals
1068768
https://en.wikipedia.org/wiki/Phytoremediation
Phytoremediation
Phytoremediation technologies use living plants to clean up soil, air and water contaminated with hazardous contaminants. It is defined as "the use of green plants and the associated microorganisms, along with proper soil amendments and agronomic techniques to either contain, remove or render toxic environmental contaminants harmless". The term is an amalgam of the Greek phyto (plant) and Latin remedium (restoring balance). Although attractive for its cost, phytoremediation has not been demonstrated to redress any significant environmental challenge to the extent that contaminated space has been reclaimed. Phytoremediation is proposed as a cost-effective plant-based approach to environmental remediation that takes advantage of the ability of plants to concentrate elements and compounds from the environment and to detoxify various compounds without causing additional pollution. The concentrating effect results from the ability of certain plants, called hyperaccumulators, to bioaccumulate chemicals. The remediation effect is quite different. Toxic heavy metals cannot be degraded, but organic pollutants can be, and the latter are generally the major targets for phytoremediation. Several field trials have confirmed the feasibility of using plants for environmental cleanup. Background Soil remediation is an expensive and complicated process. Traditional methods involve removal of the contaminated soil followed by treatment and return of the treated soil. Phytoremediation could in principle be a more cost-effective solution. Phytoremediation may be applied to polluted soil or static water environments. This technology has been increasingly investigated and employed at sites with soils contaminated with heavy metals such as cadmium, lead, aluminum, arsenic and antimony. These metals can cause oxidative stress in plants, destroy cell membrane integrity, interfere with nutrient uptake, inhibit photosynthesis and decrease plant chlorophyll. Phytoremediation has been used successfully in the restoration of abandoned metal mine workings, at sites where polychlorinated biphenyls have been dumped during manufacture, and in the mitigation of ongoing coal mine discharges, reducing the impact of contaminants in soils, water, or air. Contaminants such as metals, pesticides, solvents, explosives, and crude oil and its derivatives have been mitigated in phytoremediation projects worldwide. Many plants such as mustard plants, alpine pennycress, hemp, and pigweed have proven to be successful at hyperaccumulating contaminants at toxic waste sites. Not all plants are able to accumulate heavy metals or organic pollutants, due to differences in the physiology of the plant. Even cultivars within the same species have varying abilities to accumulate pollutants. Advantages and limitations Advantages the cost of phytoremediation is lower than that of traditional processes both in situ and ex situ; the possibility of the recovery and re-use of valuable metals (by companies specializing in "phytomining"); it preserves the topsoil, maintaining the fertility of the soil; it increases soil health, yield, and plant phytochemicals; the use of plants also reduces erosion and metal leaching in the soil; noise, smell and visual disruption are usually less than with alternative methods. The calamine vegetation (Galmeivegetation) of hyperaccumulator plants is even protected by environmental legislation in many areas where it occurs. Limitations phytoremediation is limited to the surface area and depth occupied by the roots;
with plant-based systems of remediation, it is not possible to completely prevent the leaching of contaminants into the groundwater (without the complete removal of the contaminated ground, which in itself does not resolve the problem of contamination); the survival of the plants is affected by the toxicity of the contaminated land and the general condition of the soil; bio-accumulation of contaminants, especially metals, into the plants can affect consumer products like food and cosmetics, and requires the safe disposal of the affected plant material; when taking up heavy metals, sometimes the metal is bound to the soil organic matter, which makes it unavailable for the plant to extract; some plants are too hard to cultivate or too slow-growing to make them viable for phytoremediation despite their status as hyperaccumulators. Genetic engineering may improve desirable properties in target species but is controversial in some countries. Processes A range of processes mediated by plants or algae have been tested in treating environmental problems: Phytoextraction Phytoextraction (or phytoaccumulation or phytosequestration) exploits the ability of plants or algae to remove contaminants from soil or water into harvestable plant biomass. It is also used for the mining of metals such as copper(II) compounds. The roots take up substances from the soil or water and concentrate them above ground in the plant biomass. Organisms that can take up high amounts of contaminants are called hyperaccumulators. Phytoextraction can also be performed by plants (e.g. Populus and Salix) that take up lower levels of pollutants but, due to their high growth rate and biomass production, may remove a considerable amount of contaminants from the soil. Phytoextraction has been growing rapidly in popularity worldwide for the last twenty years or so. Typically, phytoextraction is used for heavy metals or other inorganics. At the time of disposal, contaminants are typically concentrated in the much smaller volume of the plant matter than in the initially contaminated soil or sediment. After harvest, a lower level of the contaminant will remain in the soil, so the growth/harvest cycle must usually be repeated through several crops to achieve a significant cleanup. After the process, the soil is remediated. Of course, many pollutants kill plants, so phytoremediation is not a panacea. For example, chromium is toxic to most higher plants at concentrations above 100 μM·kg⁻¹ dry weight. Mining of these extracted metals through phytomining is a conceivable way of recovering the material. Hyperaccumulating plants are often metallophytes. Induced or assisted phytoextraction is a process where a conditioning fluid containing a chelator or another agent is added to soil to increase metal solubility or mobilization so that the plants can absorb the metals more easily. While such additives can increase metal uptake by plants, they can also lead to large amounts of available metals in the soil beyond what the plants are able to translocate, causing potential leaching into the subsoil or groundwater. Examples of plants that are known to accumulate the following contaminants: Arsenic, using the sunflower (Helianthus annuus), or the Chinese brake fern (Pteris vittata). Cadmium, using willow (Salix viminalis), which acts as a phytoextractor of cadmium (Cd), zinc (Zn), and copper (Cu). Cadmium and zinc, using alpine pennycress (Thlaspi caerulescens), a hyperaccumulator of these metals at levels that would be toxic to many plants.
Specifically, pennycress leaves accumulate up to 380 mg/kg Cd. On the other hand, the presence of copper seems to impair its growth. Chromium is toxic to most plants; however, tomatoes (Solanum lycopersicum) show some promise. Lead, using Indian mustard (Brassica juncea), ragweed (Ambrosia artemisiifolia), hemp dogbane (Apocynum cannabinum), or poplar trees, which sequester lead in their biomass. Salt-tolerant (moderately halophytic) barley and/or sugar beets are commonly used for the extraction of sodium chloride (common salt) to reclaim fields that were previously flooded by sea water. Caesium-137 and strontium-90 were removed from a pond using sunflowers after the Chernobyl accident. Mercury, selenium and organic pollutants such as polychlorinated biphenyls (PCBs) have been removed from soils by transgenic plants containing genes for bacterial enzymes. Thallium is sequestered by some plants. Phytostabilization Phytostabilization reduces the mobility of substances in the environment, for example by limiting the leaching of substances from the soil. It focuses on the long-term stabilization and containment of the pollutant. The plant immobilizes the pollutants by binding them to soil particles, making them less available for plant or human uptake. Unlike phytoextraction, phytostabilization focuses mainly on sequestering pollutants in soil near the roots but not in plant tissues. Pollutants become less bioavailable, resulting in reduced exposure. The plants can also excrete a substance that produces a chemical reaction, converting the heavy metal pollutant into a less toxic form. Stabilization results in reduced erosion, runoff and leaching, in addition to reducing the bioavailability of the contaminant. An example application of phytostabilization is using a vegetative cap to stabilize and contain mine tailings. Some soil amendments decrease radiosource mobility, while at some concentrations the same amendments will increase mobility. Vidal et al. (2000) found the root mats of meadow grasses to be effective at demobilising radiosource materials, especially with certain combinations of other agricultural practices; they also found that the particular grass mix makes a significant difference. Phytodegradation Phytodegradation (also called phytotransformation) uses plants or microorganisms to degrade organic pollutants in the soil or within the body of the plant. The organic compounds are broken down by enzymes that the plant roots secrete, and these molecules are then taken up by the plant and released through transpiration. This process works best with organic contaminants like herbicides, trichloroethylene, and methyl tert-butyl ether. Phytotransformation results in the chemical modification of environmental substances as a direct result of plant metabolism, often resulting in their inactivation, degradation (phytodegradation), or immobilization (phytostabilization). In the case of organic pollutants, such as pesticides, explosives, solvents, industrial chemicals, and other xenobiotic substances, certain plants, such as Cannas, render these substances non-toxic by their metabolism. In other cases, microorganisms living in association with plant roots may metabolize these substances in soil or water. These complex and recalcitrant compounds cannot be broken down to basic molecules (water, carbon dioxide, etc.) by plant molecules, and, hence, the term phytotransformation represents a change in chemical structure without complete breakdown of the compound.
The term "Green Liver" is used to describe phytotransformation, as plants behave analogously to the human liver when dealing with these xenobiotic compounds (foreign compound/pollutant). After uptake of the xenobiotics, plant enzymes increase the polarity of the xenobiotics by adding functional groups such as hydroxyl groups (-OH). This is known as Phase I metabolism, similar to the way that the human liver increases the polarity of drugs and foreign compounds (drug metabolism). Whereas in the human liver enzymes such as cytochrome P450s are responsible for the initial reactions, in plants enzymes such as peroxidases, phenoloxidases, esterases and nitroreductases carry out the same role. In the second stage of phytotransformation, known as Phase II metabolism, plant biomolecules such as glucose and amino acids are added to the polarized xenobiotic to further increase the polarity (known as conjugation). This is again similar to the processes occurring in the human liver where glucuronidation (addition of glucose molecules by the UGT class of enzymes, e.g. UGT1A1) and glutathione addition reactions occur on reactive centres of the xenobiotic. Phase I and II reactions serve to increase the polarity and reduce the toxicity of the compounds, although many exceptions to the rule are seen. The increased polarity also allows for easy transport of the xenobiotic along aqueous channels. In the final stage of phytotransformation (Phase III metabolism), a sequestration of the xenobiotic occurs within the plant. The xenobiotics polymerize in a lignin-like manner and develop a complex structure that is sequestered in the plant. This ensures that the xenobiotic is safely stored, and does not affect the functioning of the plant. However, preliminary studies have shown that these plants can be toxic to small animals (such as snails), and, hence, plants involved in phytotransformation may need to be maintained in a closed enclosure. Hence, the plants reduce toxicity (with exceptions) and sequester the xenobiotics in phytotransformation. Trinitrotoluene phytotransformation has been extensively researched and a transformation pathway has been proposed. Phytostimulation Phytostimulation (or rhizodegradation) is the enhancement of soil microbial activity for the degradation of organic contaminants, typically by organisms that associate with roots. This process occurs within the rhizosphere, which is the layer of soil that surrounds the roots. Plants release carbohydrates and acids that stimulate microorganism activity which results in the biodegradation of the organic contaminants. This means that the microorganisms are able to digest and break down the toxic substances into harmless form. Phytostimulation has been shown to be effective in degrading petroleum hydrocarbons, PCBs, and PAHs. Phytostimulation can also involve aquatic plants supporting active populations of microbial degraders, as in the stimulation of atrazine degradation by hornwort. Phytovolatilization Phytovolatilization is the removal of substances from soil or water with release into the air, sometimes as a result of phytotransformation to more volatile and/or less polluting substances. In this process, contaminants are taken up by the plant and through transpiration, evaporate into the atmosphere. This is the most studied form of phytovolatilization, where volatilization occurs at the stem and leaves of the plant, however indirect phytovolatilization occurs when contaminants are volatilized from the root zone. 
Selenium (Se) and mercury (Hg) are often removed from soil through phytovolatilization. Poplar trees are among the most successful plants for removing VOCs through this process, due to their high transpiration rate. Rhizofiltration Rhizofiltration is a process that filters water through a mass of roots to remove toxic substances or excess nutrients. The pollutants remain absorbed in or adsorbed to the roots. This process is often used to clean up contaminated groundwater, either by planting directly in the contaminated site or by removing the contaminated water and providing it to the plants in an off-site location. In either case, though, the plants are typically first grown in a greenhouse under precise conditions. Biological hydraulic containment Biological hydraulic containment occurs when some plants, like poplars, draw water upwards through the soil into the roots and out through the plant, which decreases the movement of soluble contaminants downwards, deeper into the site and into the groundwater. Phytodesalination Phytodesalination uses halophytes (plants adapted to saline soil) to extract salt from the soil to improve its fertility. Role of genetics Breeding programs and genetic engineering are powerful methods for enhancing natural phytoremediation capabilities, or for introducing new capabilities into plants. Genes for phytoremediation may originate from a micro-organism or may be transferred from one plant to another variety better adapted to the environmental conditions at the cleanup site. For example, genes encoding a nitroreductase from a bacterium were inserted into tobacco and showed faster removal of TNT and enhanced resistance to the toxic effects of TNT. Researchers have also discovered a mechanism in plants that allows them to grow even when the pollution concentration in the soil is lethal for non-treated plants. Some natural, biodegradable compounds, such as exogenous polyamines, allow the plants to tolerate concentrations of pollutants 500 times higher than untreated plants, and to absorb more pollutants. Hyperaccumulators and biotic interactions A plant is said to be a hyperaccumulator if it can concentrate the pollutants to a minimum percentage which varies according to the pollutant involved (for example: more than 1,000 mg/kg of dry weight for nickel, copper, cobalt, chromium or lead; or more than 10,000 mg/kg for zinc or manganese). This capacity for accumulation is due to hypertolerance, or phytotolerance: the result of adaptive evolution of plants to hostile environments through many generations. A number of interactions may be affected by metal hyperaccumulation, including protection, interference with neighbouring plants of different species, mutualism (including mycorrhizae, pollen and seed dispersal), commensalism, and biofilm. Tables of hyperaccumulators Hyperaccumulators table – 1 : Al, Ag, As, Be, Cr, Cu, Mn, Hg, Mo, Naphthalene, Pb, Pd, Pt, Se, Zn Hyperaccumulators table – 2 : Nickel Hyperaccumulators table – 3 : Radionuclides (Cd, Cs, Co, Pu, Ra, Sr, U), Hydrocarbons, Organic Solvents. Phytoscreening As plants can translocate and accumulate particular types of contaminants, plants can be used as biosensors of subsurface contamination, thereby allowing investigators to delineate contaminant plumes quickly. Chlorinated solvents, such as trichloroethylene, have been observed in tree trunks at concentrations related to groundwater concentrations.
Phytoscreening As plants can translocate and accumulate particular types of contaminants, plants can be used as biosensors of subsurface contamination, thereby allowing investigators to delineate contaminant plumes quickly. Chlorinated solvents, such as trichloroethylene, have been observed in tree trunks at concentrations related to groundwater concentrations. To ease field implementation of phytoscreening, standard methods have been developed to extract a section of the tree trunk for later laboratory analysis, often by using an increment borer. Phytoscreening may lead to more efficient site investigations and reduce contaminated-site cleanup costs.
Technology
Environmental remediation
null
1070152
https://en.wikipedia.org/wiki/Mugger%20crocodile
Mugger crocodile
The mugger crocodile (Crocodylus palustris) is a medium-sized broad-snouted crocodile, also known as mugger and marsh crocodile. It is native to freshwater habitats from southern Iran to the Indian subcontinent, where it inhabits marshes, lakes, rivers and artificial ponds. It rarely reaches a body length of and is a powerful swimmer, but also walks on land in search of suitable waterbodies during the hot season. Both young and adult mugger crocodiles dig burrows to which they retreat when the ambient temperature drops below or exceeds . Females dig holes in the sand as nesting sites and lay up to 46 eggs during the dry season. The sex of hatchlings depends on temperature during incubation. Both parents protect the young for up to one year. The young feed on insects, while adults prey on fish, reptiles, birds and mammals. The mugger crocodile evolved at least and has been a symbol for the fructifying and destructive powers of the rivers since the Vedic period. It was first scientifically described in 1831 and is protected by law in Iran, India and Sri Lanka. Since 1982, it has been listed as Vulnerable on the IUCN Red List. Outside protected areas, it is threatened by conversion of natural habitats, becomes entangled in fishing nets, and is killed in human–wildlife conflict situations and in traffic accidents. Taxonomy and evolution Crocodilus palustris was the scientific name proposed by René Lesson in 1831, who described the type specimen from the Gangetic plains. In subsequent years, several naturalists and curators of natural history museums described zoological specimens and proposed different names, including: C. bombifrons by John Edward Gray in 1844 for a specimen sent by the Museum of the Royal Asiatic Society of Bengal to the British Museum of Natural History. C. trigonops, also by Gray in 1844, for a young mugger specimen from India. Evolution Phylogenetic analysis of 23 crocodilian species indicated that the genus Crocodylus most likely originated in Australasia about . The freshwater crocodile (C. johnstoni) is thought to have been the first species that genetically diverged from the common ancestor of the genus about . The sister group comprising the saltwater crocodile (C. porosus), Siamese crocodile (C. siamensis) and mugger crocodile diverged about . The mugger crocodile diverged from this group about . A paleogenomics analysis indicated that Crocodylus likely originated in Africa and radiated towards Southeast Asia and the Americas, diverging from its closest recent relative, the extinct Voay of Madagascar, around , near the Oligocene/Miocene boundary. Within Crocodylus, the mugger crocodile's closest living relatives are the Siamese crocodile and the saltwater crocodile. Fossil crocodile specimens excavated in the Sivalik Hills closely resemble the mugger crocodile in the shortness of the premaxillae and in the form of the nasal openings. In Andhra Pradesh’s Prakasam district, a long fossilized skull of a mugger crocodile was found in a volcanic ash bed that probably dates to the late Pleistocene. Crocodylus palaeindicus from late Pliocene sediments in the Sivalik Hills is thought to be an ancestor of the mugger crocodile. Fossil remains of C. palaeindicus were also excavated in the vicinity of Bagan in central Myanmar. The cladogram below is from a tip-dating study, in which morphological, molecular (DNA sequencing) and stratigraphic (fossil age) data were used simultaneously to establish the inter-relationships within Crocodylidae. This cladogram was revised in a paleogenomics study. 
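The divergence order described above can also be summarized in Newick notation, the plain-text tree format used by most phylogenetics software. The snippet below is a hedged sketch (taxon labels and the helper function are illustrative; branch lengths and dates are omitted because none are given here):

```python
# The freshwater crocodile branches first; the mugger is sister to the
# saltwater + Siamese crocodile pair, as described in the text above.
newick = "(C_johnstoni,(C_palustris,(C_porosus,C_siamensis)));"

def leaf_names(tree: str) -> list[str]:
    """Extract taxon names from a Newick string without branch lengths."""
    tokens = tree.replace("(", ",").replace(")", ",").split(",")
    return [t for t in tokens if t.strip(";")]

print(leaf_names(newick))
# ['C_johnstoni', 'C_palustris', 'C_porosus', 'C_siamensis']
```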
Characteristics Mugger crocodile hatchlings are pale olive with black spots. Adults are dark olive to grey or brown. The head is rough without any ridges and has large scutes around the neck, which is well separated from the back. Scutes usually form four, rarely six, longitudinal series and 16 or 17 transverse series. The limbs have keeled scales with serrated fringes on the outer edges, and the outer toes are extensively webbed. The snout is slightly longer than broad, with 19 upper teeth on each side. The symphysis of the lower jaw extends to the level of the fourth or fifth tooth. The premaxillary suture on the palate is nearly straight or curved forwards, and the nasal bones separate the premaxilla above. The mugger crocodile is considered a medium-sized crocodilian, but has the broadest snout among living crocodiles. It has a powerful tail and webbed feet. Its visual, hearing and smelling senses are acute. Adult female muggers are on average; males usually measure , but rarely reach a length of . The two largest known muggers measured and were killed in Sri Lanka. One individual weighing had a bite force of . Large males may reach a weight of . The largest zoological specimen in the British Museum of Natural History measures . One male mugger caught in Pakistan of about weighed . Distribution and habitat The mugger crocodile occurs in southern Iran, Pakistan, Nepal, India and Sri Lanka up to an elevation of . It inhabits freshwater lakes, rivers and marshes, and prefers slow-moving, shallow water bodies. It also thrives in artificial reservoirs and irrigation canals. In Iran, the mugger occurs along rivers in Sistan and Baluchestan Province along the Iran–Pakistan border. A population of around 200 mugger crocodiles lives on the Iranian Makran coast near Chabahar. Due to human activity and a long drought in the late 1990s and early 2000s, it had been pushed to the brink of extinction. Following several tropical cyclones in 2007 and 2010, much of the mugger crocodile's habitat was restored as formerly dry lakes and hamuns were flooded again. In Pakistan, a small population lives in 21 ponds around the Dasht River; in the winter of 2007–08, 99 individuals were counted. By 2017, the population had declined to 25 individuals. In Sindh Province, small mugger populations occur in the wetlands of Deh Akro-2 and Nara Desert Wildlife Sanctuary, near Chotiari Dam, in the Nara Canal and around Haleji Lake. In Nepal's Terai, it occurs in the wetlands of Shuklaphanta and Bardia National Parks, Ghodaghodi Tal, Chitwan National Park and Koshi Tappu Wildlife Reserve. In India, it occurs in: Rajasthan, along the Chambal, Ken and Son Rivers, and in Ranthambore National Park; Gujarat, along the Vishwamitri River and in several reservoirs and lakes in Kutch; Madhya Pradesh's National Chambal Sanctuary; Uttarakhand's Rajaji National Park, Corbett Tiger Reserve and Lansdowne Forest Division; Uttar Pradesh's Katarniaghat and Kishanpur Wildlife Sanctuaries; Odisha's Simlipal National Park and along the Mahanadi and Sabari Rivers (in 2019, 82 individuals were recorded in the river systems of Simlipal National Park); Telangana's Manjira Wildlife Sanctuary; Maharashtra's Kadavi and Warna Rivers, and the Savitri River in Raigad District; Goa's Salaulim Reservoir, Zuari River and small lakes; Karnataka, along the Kaveri and Kabini Rivers, in Ranganthittu Bird Sanctuary, Nagarhole National Park and Tungabhadra Reservoir; Kerala's Parambikulam Reservoir and Neyyar Wildlife Sanctuary; and Tamil Nadu's Amaravathi Reservoir and the Moyar and Kaveri Rivers. 
In Sri Lanka, it occurs in Wilpattu, Yala and Bundala National Parks. Between 1991 and 1996, it was recorded in another 102 localities. In Bangladesh, it was historically present in the northern parts of the Sundarbans, where four to five captive individuals survived in an artificial pond by the 1980s. It is possibly locally extinct in the country. In Bhutan, it became extinct in the late 1960s, but a few captive-bred individuals were released in the Manas River in the late 1990s. It is considered locally extinct in Myanmar. Behaviour and ecology The mugger crocodile is a powerful swimmer that uses its tail and hind feet to move forward, change direction and submerge. It belly-walks, with its belly touching the ground, at the bottom of waterbodies and on land. During the hot dry season, it walks over land at night to find suitable wetlands and spends most of the day submerged in water. During the cold season, it basks on riverbanks; individuals are tolerant of one another during this period. Territorial behaviour increases during the mating season. Like all crocodilians, the mugger crocodile is a thermoconformer and has an optimal body temperature of and risks dying of freezing or hyperthermia when exposed to temperatures below or above , respectively. It digs burrows to retreat from extreme temperatures and other harsh climatic conditions. Burrows are between deep, with entrances above the water level and a chamber at the end that is big enough to allow the mugger to turn around. Temperatures inside remain constant at , depending on region. Hunting and diet The mugger crocodile preys on fish, snakes, turtles, birds and mammals, including monkeys, squirrels, rodents, otters and dogs. It also scavenges on dead animals. During dry seasons, muggers walk many kilometres over land in search of water and prey. Hatchlings feed mainly on insects such as beetles, but also on crabs and shrimp, taking vertebrates later on. When the opportunity arises, it seizes potential prey approaching the waterside and drags it into the water. Adult muggers have been observed feeding on a flapshell turtle and a tortoise. Subadult and adult muggers favour fish, but also prey on small to medium-sized ungulates up to the size of a chital (Axis axis). At the Chambal River, muggers have attacked water buffaloes, cattle and goats. In Bardia National Park, a mugger was observed caching a chital kill beneath the roots of a tree and returning to its basking site; a part of the deer was still wedged among the roots the next day. In the same national park, a mugger caught a brown fish owl (Ketupa zeylonensis); several instances of water bird feathers in mugger dung have been reported. Muggers have also been observed preying on and feeding on pythons. In Yala National Park, a mugger killed a large Indian pangolin (Manis crassicaudata) and devoured it in pieces over several hours. Tool use Mugger crocodiles have been documented using lures to hunt birds, making them among the first reptiles recorded to use tools. By balancing sticks and branches on their heads, they lure birds that are looking for nesting material. This strategy is particularly effective during the nesting season. Reproduction Female muggers attain sexual maturity at a body length of around at the age of about 6.5 years, and males at around body length. The reproductive cycle starts at the onset of the cold season, in November at the earliest, with courtship and mating. Between February and June, females dig deep holes for nesting between away from the waterside. 
They lay up to two clutches of 8–46 eggs each. Eggs weigh on average. Laying of one clutch usually takes less than half an hour. Thereafter, females scrape sand over the nest to close it. Males have been observed to assist females in digging and protecting nest sites. The hatching season is two months later, between April and June in south India, and between August and September in Sri Lanka. Females then excavate the young, pick them up in their snouts and take them to the water. Both females and males protect the young for up to one year. Healthy hatchlings develop at a temperature range of . The sex ratio of hatched eggs depends on incubation temperature and exposure of nests to sunshine. Only females develop at constant temperatures of , and only males at . The percentage of females in a clutch decreases at constant temperatures between , and that of males between . Temperature in natural nests is not constant but varies between night and day. Predominantly females hatch in early natural nests when the initial temperature inside the nest ranges between . The percentage of male hatchlings increases in late nests located in sunny sites. Hatchlings are long and weigh on average when one month old. They grow about per month and reach a body length of when two years old. Sympatric predators The distribution of the mugger crocodile overlaps with that of the saltwater crocodile in a few coastal areas, but it barely enters brackish water and prefers shallow waterways. It is an apex predator in freshwater ecosystems. It is sympatric with the gharial (Gavialis gangeticus) in the Rapti and Narayani Rivers, in the eastern Mahanadi, and in tributaries of the Ganges and Yamuna Rivers. The Bengal tiger (Panthera tigris tigris) occasionally drives mugger crocodiles off prey and rarely preys on adult mugger crocodiles in Ranthambore National Park. The Asiatic lion (Panthera leo leo) sometimes preys on crocodiles on the banks of the Kamleshwar Dam in Gir National Park during dry, hot months. Threats The mugger crocodile is threatened by habitat destruction caused by the conversion of natural habitats to agricultural and industrial use. As humans encroach into its habitat, incidents of conflict increase. Muggers become entangled in fishing equipment and drown, and are killed in areas where fishermen perceive them as competition. Major wetlands in Pakistan were drained in the 1990s by dams and channels built to funnel natural streams and agricultural runoff into rivers. In Gujarat, two muggers were found killed: one in 2015 with the tail cut off and internal organs missing, the other in 2017, also with the tail cut off. The missing body parts indicate that the crocodiles were sacrificed in superstitious practices or used as aphrodisiacs. Between 2005 and 2018, 38 mugger crocodiles were victims of traffic accidents on roads and railway tracks in Gujarat; 29 were found dead, four died during treatment, and five were returned to the wild after medical care. In 2017, a dead mugger was found on a railway track in Rajasthan. Conservation The mugger crocodile is listed in CITES Appendix I; hence, international commercial trade is prohibited. It has been listed as Vulnerable on the IUCN Red List since 1982. By 2013, fewer than 8,700 mature individuals were estimated to live in the wild, with no population unit comprising more than 1,000 individuals. 
In India, it has been protected since 1972 under Schedule I of the Wildlife Protection Act, 1972, which prohibits catching, killing and transporting a crocodile without a permit; offenders face imprisonment and a fine. In Sri Lanka, it was listed in Schedule IV of the Fauna & Flora Protection Ordinance in 1946, which allowed for shooting one crocodile with a permit. Today, it is strictly protected, but law enforcement in Sri Lanka is lacking. In Iran, the mugger crocodile is listed as endangered and has been legally protected since 2013; capturing and killing a crocodile is punished with a fine of 100 million Iranian rials. Large muggers occasionally take livestock, which leads to conflict with local people living close to mugger habitat. In Maharashtra, local people are compensated for the loss of close relatives and livestock. Local people in Baluchestan respect the mugger crocodile as a water-dwelling creature and do not harm it. If an individual kills livestock, the owner is compensated for the loss. The mugger crocodile is translocated in severe conflict cases. A total of 1,193 captive-bred muggers were released to restock populations in 28 protected areas in India between 1978 and 1992. Production of new offspring was halted by the Indian Government in 1994. In culture The Sanskrit word मकर 'makara' refers to the crocodile and a mythical crocodile-like animal. The Hindi word for crocodile is मगर 'magar'. In English, both names 'mugger' and 'magar' were used around the turn of the 20th century. The names 'marsh crocodile' and 'broad-snouted crocodile' have been used since the late 1930s. The crocodile is acknowledged as the prototype of the makara and symbolises both the fructifying and the destructive powers of the rivers. It is the animal vehicle of the Vedic deity Varuna and of several nature spirits called yakshas. In Hindu mythology, it represents virility as the vehicle of Ganga and as an emblem of Kamadeva. A stone carving of a mugger crocodile was part of a beam of a gateway to the Bharhut Stupa, built around 100 BC. The traditional biography of the Indian saint Adi Shankara includes an incident in which he was grabbed by a crocodile in the Kaladi river; it released him only after his mother reluctantly let him choose the ascetic path of a Sannyasa. The Muslim saint Pīr Mango is said to have taken care of crocodiles and created a stream that trickles out of a rock near Karachi in the 13th century. This place was later walled around, and about 40 mugger crocodiles were kept in the reservoir, called Magar Talao, in the 1870s; they were fed by both Hindu and Muslim pilgrims. Mugger crocodiles have also been kept in tanks near Hindu temples built in the vicinity of rivers; these crocodiles are considered sacred. In the early 20th century, young married women fed the crocodiles in Khan Jahan Ali's Tank in Jessore in the hope of being blessed with children. The Vasava, Gamit and Chodhri tribes in Gujarat worship the crocodile god Mogra Dev, asking for children, good crops and milk yield from their cows. They carve wooden statues symbolising Mogra Dev and mount them on poles. Their offerings during the installation ceremony include rice, milk, wine, the heart and liver of a chicken, and a mixture of vermillion, oil and coconut fibres. Fatal attacks by mugger crocodiles on humans have been documented in Gujarat and Maharashtra, but the victims, who died of drowning, were rarely consumed. A fable from the Jataka tales of Buddhist traditions features a clever monkey outwitting a crocodile. 
Three folktales feature crocodiles and jackals. A mugger crocodile is one of the characters in The Undertakers, a chapter of The Second Jungle Book. The children’s book Adventures of a Nepali Frog features the character Mugger, the crocodile who lives by the Rapti River in Chitwan National Park.
Biology and health sciences
Crocodilia
Animals
1070221
https://en.wikipedia.org/wiki/Human%20eye
Human eye
The human eye is a sensory organ in the visual system that reacts to visible light, allowing eyesight. Other functions include maintaining the circadian rhythm and keeping balance. The eye can be considered a living optical device. It is approximately spherical in shape, with its outer layers, such as the outermost, white part of the eye (the sclera) and one of its inner layers (the pigmented choroid), keeping the eye essentially light-tight except on the eye's optic axis. In order along the optic axis, the optical components consist of a first lens (the cornea—the clear part of the eye) that accounts for most of the optical power of the eye and accomplishes most of the focusing of light from the outside world; then an aperture (the pupil) in a diaphragm (the iris—the coloured part of the eye) that controls the amount of light entering the interior of the eye; then another lens (the crystalline lens) that accomplishes the remaining focusing of light into images; and finally a light-sensitive part of the eye (the retina), where the images fall and are processed. The retina makes a connection to the brain via the optic nerve. The remaining components of the eye keep it in its required shape, nourish and maintain it, and protect it. Three types of cells in the retina convert light energy into electrical energy used by the nervous system: rods respond to low-intensity light and contribute to perception of low-resolution, black-and-white images; cones respond to high-intensity light and contribute to perception of high-resolution, coloured images; and the recently discovered photosensitive ganglion cells respond to a full range of light intensities and contribute to adjusting the amount of light reaching the retina, to regulating and suppressing the hormone melatonin, and to entraining the circadian rhythm. Structure Humans have two eyes, situated on the left and the right of the face. The eyes sit in bony cavities called the orbits, in the skull. There are six extraocular muscles that control eye movements. The front visible part of the eye is made up of the whitish sclera, a coloured iris, and the pupil. A thin layer called the conjunctiva sits on top of this. The front part is also called the anterior segment of the eye. The eye is not shaped like a perfect sphere; rather, it is a fused two-piece unit, composed of an anterior (front) segment and a posterior (back) segment. The anterior segment is made up of the cornea, iris and lens. The cornea is transparent and more curved, and is linked to the larger posterior segment, composed of the vitreous, retina, choroid and the outer white shell called the sclera. The cornea is typically about in diameter and 0.5 mm (500 μm) in thickness near its centre. The posterior chamber constitutes the remaining five-sixths; its diameter is typically about . An area termed the limbus connects the cornea and sclera. The iris is the pigmented circular structure concentrically surrounding the centre of the eye, the pupil, which appears to be black. The size of the pupil, which controls the amount of light entering the eye, is adjusted by the iris' dilator and sphincter muscles. Light energy enters the eye through the cornea, through the pupil and then through the lens. The lens shape is changed for near focus (accommodation) and is controlled by the ciliary muscle. Between the two lenses (the cornea and the crystalline lens) there are four optical surfaces, each of which refracts light as it travels along the optical path. 
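As a rough quantitative illustration of the optical layout just described (using typical textbook values, not figures from this article), the powers of the cornea and the relaxed crystalline lens can be added in a thin-lens approximation:

```python
# Hedged sketch: typical textbook values, treated as thin lenses in contact.
# Real model eyes (such as the Arizona Eye Model mentioned below) account for
# surface separations and the refractive indices of the ocular media.
cornea_power_D = 43.0        # dioptres; roughly two thirds of the total power
relaxed_lens_power_D = 19.0  # dioptres; increases during accommodation
total_power_D = cornea_power_D + relaxed_lens_power_D
focal_length_mm = 1000.0 / total_power_D  # f = 1/P, for a lens in air
print(f"total power = {total_power_D:.0f} D, focal length = {focal_length_mm:.1f} mm")
# -> total power = 62 D, focal length = 16.1 mm, comparable to the eye's depth
```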
One basic model describing the geometry of the optical system is the Arizona Eye Model, which describes the accommodation of the eye geometrically. Photons of light falling on the light-sensitive cells of the retina (photoreceptor cones and rods) are converted into electrical signals that are transmitted to the brain by the optic nerve and interpreted as sight and vision. Size The size of the eye differs among adults by only one or two millimetres. The eyeball is generally less tall than it is wide. The sagittal vertical (height) of a human adult eye is approximately , the transverse horizontal diameter (width) is and the axial anteroposterior size (depth) averages , with no significant difference between sexes and age groups. A strong correlation has been found between the transverse diameter and the width of the orbit (r = 0.88). The typical adult eye has an anterior to posterior diameter of and a volume of . The eyeball grows rapidly, increasing from about in diameter at birth to by three years of age. By age 12, the eye attains its full size. Components The eye is made up of three coats, or layers, enclosing various anatomical structures. The outermost layer, known as the fibrous tunic, is composed of the cornea and sclera, which provide shape to the eye and support the deeper structures. The middle layer, known as the vascular tunic or uvea, consists of the choroid, ciliary body, pigmented epithelium and iris. The innermost layer is the retina, which gets its oxygenation from the blood vessels of the choroid (posteriorly) as well as the retinal vessels (anteriorly). The spaces of the eye are filled with the aqueous humour anteriorly, between the cornea and lens, and the vitreous body, a jelly-like substance, behind the lens, filling the entire posterior cavity. The aqueous humour is a clear watery fluid that is contained in two areas: the anterior chamber, between the cornea and the iris, and the posterior chamber, between the iris and the lens. The lens is suspended from the ciliary body by the suspensory ligament (zonule of Zinn), made up of hundreds of fine transparent fibres which transmit muscular forces to change the shape of the lens for accommodation (focusing). The vitreous body is a clear substance composed of water and proteins, which give it a jelly-like and sticky composition. Extraocular muscles Each eye has seven extraocular muscles located in its orbit. Six of these muscles control the eye movements; the seventh controls the movement of the upper eyelid. The six are the four recti muscles (the lateral rectus, the medial rectus, the inferior rectus and the superior rectus) and the two oblique muscles (the inferior oblique and the superior oblique). The seventh muscle is the levator palpebrae superioris. When the muscles exert different tensions, a torque is exerted on the globe that causes it to turn, in almost pure rotation, with only about one millimetre of translation. Thus, the eye can be considered as undergoing rotations about a single point in the centre of the eye. Vision Field of view The approximate field of view of an individual human eye (measured from the fixation point, i.e., the point at which one's gaze is directed) varies by facial anatomy, but is typically 30° superior (up, limited by the brow), 45° nasal (limited by the nose), 70° inferior (down), and 100° temporal (towards the temple). 
For both eyes combined (binocular vision), the visual field is approximately 100° vertical and a maximum of 190° horizontal, approximately 120° of which makes up the binocular field of view (seen by both eyes), flanked by two uniocular fields (seen by only one eye) of approximately 40 degrees. The binocular field covers an area of 4.17 steradians, or 13,700 square degrees. When viewed at large angles from the side, the iris and pupil may still be visible to the viewer, indicating that peripheral vision is possible at that angle. About 15° temporal and 1.5° below the horizontal is the blind spot created by the optic nerve nasally, which is roughly 7.5° high and 5.5° wide. Dynamic range The retina has a static contrast ratio of around 100:1 (about 6.5 f-stops). As soon as the eye moves rapidly to acquire a target (saccades), it re-adjusts its exposure by adjusting the iris, which alters the size of the pupil. Initial dark adaptation takes place in approximately four seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal rod photoreceptors is 80% complete in thirty minutes. The process is nonlinear and multifaceted, so an interruption by light exposure requires restarting the dark adaptation process over again. The human eye can detect luminances from 10^−6 cd/m² (one millionth of a candela per square metre) to 10^8 cd/m² (one hundred million candelas per square metre); that is, it has a range of 10^14 (one hundred trillion) to one, or about 46.5 f-stops. This range does not include looking at the midday sun (10^9 cd/m²) or a lightning discharge. At the low end of the range is the absolute threshold of vision for a steady light across a wide field of view, about 10^−6 cd/m². The upper end of the range is given in terms of normal visual performance as 10^8 cd/m². The eye includes a lens similar to the lenses found in optical instruments such as cameras, and the same physical principles can be applied. The pupil of the human eye is its aperture; the iris is the diaphragm that serves as the aperture stop. Refraction in the cornea causes the effective aperture (the entrance pupil) to differ slightly from the physical pupil diameter. The entrance pupil is typically about 4 mm in diameter, although it can range from 2 mm in a brightly lit place to 8 mm in the dark. The latter value decreases slowly with age; older people's eyes sometimes dilate to no more than 5–6 mm in the dark, and may be as small as 1 mm in the light. 
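The f-stop figures quoted above follow directly from the luminance range, since one f-stop corresponds to a factor of two in light. A minimal sketch of the arithmetic (values as quoted in the text):

```python
import math

l_min = 1e-6  # cd/m^2: absolute threshold for a steady, wide-field light
l_max = 1e8   # cd/m^2: upper end of normal visual performance
stops = math.log2(l_max / l_min)   # each f-stop doubles the light
static = math.log2(100)            # the retina's 100:1 static contrast ratio
print(f"overall range: 10^14 : 1 = {stops:.1f} f-stops")   # -> 46.5
print(f"static contrast: 100 : 1 = {static:.1f} f-stops")  # -> 6.6, ~6.5
```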
Movement The visual system in the human brain is too slow to process information if images are slipping across the retina at more than a few degrees per second. Thus, to be able to see while moving, the brain must compensate for the motion of the head by turning the eyes. Frontal-eyed animals have a small area of the retina with very high visual acuity, the fovea centralis. It covers about 2 degrees of visual angle in people. To get a clear view of the world, the brain must turn the eyes so that the image of the object of regard falls on the fovea. Any failure to make eye movements correctly can lead to serious visual degradation. Having two eyes allows the brain to determine the depth and distance of an object, called stereovision, and gives the sense of three-dimensionality to the vision. Both eyes must point accurately enough that the object of regard falls on corresponding points of the two retinas to stimulate stereovision; otherwise, double vision might occur. Some persons with congenitally crossed eyes tend to ignore one eye's vision, and thus do not suffer double vision, but they also do not have stereovision. The movements of the eye are controlled by six muscles attached to each eye, which allow the eye to elevate, depress, converge, diverge and roll. These muscles are controlled both voluntarily and involuntarily to track objects and correct for simultaneous head movements. Rapid Rapid eye movement (REM) typically refers to the sleep stage during which the most vivid dreams occur. During this stage, the eyes move rapidly. Saccades Saccades are quick, simultaneous movements of both eyes in the same direction, controlled by the frontal lobe of the brain. Fixational Even when looking intently at a single spot, the eyes drift around. This ensures that individual photosensitive cells are continually stimulated to different degrees; without changing input, these cells would otherwise stop generating output. Eye movements include drift, ocular tremor and microsaccades. Some irregular drifts, movements smaller than a saccade and larger than a microsaccade, subtend up to one tenth of a degree. Researchers vary in their definition of microsaccades by amplitude. Martin Rolfs states that 'the majority of microsaccades observed in a variety of tasks have amplitudes smaller than 30 min-arc'. However, others state that the "current consensus has largely consolidated around a definition of microsaccades that includes magnitudes up to 1°." Vestibulo-ocular The vestibulo-ocular reflex is a reflex eye movement that stabilizes images on the retina during head movement by producing an eye movement in the direction opposite to the head movement in response to neural input from the vestibular system of the inner ear, thus maintaining the image in the centre of the visual field. For example, when the head moves to the right, the eyes move to the left. This applies to head movements up and down, left and right, and tilts to the right and left, all of which give input to the ocular muscles to maintain visual stability. Smooth pursuit The eyes can also follow a moving object around. This tracking is less accurate than the vestibulo-ocular reflex, as it requires the brain to process incoming visual information and supply feedback. Following an object moving at constant speed is relatively easy, though the eyes will often make saccades to keep up. The smooth pursuit movement can move the eye at up to 100°/s in adult humans. It is more difficult to visually estimate speed in low-light conditions or while moving, unless there is another point of reference for determining speed. Optokinetic The optokinetic reflex (or optokinetic nystagmus) stabilizes the image on the retina through visual feedback. It is induced when the entire visual scene drifts across the retina, eliciting eye rotation in the same direction and at a velocity that minimizes the motion of the image on the retina. When the gaze direction deviates too far from the forward heading, a compensatory saccade is induced to reset the gaze to the centre of the visual field. For example, when looking out of the window at a moving train, the eyes can focus on the moving train for a short moment (by stabilizing it on the retina), until the train moves out of the field of vision. 
At this point, the eye is moved back to the point where it first saw the train (through a saccade). Near response The adjustment to close-range vision involves three processes to focus an image on the retina. Vergence movement When a creature with binocular vision looks at an object, the eyes must rotate around a vertical axis so that the projection of the image is in the centre of the retina in both eyes. To look at a nearby object, the eyes rotate 'towards each other' (convergence), while for an object farther away they rotate 'away from each other' (divergence). Pupil constriction Lenses cannot refract light rays at their edges as well as closer to the centre. The image produced by any lens is therefore somewhat blurry around the edges (spherical aberration). This can be minimized by screening out peripheral light rays and looking only at the better-focused centre. In the eye, the pupil serves this purpose by constricting while the eye is focused on nearby objects. Small apertures also give an increase in depth of field, allowing a broader range of "in focus" vision. In this way the pupil has a dual purpose for near vision: to reduce spherical aberration and to increase depth of field. Lens accommodation Changing the curvature of the lens is carried out by the ciliary muscles surrounding the lens; this process is known as "accommodation". Accommodation narrows the inner diameter of the ciliary body, which relaxes the fibers of the suspensory ligament attached to the periphery of the lens and allows the lens to relax into a more convex, or globular, shape. A more convex lens refracts light more strongly and focuses divergent light rays from near objects onto the retina, allowing closer objects to be brought into better focus. Medicine The human eye contains enough complexity to warrant specialized attention and care beyond the duties of a general practitioner. These specialists, or eye care professionals, serve different functions in different countries. Eye care professionals can have overlap in their patient care privileges. For example, both an ophthalmologist (M.D.) and an optometrist (O.D.) are professionals who diagnose eye disease and can prescribe lenses to correct vision. Typically, only ophthalmologists are licensed to perform surgical procedures. Ophthalmologists may also specialize within a surgical area, such as cornea, cataracts, laser, retina or oculoplastics. Eye care professionals include ocularists, ophthalmologists, optometrists, opticians, and orthoptists and vision therapists. Pigmentation Brown Almost all mammals have brown or darkly pigmented irises. In humans, brown is by far the most common eye color, with approximately 79% of people in the world having it. Brown eyes result from a relatively high concentration of melanin in the stroma of the iris, which causes light of both shorter and longer wavelengths to be absorbed. In many parts of the world, it is nearly the only iris color present. Brown eyes are common in Europe, East Asia, Southeast Asia, Central Asia, South Asia, West Asia, Oceania, Africa and the Americas. Light or medium-pigmented brown eyes can also commonly be found in Europe, the Americas, and parts of Central Asia, West Asia and South Asia. Light brown eyes bordering on amber and hazel coloration are common in Europe, and can also be observed in East Asia and Southeast Asia, though they are uncommon in those regions. 
Amber Amber eyes are a solid color with a strong yellowish/golden and russet/coppery tint, which may be due to the yellow pigment called lipochrome (also found in green eyes). Amber eyes should not be confused with hazel eyes. Although hazel eyes may contain specks of amber or gold, they usually tend to have many other colors, including green, brown and orange. Also, hazel eyes may appear to shift in color and consist of flecks and ripples, while amber eyes are of a solid gold hue. Even though amber is similar to gold, some people have russet or copper colored amber eyes that are mistaken for hazel, though hazel tends to be duller and contains green with red/gold flecks, as mentioned above. Amber eyes may also contain amounts of very light gold-ish gray. People with this eye color are common in northern Europe, and in fewer numbers in southern Europe, the Middle East, North Africa and South America. Hazel Hazel eyes are due to a combination of Rayleigh scattering and a moderate amount of melanin in the iris' anterior border layer. Hazel eyes often appear to shift in color from a brown to a green. Although hazel mostly consists of brown and green, the dominant color in the eye can be either brown/gold or green. This is why hazel eyes can be mistaken for amber and vice versa. The combination can sometimes produce a multicolored iris, i.e., an eye that is light brown/amber near the pupil and charcoal or dark green on the outer part of the iris (or vice versa) when observed in sunlight. Definitions of the eye color hazel vary: it is sometimes considered to be synonymous with light brown or gold, as in the color of a hazelnut shell. Around 18% of the US population and 5% of the world population have hazel eyes. Hazel eyes are found in Europe, most commonly in the Netherlands and the United Kingdom, and have also been observed to be very common among the Low Saxon-speaking populations of northern Germany. Green Green eyes are most common in Northern, Western and Central Europe. Around 8–10% of men and 18–21% of women in Iceland, and 6% of men and 17% of women in the Netherlands, have green eyes. Among European Americans, green eyes are most common among those of recent Celtic and Germanic ancestry, at about 16%. The green color is caused by the combination of (1) an amber or light brown pigmentation in the stroma of the iris (which has a low or moderate concentration of melanin) and (2) a blue shade created by the Rayleigh scattering of reflected light. Green eyes contain the yellowish pigment lipochrome. Blue The inheritance pattern followed by blue eyes was previously assumed to be a Mendelian recessive trait; however, eye color inheritance is now recognized as a polygenic trait, meaning that it is controlled by the interactions of several genes. Blue eyes are predominant in northern and eastern Europe, particularly around the Baltic Sea. Blue eyes are also found in southern Europe, Central Asia, South Asia, North Africa and West Asia. Approximately 8% to 10% of the global population have blue eyes. A 2002 study found the prevalence of blue eye color among the white population in the United States to be 33.8% for those born from 1936 through 1951. Gray Like blue eyes, gray eyes have a dark epithelium at the back of the iris and a relatively clear stroma at the front. One possible explanation for the difference in the appearance of gray and blue eyes is that gray eyes have larger deposits of collagen in the stroma, so that the light reflected from the epithelium undergoes Mie scattering (which is not strongly frequency-dependent) rather than Rayleigh scattering (in which shorter wavelengths of light are scattered more). This would be analogous to the change in the color of the sky, from the blue given by the Rayleigh scattering of sunlight by small gas molecules when the sky is clear, to the gray caused by the Mie scattering of large water droplets when the sky is cloudy. Alternatively, it has been suggested that gray and blue eyes might differ in the concentration of melanin at the front of the stroma. 
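The wavelength dependence behind this explanation can be made concrete: Rayleigh-scattered intensity scales roughly as 1/wavelength^4, while Mie scattering is only weakly wavelength-dependent. A hedged sketch, using representative visible wavelengths rather than measurements from the eye:

```python
# Rayleigh-scattered intensity relative to blue light (~ 1 / wavelength^4);
# Mie scattering is nearly flat across the visible spectrum.
for name, nm in [("blue", 450), ("green", 550), ("red", 650)]:
    rayleigh_rel = (450 / nm) ** 4
    print(f"{name:5s} ({nm} nm): Rayleigh ~ {rayleigh_rel:.2f} x blue, Mie ~ 1.0 x blue")
# Blue is scattered about four times more strongly than red under Rayleigh
# scattering (blue irises, blue skies); Mie scattering returns all wavelengths
# about equally, which reads as gray.
```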
Gray eyes can also be found among the Algerian Shawia people of the Aurès Mountains in Northwest Africa, in the Middle East/West Asia, Central Asia and South Asia. Under magnification, gray eyes exhibit small amounts of yellow and brown color in the iris. Irritation Eye irritation has been defined as "the magnitude of any stinging, scratching, burning, or other irritating sensation from the eye". It is a common problem experienced by people of all ages. Related eye symptoms and signs of irritation are discomfort, dryness, excess tearing, itchiness, grating, foreign body sensation, ocular fatigue, pain, soreness, redness, swollen eyelids and tiredness. These eye symptoms are reported with intensities from mild to severe. It has been suggested that these eye symptoms are related to different causal mechanisms, and that symptoms are related to the particular ocular anatomy involved. Several suspected causal factors in the environment have been studied so far. One hypothesis is that indoor air pollution may cause eye and airway irritation. Eye irritation depends somewhat on destabilization of the outer-eye tear film, i.e. the formation of dry spots on the cornea, resulting in ocular discomfort. Occupational factors are also likely to influence the perception of eye irritation. Some of these are lighting (glare and poor contrast), gaze position, reduced blink rate, a limited number of breaks from visual tasking, and a constant combination of accommodation, musculoskeletal burden and impairment of the visual nervous system. Another factor that may be related is work stress. In addition, psychological factors have been found in multivariate analyses to be associated with an increase in eye irritation among VDU (visual display unit) users. Other risk factors, such as chemical toxins/irritants (e.g. amines, formaldehyde, acetaldehyde, acrolein, N-decane, VOCs, ozone, pesticides and preservatives, and allergens) might cause eye irritation as well. Certain volatile organic compounds that are both chemically reactive and airway irritants may cause eye irritation. Personal factors (e.g. use of contact lenses, eye make-up and certain medications) may also affect destabilization of the tear film and possibly result in more eye symptoms. Nevertheless, if airborne particles alone are to destabilize the tear film and cause eye irritation, their content of surface-active compounds must be high. An integrated physiological risk model with blink frequency, destabilization and break-up of the eye tear film as inseparable phenomena may explain eye irritation among office workers in terms of occupational, climatic and eye-related physiological risk factors. There are two major measures of eye irritation. One is blink frequency, which can be observed in human behavior. 
The other is a set of physiological measures, such as break-up time, tear flow, hyperemia (redness, swelling), tear fluid cytology and epithelial damage (vital stains). Blink frequency is defined as the number of blinks per minute and is associated with eye irritation. Blink frequencies are individual, with mean frequencies ranging from less than 2–3 up to 20–30 blinks per minute, and they depend on environmental factors including the use of contact lenses. Dehydration, mental activities, work conditions, room temperature, relative humidity and illumination all influence blink frequency. Break-up time (BUT) is another major measure of eye irritation and tear film stability. It is defined as the time interval (in seconds) between a blink and the rupture of the tear film, and is considered to reflect the stability of the tear film. In normal persons, the break-up time exceeds the interval between blinks, and, therefore, the tear film is maintained. Studies have shown that blink frequency is correlated negatively with break-up time. This phenomenon indicates that perceived eye irritation is associated with an increase in blink frequency, since the cornea and conjunctiva both have sensitive nerve endings that belong to the first trigeminal branch. Other evaluating methods, such as hyperemia and cytology, have increasingly been used to assess eye irritation. There are other factors related to eye irritation as well; the three most influential are indoor air pollution, contact lenses and gender differences. Field studies have found that the prevalence of objective eye signs is often significantly altered among office workers in comparison with random samples of the general population. These research results might indicate that indoor air pollution plays an important role in causing eye irritation. More and more people now wear contact lenses, and dry eyes appear to be the most common complaint among contact lens wearers. Although both contact lens wearers and spectacle wearers experience similar eye irritation symptoms, dryness, redness and grittiness have been reported far more frequently, and with greater severity, among contact lens wearers than among spectacle wearers. Studies have shown that the incidence of dry eyes increases with age, especially among women. Tear film stability (e.g. tear break-up time) is significantly lower among women than among men. In addition, women have a higher blink frequency while reading. Several factors may contribute to these gender differences. One is the use of eye make-up. Another reason could be that the women in the reported studies have done more VDU work than the men, including lower-grade work. A third often-quoted explanation is related to the age-dependent decrease of tear secretion, particularly among women after 40 years of age. A study conducted by UCLA investigated the frequency of reported symptoms in industrial buildings and found that eye irritation was the most frequent symptom in industrial building spaces, at 81%. Modern office work with use of office equipment has raised concerns about possible adverse health effects. Since the 1970s, reports have linked mucosal, skin and general symptoms to work with self-copying paper. Emission of various particulate and volatile substances has been suggested as specific causes. These symptoms have been related to sick building syndrome (SBS), which involves symptoms such as irritation of the eyes, skin and upper airways, headache and fatigue. 
Many of the symptoms described in SBS and multiple chemical sensitivity (MCS) resemble the symptoms known to be elicited by airborne irritant chemicals. A repeated-measurement design was employed in a study of acute symptoms of eye and respiratory tract irritation resulting from occupational exposure to sodium borate dusts. The symptom assessment of the 79 exposed and 27 unexposed subjects comprised interviews before the shift began and then at regular hourly intervals for the next six hours of the shift, four days in a row. Exposures were monitored concurrently with a personal real-time aerosol monitor. Two different exposure profiles, a daily average and a short-term (15-minute) average, were used in the analysis. Exposure-response relations were evaluated by linking incidence rates for each symptom with categories of exposure. Acute incidence rates for nasal, eye and throat irritation, and for coughing and breathlessness, were found to be associated with increased exposure levels of both exposure indices. Steeper exposure-response slopes were seen when short-term exposure concentrations were used. Results from multivariate logistic regression analysis suggest that current smokers tended to be less sensitive to exposure to airborne sodium borate dust. Several actions can be taken to prevent eye irritation: maintaining normal blinking by avoiding room temperatures that are too high and relative humidities that are too high or too low, because these reduce blink frequency or may increase water evaporation; and maintaining an intact tear film by the following actions: blinking and short breaks may be beneficial for VDU users, and increasing both might help maintain the tear film; downward gazing is recommended to reduce the exposed ocular surface area and water evaporation; the distance between the VDU and keyboard should be kept as short as possible to minimize evaporation from the ocular surface by keeping the direction of gaze low; and blink training can be beneficial. In addition, other measures include proper lid hygiene, avoidance of eye rubbing, and proper use of personal products and medication. Eye make-up should be used with care. Disease There are many diseases, disorders and age-related changes that may affect the eyes and surrounding structures. As the eye ages, certain changes occur that can be attributed solely to the aging process. Most of these anatomic and physiologic processes follow a gradual decline. With aging, the quality of vision worsens due to reasons independent of diseases of the aging eye. While there are many changes of significance in the non-diseased eye, the most functionally important changes seem to be a reduction in pupil size and the loss of accommodation or focusing capability (presbyopia). The area of the pupil governs the amount of light that can reach the retina. The extent to which the pupil dilates decreases with age, leading to a substantial decrease in light received at the retina. In comparison to younger people, it is as though older persons are constantly wearing medium-density sunglasses. Therefore, for any detailed visually guided task on which performance varies with illumination, older persons require extra lighting. Certain ocular diseases can come from sexually transmitted infections such as herpes and genital warts. If contact between the eye and the area of infection occurs, the STI can be transmitted to the eye. With aging, a prominent white ring develops in the periphery of the cornea, called arcus senilis. 
Aging causes laxity, a downward shift of eyelid tissues and atrophy of the orbital fat. These changes contribute to the etiology of several eyelid disorders such as ectropion, entropion, dermatochalasis and ptosis. The vitreous gel undergoes liquefaction (posterior vitreous detachment or PVD), and its opacities — visible as floaters — gradually increase in number. Eye care professionals, including ophthalmologists and optometrists, are involved in the treatment and management of ocular and vision disorders. A Snellen chart is one type of eye chart used to measure visual acuity. At the conclusion of a complete eye examination, the eye doctor might provide the patient with an eyeglass prescription for corrective lenses. Some disorders of the eyes for which corrective lenses are prescribed include myopia (near-sightedness), hyperopia (far-sightedness), astigmatism and presbyopia (the loss of focusing range during aging). Macular degeneration Macular degeneration is especially prevalent in the U.S., affecting roughly 1.75 million Americans each year. Having lower levels of lutein and zeaxanthin within the macula may be associated with an increase in the risk of age-related macular degeneration. Lutein and zeaxanthin act as antioxidants that protect the retina and macula from oxidative damage from high-energy light waves. As light waves enter the eye, they excite electrons, which can harm the cells of the eye through oxidative damage that may lead to macular degeneration or cataracts. Lutein and zeaxanthin bind to the electron free radical and are reduced, rendering the electron safe. There are many ways to ensure a diet rich in lutein and zeaxanthin, the best of which is to eat dark green vegetables including kale, spinach, broccoli and turnip greens. Nutrition is an important aspect of the ability to achieve and maintain proper eye health. Lutein and zeaxanthin are two major carotenoids, found in the macula of the eye, that are being researched to identify their role in the pathogenesis of eye disorders such as age-related macular degeneration and cataracts. Sexuality Human eyes (particularly the iris and its color) and the area surrounding the eye (lids, lashes, brows) have long been a key component of physical attractiveness. Eye contact plays a significant role in human nonverbal communication. A prominent limbal ring (the dark ring around the iris of the eye) is considered attractive. Additionally, long and full eyelashes are coveted as a sign of beauty and are considered an attractive facial feature. Pupil size has also been shown to play an influential role in attraction and nonverbal communication, with dilated (larger) pupils perceived to be more attractive; dilated pupils are also a response to sexual arousal and stimuli. In the Renaissance, women used the juice of the berries of the belladonna plant in eyedrops to dilate the pupils and make the eyes appear more seductive.
Biology and health sciences
Human anatomy
null
12137890
https://en.wikipedia.org/wiki/Human%20penis
Human penis
In human anatomy, the penis (plural: penises or penes; from the Latin pēnis, initially "tail") is an external sex organ (intromittent organ) through which males urinate and ejaculate. Together with the testes and surrounding structures, the penis functions as part of the male reproductive system. The main parts of the penis are the root, the body, the epithelium of the penis (including the shaft skin), and the foreskin covering the glans. The body of the penis is made up of three columns of tissue: two corpora cavernosa on the dorsal side and the corpus spongiosum between them on the ventral side. The urethra passes through the prostate gland, where it is joined by the ejaculatory ducts, and then through the penis. The urethra runs through the corpus spongiosum and ends at the tip of the glans as an opening, the urinary meatus. An erection is the stiffening, expansion and orthogonal reorientation of the penis, which occurs during sexual arousal. Erections can occur in non-sexual situations; spontaneous non-sexual erections frequently occur during adolescence and sleep. In its flaccid state, the penis is smaller, yields to pressure, and the glans is covered by the foreskin. In its fully erect state, the shaft becomes rigid and the glans becomes engorged but not rigid. An erect penis may be straight or curved and may point at an upward angle, a downward angle, or straight ahead. , the average erect human penis is long and has a circumference of . Neither age nor the size of the flaccid penis accurately predicts erectile length. There are also several common body modifications to the penis, including circumcision and piercings. The penis is homologous to the clitoris in females. Structure The three main parts of the human penis are: Root: the attached part, consisting of the bulb in the middle and the crura, one crus on either side of the bulb. It lies within the superficial perineal pouch. Each crus is attached to the pubic arch. Shaft: the pendulous part of the penis. It has two surfaces: dorsal (posterosuperior in the erect penis) and ventral or urethral (facing downwards and backwards in the flaccid penis). The ventral surface is marked by the penile raphe. The base of the shaft is supported by the suspensory ligament, which is attached to the pubic symphysis. Epithelium: consists of the shaft skin, the foreskin (prepuce), and the preputial mucosa on the inside of it. The foreskin covers and protects the glans and shaft. The epithelium is not attached to the underlying shaft, so it is free to glide to and fro. The human penis is made up of three columns of erectile tissue: two corpora cavernosa lie next to each other (separated by a fibrous septum) on the dorsal side, and one corpus spongiosum lies between them on the ventral side. These columns are surrounded by a fibrous layer of connective tissue called the tunica albuginea. The corpora cavernosa are innervated by the lesser and greater cavernous nerves and form most of the penis, containing blood vessels that fill with blood to help make an erection. The crura are the proximal parts of the corpora cavernosa. The corpus spongiosum is an erectile tissue surrounding the urethra. The proximal part of the corpus spongiosum forms the bulb, and the distal end forms the glans penis. The enlarged and bulbous-shaped end of the corpus spongiosum forms the glans penis with two specific types of sinusoids, which supports the foreskin, a loose fold of skin that in adults can retract to expose the glans. 
The area on the underside of the glans, where the foreskin is attached, is called the frenulum. The rounded base of the glans is called the corona. The inner surface of the foreskin and the corona are rich in sebaceous glands that secrete smegma. The structure of the penis is supported by the pelvic floor muscles. The urethra, which is the last part of the urinary tract, traverses the corpus spongiosum (as the spongy urethra) and opens through the urinary meatus at the tip of the glans. The penile raphe is the visible ridge between the lateral halves of the penis, found on the ventral or underside of the penis, running from the meatus and continuing as the perineal raphe across the scrotum and the perineum (the area between the scrotum and anus). The human penis differs from those of most other mammals, as it has no baculum (erectile bone) and instead relies entirely on engorgement with blood to reach its erect state. A distal ligament buttresses the glans penis and plays an integral role in the penile fibroskeleton; the structure is called the "os analog", a term coined by Geng Long Hsu in the Encyclopedia of Reproduction. It is a remnant of the baculum that has likely evolved due to a change in mating practice. The human penis cannot be withdrawn into the groin, and it is larger than average in the animal kingdom in proportion to body mass. As penile arterial flow varies from 2–3 to 60–80 mL/min, the human penis shifts between cotton-like softness and bony rigidity, a milieu well suited to the application of Pascal's law; the overall structure is unique in the human body. Size Penile measurements vary, with studies that rely on self-measurement reporting a significantly higher average size than those that rely on measurements taken by health professionals. A 2015 systematic review of 15,521 men, in which the subjects were measured by health professionals, showed that the average length of an erect human penis is 13.12 cm (5.17 inches), while the average circumference of an erect human penis is 11.66 cm (4.59 inches). Among all primates, the human penis is the largest in girth, but is comparable to the chimpanzee penis and the penises of certain other primates in length. Penis size is affected by genetics, but also by environmental factors such as fertility medications and chemical/pollution exposure. Normal variations Pearly penile papules are raised bumps of somewhat paler color around the base (sulcus) of the glans, which typically develop in males aged 20 to 40. As of 1999, different studies had produced estimates of incidence ranging from 8 to 48 percent of all men. They may be mistaken for warts, but are not harmful or infectious and do not require treatment. Fordyce's spots are small, raised, yellowish-white spots 1–2 mm in diameter that may appear on the penis; these again are common and not infectious. Sebaceous prominences are raised bumps similar to Fordyce's spots on the shaft of the penis, located at the sebaceous glands, and are normal. Phimosis is an inability to retract the foreskin fully. It is normal and harmless in infancy and pre-pubescence, occurring in about 8% of boys at age 10. According to the British Medical Association, treatment (topical steroid cream and/or manual stretching) does not need to be considered until age 19. Curvature: few penises are completely straight, with curves commonly seen in all directions (up, down, left, right). Sometimes the curve is very prominent, but it rarely inhibits sexual intercourse. 
Curvature as great as 30° is considered normal, and medical treatment is rarely considered unless the angle exceeds 45°. Changes to the curvature of a penis may be caused by Peyronie's disease. Development When the fetus is exposed to testosterone, the genital tubercle elongates (primordial phallus) and develops into the glans and shaft of the penis, and the urogenital folds fuse to become the penile raphe. The urethra within the penis (except within the glans) develops from the urogenital sinus. Growth in puberty On entering puberty, the penis, scrotum and testicles enlarge toward maturity. During the process, pubic hair grows above and around the penis. A large-scale study assessing penis size in thousands of 17- to 19-year-old males found no difference in average penis size between 17-year-olds and 19-year-olds. From this, it can be concluded that penile growth is typically complete not later than age 17, and possibly earlier. Physiological functions Urination Males expel urine from the bladder through the urethra, which passes through the prostate, where it is joined by the ejaculatory ducts, and then onward through the penis. At the root of the penis (the proximal end of the corpus spongiosum) lies the external sphincter muscle. This is a small sphincter of striated muscle tissue and is, in healthy males, under voluntary control. Relaxing the urethral sphincter allows the urine in the upper urethra to enter the penis properly and thus empty the urinary bladder. Physiologically, urination involves coordination between the central, autonomic, and somatic nervous systems. In infants, some elderly individuals, and those with neurological injury, urination may occur as an involuntary reflex. Brain centers that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex. During erection, these centers block the relaxation of the sphincter muscles, acting as a physiological separation of the excretory and reproductive functions of the penis and preventing urine from entering the upper portion of the urethra during ejaculation. Voiding position The distal section of the urethra allows a human male to direct the stream of urine by holding the penis. This flexibility allows the male to choose the posture in which to urinate. In cultures where more than a minimum of clothing is worn, the penis allows the male to urinate while standing without removing much of the clothing. It is customary for some boys and men to urinate in seated or crouched positions. The preferred position may be influenced by cultural or religious beliefs. Research on the medical superiority of either position exists, but the data are heterogeneous. A meta-analysis summarizing the evidence found no superior position for young, healthy males. For elderly males with LUTS, however, the sitting position differed from the standing position in the following respects: the post-void residual volume (PVR, mL) was significantly decreased, the maximum urinary flow (Qmax, mL/s) was increased, and the voiding time (VT, s) was decreased. This urodynamic profile is related to a lower risk of urologic complications, such as cystitis and bladder stones. Sexual stimulation and arousal Sexual stimulation of the penis, whether from mental stimuli (sexual fantasy), partnered activity, or masturbation, produces sexual arousal and can lead to orgasm. The glans and the frenulum are erogenous zones of the penis. The glans has many nerve endings, which makes it the most sensitive. 
The most effective ways to stimulate the penis are oral stimulation (fellatio), manual stimulation (a handjob or manual masturbation), and sexual penetration. Frot is mutual penile stimulation between men. Erection An erection is the stiffening and rising of the penis, which occurs during sexual arousal, though it can also happen in non-sexual situations. Spontaneous erections frequently occur during adolescence due to friction with clothing, a full bladder or large intestine, hormone fluctuations, nervousness, and undressing in a nonsexual situation. It is also normal for erections to occur during sleep and upon waking (see nocturnal penile tumescence). The primary physiological mechanism that brings about erection is the autonomic dilation of arteries supplying blood to the penis, which allows more blood to fill the three spongy erectile tissue chambers in the penis, the corpora cavernosa and corpus spongiosum, causing it to lengthen and stiffen. After vasocongestion, the now-engorged erectile tissue presses against and constricts the veins that carry blood away from the penis. More blood enters than leaves the penis until an equilibrium is reached where an equal volume of blood flows into the dilated arteries and out of the constricted veins; a constant erectile size is achieved at this equilibrium (a toy numerical sketch of this equilibrium follows below). Erection facilitates sexual intercourse, though it is not essential for various other sexual activities. Erection angle Although many erect penises point upwards, it is common and normal for an erect penis to curve in any direction. A penis may curve to the right or left, upwards or downwards, depending on the tension of the suspensory ligament that holds it in position. In a sample of 81 standing males aged 21 through 67, erection angles were recorded on a scale where zero degrees points straight up against the abdomen, 90 degrees is horizontal and pointing straight forward, and 180 degrees points straight down to the feet; an upward-pointing angle was the most common. Ejaculation Ejaculation is the ejection of semen from the penis. It is usually accompanied by orgasm. A series of muscular contractions delivers semen, containing male gametes known as sperm cells or spermatozoa, from the penis. Ejaculation usually happens as the result of sexual stimulation, but in rare cases it can be due to prostatic disease. Ejaculation may occur spontaneously during sleep (known as a nocturnal emission). Anejaculation is the condition of being unable to ejaculate. Sperm are produced in the testicles and stored in the attached epididymides. During ejaculation, sperm are propelled up the vasa deferentia, two ducts that pass over and behind the bladder. Fluids are added by the seminal vesicles, and the vasa deferentia turn into the ejaculatory ducts, which join the urethra inside the prostate. The prostate, as well as the bulbourethral glands, adds further secretions (including pre-ejaculate), and the semen is expelled through the penis. Ejaculation has two phases: emission and ejaculation proper. The emission phase of the ejaculatory reflex is under control of the sympathetic nervous system, while the ejaculatory phase is under control of a spinal reflex at the level of the spinal nerves S2–4 via the pudendal nerve. Sexual stimulation precedes ejaculation, and a refractory period follows it. 
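The passive inflow/outflow balance described above lends itself to a toy numerical illustration. The sketch below is not a physiological model: the functional form of the outflow term, the volumes, and the rate constant are illustrative assumptions; only the arterial-flow range (roughly 2–3 mL/min flaccid versus 60–80 mL/min erect, quoted earlier in this article) comes from the text.

# Toy model of the erectile inflow/outflow equilibrium described above:
# arterial dilation raises inflow, engorgement progressively restricts
# venous outflow, and volume settles where inflow equals outflow.
def equilibrium_volume(q_in, v_flaccid=50.0, v_max=150.0, k_out=4.0,
                       dt=0.01, steps=20000):
    """Integrate dV/dt = q_in - q_out(V) by forward Euler to a steady state."""
    v = v_flaccid
    for _ in range(steps):
        # Outflow conductance shrinks as V approaches v_max, mimicking
        # compression of the draining veins by the engorged tissue.
        conductance = k_out * max(0.0, (v_max - v) / (v_max - v_flaccid))
        q_out = conductance * (v - v_flaccid)
        v += (q_in - q_out) * dt
    return v

print(f"volume at  3 mL/min inflow: {equilibrium_volume(3.0):.1f} mL")   # stays near flaccid
print(f"volume at 70 mL/min inflow: {equilibrium_volume(70.0):.1f} mL")  # engorged equilibrium

With low inflow the model settles essentially at the flaccid volume; with high inflow it settles at a larger, constant volume where inflow and the restricted outflow balance, mirroring the equilibrium described in the text.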
The ischiocavernosus muscle helps to stabilize the penis during erection by compressing the crus and slowing the return of blood through the veins. The bulbospongiosus muscle also contributes to erection, along with the expulsion of urine and semen. Evolved adaptations The human penis has been argued to have several evolutionary adaptations that maximise reproductive success and minimise sperm competition. Sperm competition occurs when the sperm of two males simultaneously occupy the reproductive tract of a female and compete to fertilise the egg. If sperm competition results in the rival male's sperm fertilising the egg, cuckoldry could occur: the process whereby males unwittingly invest their resources into the offspring of another male, which, evolutionarily speaking, should be avoided. The most researched human penis adaptations are penis size and semen displacement. Penis size Sexually selected adaptations in penis size are thought to have evolved to maximise reproductive success and minimise sperm competition. Sperm competition has caused the human penis to evolve in length and size for sperm retention and displacement. To achieve this, the penis must be of sufficient length to reach any rival sperm and to maximally fill the vagina. To help ensure that the female retains the male's sperm, the human penis has evolved in length so that the ejaculate is placed close to the female cervix. This is achieved when complete penetration occurs and the penis pushes against the cervix. These adaptations release and retain sperm at the highest point of the vaginal tract. As a result, the sperm are also less vulnerable to sperm displacement and semen loss. Another reason for this adaptation is that, due to the nature of human posture, gravity creates vulnerability for semen loss. A long penis, which places the ejaculate deep in the vaginal tract, could therefore reduce the loss of semen. Another evolutionary theory of penis size is female mate choice and its associations with social judgements in modern-day society. A study illustrating female mate choice as an influence on penis size presented females with life-size, rotatable, computer-generated males. These varied in height, body shape and flaccid penis size, with these aspects being examples of masculinity. Female ratings of attractiveness for each male revealed that larger penises were associated with higher attractiveness ratings. These relations between penis size and attractiveness have led to frequently emphasized associations between masculinity and penis size in popular media. This has produced a social bias around penis size, with larger penises being preferred and having higher social status. This is reflected in the association between believed sexual prowess and penis size and in the social judgement of penis size in relation to 'manhood'. Semen displacement The shape of the human penis is thought to have evolved as a result of sperm competition. Semen displacement is an adaptation of the shape of the penis to draw foreign semen away from the cervix. This means that, in the event of a rival male's sperm occupying the reproductive tract of a female, a male is able to displace the rival sperm and replace it with his own. Semen displacement has two main benefits for a male. Firstly, by displacing a rival male's sperm, the risk of the rival sperm fertilising the egg is reduced. 
Secondly, the male replaces the rival's sperm with his own, thereby increasing the probability of fertilising the egg himself and successfully reproducing with the female. However, males must avoid displacing their own sperm. It is thought that the relatively quick loss of erection after ejaculation, penile hypersensitivity following ejaculation, and the shallower, slower thrusting of the male after ejaculation prevent this from occurring. The coronal ridge is the part of the human penis thought to have evolved to allow for semen displacement. Research has studied how much semen is displaced by differently shaped artificial genitals. This research showed that, when combined with thrusting, the coronal ridge of the penis is able to remove the seminal fluid of a rival male from within the female reproductive tract. It does this by forcing the semen under the frenulum of the coronal ridge, causing it to collect behind the coronal ridge shaft. When model penises without a coronal ridge were used, less than half the artificial sperm was displaced, compared to penises with a coronal ridge. The presence of a coronal ridge alone, however, is not sufficient for effective semen displacement; it must be combined with adequate thrusting to be successful. It has been shown that the deeper the thrusting, the larger the semen displacement, while no semen displacement occurs with shallow thrusting. Some have therefore termed thrusting a semen displacement behaviour. The behaviours associated with semen displacement, namely thrusting (number and depth of thrusts) and duration of sexual intercourse, have been shown to vary according to whether a male perceives the risk of partner infidelity to be high or not. Males and females report greater semen displacement behaviours following allegations of infidelity; in particular, they report deeper and quicker thrusting during sexual intercourse. Clinical significance Disorders Paraphimosis is an inability to move the foreskin forward over the glans. It can result from fluid trapped in a foreskin left retracted, perhaps following a medical procedure, or accumulation of fluid in the foreskin because of friction during vigorous sexual activity. In Peyronie's disease, anomalous scar tissue grows in the soft tissue of the penis, causing curvature. Severe cases can be improved by surgical correction. A thrombosis can occur during periods of frequent and prolonged sexual activity, especially fellatio. It is usually harmless and self-corrects within a few weeks. Sexually transmitted infections, for example with the herpes virus, can occur after sexual contact with an infected carrier and may lead to the development of herpes sores. Balanitis is an inflammation of the glans, either infectious or not. Pudendal nerve entrapment is a condition characterized by pain on sitting and the loss of penile sensation and orgasm; occasionally, there is a total loss of sensation and orgasm. The pudendal nerve can be damaged by narrow, hard bicycle seats and accidents. Penile fracture can occur if the erect penis is bent excessively. A popping or cracking sound and pain are normally associated with this event. Emergency medical assistance should be obtained as soon as possible; prompt medical attention lowers the likelihood of permanent penile curvature. In diabetes, peripheral neuropathy can cause tingling in the penile skin and possibly reduced or completely absent sensation. 
The reduced sensations can lead to injuries for either partner, and their absence can make it impossible to have sexual pleasure through stimulation of the penis. Since the problems are caused by permanent nerve damage, preventive treatment through good control of the diabetes is the primary treatment. Some limited recovery may be possible through improved diabetes control. Erectile dysfunction is the inability to develop and maintain an erection sufficiently firm for satisfactory sexual performance. Diabetes is a leading cause, as is natural aging. A variety of treatments exist, most notably the phosphodiesterase type 5 inhibitor drugs (such as sildenafil citrate, marketed as Viagra), which work by vasodilation. Priapism, a form of persistent genital arousal disorder, is a painful and potentially harmful medical condition in which the erect penis does not return to its flaccid state. Priapism lasting over four hours is a medical emergency. The causative mechanisms are poorly understood but involve complex neurological and vascular factors. Potential complications include ischaemia, thrombosis, and impotence. In serious cases the condition may result in gangrene, which may necessitate amputation; however, this usually occurs only if the organ has been injured as a result of the condition. The condition has been associated with a variety of drugs, including prostaglandin. Contrary to popular belief, sildenafil (Viagra) will not cause it. Lymphangiosclerosis is a hardened lymph vessel, although it can feel like a hardened, almost calcified or fibrous, vein. Unlike a vein, however, it tends not to have a blue tint. It can be felt as a hardened lump or "vein" even when the penis is flaccid, and is even more prominent during an erection. It is considered a benign physical condition. It is fairly common and can follow particularly vigorous sexual activity, and tends to go away if given rest and more gentle care, for example by use of lubricants. Carcinoma of the penis is rare, with a reported rate of 1 person in 100,000 in developed countries. Some sources state that circumcision can protect against this disease, but this notion remains controversial in medical circles. Hard flaccid syndrome is a rare, chronic condition characterized by a flaccid penis that remains in a firm, semi-rigid state in the absence of sexual arousal. Cold glans syndrome is a condition marked by the persistent inability of the glans penis to maintain an erect state during sexual arousal, potentially leading to reduced sensitivity and erection difficulties. Developmental disorders Hypospadias is a developmental disorder in which the meatus is positioned wrongly at birth. Hypospadias can also occur iatrogenically by the downward pressure of an indwelling urethral catheter. It is usually corrected by surgery. A micropenis is a very small penis caused by developmental or congenital problems. Diphallia, or penile duplication (PD), is the rare condition of having two penises. Alleged and observed psychological disorders Penis panic (koro in Malaysian/Indonesian) is a delusion of shrinkage of the penis and retraction into the body. This appears to be culturally conditioned and largely limited to Ghana, Sudan, China, Japan, Southeast Asia, and West Africa. 
In April 2008, police in Kinshasa, Democratic Republic of the Congo, arrested 14 suspected victims (of penis snatching) and sorcerers accused of using black magic or witchcraft to steal (make disappear) or shrink men's penises to extort cash for a cure, amid a wave of panic. The arrests were made in an effort to avoid the bloodshed seen in Ghana a decade before, when 12 suspected penis snatchers were beaten to death by mobs. Penis envy is the contested Freudian belief that all women inherently envy men for having penises. Society and culture Terminology In many cultures, referring to the penis is considered taboo or vulgar, and a variety of slang words and euphemisms are used to talk about it. In English, these include member, dick, cock, prick, johnson, dork, peter, pecker, manhood, stick, rod, third/middle leg, dong, willy, schlong, and todger. Many of these are used as insults, though sometimes playfully, meaning an unpleasant or unworthy person. Among these, historically, the most commonly used euphemism for penis in English literature and society was member. Alteration The penis is sometimes pierced or decorated by other body art. Other than circumcision, genital alterations are almost universally elective and usually for the purpose of aesthetics or increased sensitivity. Piercings of the penis include the Prince Albert, apadravya, ampallang, dydoe, deep shaft and frenum piercings. Foreskin restoration or stretching is a further form of body modification, as are implants under the shaft of the penis. Another type of alteration to the penis is genital tattooing. Trans women who undergo sex reassignment surgery have their penis surgically modified into a vagina or clitoris via vaginoplasty or clitoroplasty, respectively. Trans men who undergo such surgery have a phalloplasty or metoidioplasty. Other practices that alter the penis are also performed, although they are rare in Western societies without a diagnosed medical condition. Apart from penectomy, perhaps the most radical of these is subincision, in which the urethra is split along the underside of the penis. Subincision originated among Australian Aborigines, although it is now practised by some in the U.S. and Europe. Circumcision The most common form of body modification related to the penis is circumcision: removal of part or all of the foreskin. It is most commonly performed as an elective procedure for prophylactic, cultural, or religious reasons. For infant circumcision, modern devices such as the Gomco clamp, Plastibell, and Mogen clamp are available. The ethics of circumcision in children is a source of controversy. Among the world's major medical organizations, there is a consensus that circumcision reduces heterosexual HIV infection rates in high-risk populations during penile-vaginal sex. There are differing perspectives on the prophylactic efficacy and cost-effectiveness of circumcision in developed nations. Circumcision plays a significant role in many of the world's cultures. When performed for religious reasons, it is most common among Jews and Muslims, among whom it is near-universal. Potential regeneration There are efforts by scientists to partially or fully regenerate the structures of the human penis. Patients who can benefit most from this field are those who have congenital defects, cancer, and injuries that have excised parts of their genitalia. 
Some organizations that perform research into, or conduct, regeneration procedures include the Wake Forest Institute for Regenerative Medicine and the United States Department of Defense. The first penis allotransplant surgery was performed in September 2005 in a military hospital in Guangzhou, China. A 44-year-old man had sustained an injury in an accident in which his penis was severed; urination became difficult, as his urethra was partly blocked. A recently brain-dead man, aged 23, was selected as the donor. Despite atrophy of blood vessels and nerves, the arteries, veins, nerves and the corpora spongiosa were successfully matched. But on 19 September (after two weeks), the transplant was reversed because of a severe psychological problem (rejection) on the part of the recipient and his wife. In 2009, researchers Chen, Eberli, Yoo and Atala produced bioengineered penises and implanted them in rabbits. The rabbits were able to obtain erections and copulate, with 10 of 12 achieving ejaculation. This study suggests that it may become possible to produce artificial penises for replacement surgeries or phalloplasties. In 2015, the world's first successful penis transplant took place in Cape Town, South Africa, in a nine-hour operation performed by surgeons from Stellenbosch University and Tygerberg Hospital. The 21-year-old recipient, who had been sexually active, had lost his penis in a botched circumcision at 18.
Biology and health sciences
Human anatomy
Health
2291204
https://en.wikipedia.org/wiki/Genetically%20modified%20crops
Genetically modified crops
Genetically modified crops (GM crops) are plants used in agriculture, the DNA of which has been modified using genetic engineering methods. Plant genomes can be engineered by physical methods or by use of Agrobacterium for the delivery of sequences hosted in T-DNA binary vectors. In most cases, the aim is to introduce a new trait to the plant which does not occur naturally in the species. Examples in food crops include resistance to certain pests, diseases, or environmental conditions, reduction of spoilage, resistance to chemical treatments (e.g. resistance to a herbicide), or improving the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. Farmers have widely adopted GM technology. Acreage increased from 1.7 million hectares in 1996 to 185.1 million hectares in 2016, some 12% of global cropland. As of 2016, the major crop (soybean, maize, canola and cotton) traits consist of herbicide tolerance (95.9 million hectares), insect resistance (25.2 million hectares), or both (58.5 million hectares). In 2015, 53.6 million ha of genetically modified maize were under cultivation (almost one third of the maize crop). GM maize outperformed its predecessors: yield was 5.6 to 24.5% higher, with lower levels of mycotoxins (−28.8%), fumonisin (−30.6%) and trichothecenes (−36.5%). Non-target organisms were unaffected, except for lower populations of some parasitoid wasps due to decreased populations of their host pest, the European corn borer, which is a target of Lepidoptera-active Bt maize. Biogeochemical parameters such as lignin content did not vary, while biomass decomposition was higher. A 2014 meta-analysis concluded that GM technology adoption had reduced chemical pesticide use by 37%, increased crop yields by 22%, and increased farmer profits by 68%. This reduction in pesticide use has been ecologically beneficial, but benefits may be reduced by overuse. Yield gains and pesticide reductions are larger for insect-resistant crops than for herbicide-tolerant crops. Yield and profit gains are higher in developing countries than in developed countries. Pesticide poisonings were reduced by 2.4 to 9 million cases per year in India alone. A 2011 review of the relationship between Bt cotton adoption and farmer suicides in India found that "available data show no evidence of a 'resurgence' of farmer suicides" and that "Bt cotton technology has been very effective overall in India". During the period of Bt cotton introduction in India, farmer suicides in fact declined by 25%. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. History Humans have directly influenced the genetic makeup of plants to increase their value as a crop through domestication. The first evidence of plant domestication comes from emmer and einkorn wheat found in pre-Pottery Neolithic A villages in Southwest Asia dated about 10,500 to 10,100 BC. 
The Fertile Crescent of Western Asia, Egypt, and India were sites of the earliest planned sowing and harvesting of plants that had previously been gathered in the wild. Independent development of agriculture occurred in northern and southern China, Africa's Sahel, New Guinea and several regions of the Americas. The eight Neolithic founder crops (emmer wheat, einkorn wheat, barley, peas, lentils, bitter vetch, chick peas and flax) had all appeared by about 7,000 BC. Traditional crop breeders have long introduced foreign germplasm into crops by creating novel crosses. A hybrid cereal grain was created in 1875 by crossing wheat and rye. Since then, traits including dwarfing genes and rust resistance have been introduced in that manner. Plant tissue culture and deliberate mutations have enabled humans to alter the makeup of plant genomes. Modern advances in genetics have allowed humans to alter plant genetics more directly. In 1970 Hamilton Smith's lab discovered restriction enzymes, which allowed DNA to be cut at specific places, enabling scientists to isolate genes from an organism's genome. DNA ligases, which join broken DNA together, had been discovered earlier, in 1967, and by combining the two technologies it was possible to "cut and paste" DNA sequences and create recombinant DNA. Plasmids, discovered in 1952, became important tools for transferring information between cells and replicating DNA sequences. In 1907 a bacterium that caused plant tumors, Agrobacterium tumefaciens, was discovered, and in the early 1970s the tumor-inducing agent was found to be a DNA plasmid called the Ti plasmid. By removing the genes in the plasmid that caused the tumor and adding in novel genes, researchers were able to infect plants with A. tumefaciens and let the bacteria insert their chosen DNA sequence into the genomes of the plants. As not all plant cells were susceptible to infection by A. tumefaciens, other methods were developed, including electroporation, micro-injection and particle bombardment with a gene gun (invented in 1987). In the 1980s techniques were developed to introduce isolated chloroplasts back into a plant cell that had had its cell wall removed. With the introduction of the gene gun in 1987 it became possible to integrate foreign genes into a chloroplast. Genetic transformation has become very efficient in some model organisms. In 2008 genetically modified seeds were produced in Arabidopsis thaliana simply by dipping the flowers in an Agrobacterium solution. In 2013 CRISPR was first used to target modification of plant genomes. The first genetically engineered crop plant was tobacco, reported in 1983. It was developed by creating a chimeric gene that joined an antibiotic-resistance gene to the Ti plasmid from Agrobacterium. The tobacco was infected with Agrobacterium transformed with this plasmid, resulting in the chimeric gene being inserted into the plant. Through tissue culture techniques, a single tobacco cell was selected that contained the gene, and a new plant was grown from it. The first field trials of genetically engineered plants occurred in France and the US in 1986, when tobacco plants engineered to be resistant to herbicides were tested. In 1987 Plant Genetic Systems, founded by Marc Van Montagu and Jeff Schell, was the first company to genetically engineer insect-resistant plants, by incorporating genes that produced insecticidal proteins from Bacillus thuringiensis (Bt) into tobacco. 
The People's Republic of China was the first country to commercialise transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994 Calgene attained approval to commercially release the Flavr Savr tomato, a tomato engineered to have a longer shelf life. Also in 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialised in Europe. In 1995 Bt potato was approved as safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the US. By 1996 a total of 35 approvals had been granted to commercially grow 8 transgenic crops and one flower crop (carnation), with 8 different traits, in 6 countries plus the EU. By 2010, 29 countries had planted commercialised genetically modified crops and a further 31 countries had granted regulatory approval for transgenic crops to be imported. GM banana cultivar QCAV-4 was approved by Australia and New Zealand in 2024. The banana resists the fungus that is fatal to the Cavendish banana, the dominant cultivar. Methods Genetically engineered crops have genes added or removed using genetic engineering techniques, originally including gene guns, electroporation, microinjection and Agrobacterium. More recently, CRISPR and TALEN have offered much more precise and convenient editing techniques. Gene guns (also known as biolistics) "shoot" target genes into plant cells by directing high-energy particles at the tissue. It is the most common method. DNA is bound to tiny particles of gold or tungsten which are subsequently shot into plant tissue or single plant cells under high pressure. The accelerated particles penetrate both the cell wall and membranes. The DNA separates from the metal and is integrated into plant DNA inside the nucleus. This method has been applied successfully for many cultivated crops, especially monocots like wheat or maize, for which transformation using Agrobacterium tumefaciens has been less successful. The major disadvantage of this procedure is that serious damage can be done to the cellular tissue. Agrobacterium tumefaciens-mediated transformation is another common technique. Agrobacteria are natural plant parasites, and their natural ability to transfer genes provides another engineering method. To create a suitable environment for themselves, these Agrobacteria insert their genes into plant hosts, resulting in a proliferation of modified plant cells near the soil level (crown gall). The genetic information for tumor growth is encoded on a mobile, circular DNA fragment (plasmid). When Agrobacterium infects a plant, it transfers this T-DNA to a random site in the plant genome. When used in genetic engineering, the bacterial T-DNA is removed from the bacterial plasmid and replaced with the desired foreign gene. The bacterium is a vector, enabling transportation of foreign genes into plants. This method works especially well for dicotyledonous plants like potatoes, tomatoes, and tobacco. Agrobacteria infection is less successful in crops like wheat and maize. Electroporation is used when the plant tissue does not contain cell walls. In this technique, "DNA enters the plant cells through miniature pores which are temporarily caused by electric pulses." Microinjection is used to directly inject foreign DNA into cells. 
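As an illustration of the Agrobacterium workflow just described, the sketch below assembles a hypothetical T-DNA cassette in Python: the tumor genes between the border repeats are replaced by a promoter, a gene of interest, and a terminator, and the coding sequence is first rewritten with host-preferred codons (promoter choice and codon-usage bias are discussed in the next paragraph). Every sequence, element name, and codon-table entry here is a made-up placeholder, not a real vector or genome sequence.

# Sketch of building a replacement T-DNA cassette: everything between the
# left and right border repeats is what Agrobacterium delivers into the
# plant genome, so the native tumor genes are swapped for
# promoter + gene + terminator. All sequences below are placeholders.
PREFERRED_CODONS = {  # toy excerpt of a hypothetical host codon-usage table
    "M": "ATG", "K": "AAG", "V": "GTG", "L": "CTC", "*": "TGA",
}

def codon_optimize(protein):
    """Back-translate a protein using the host's most frequent codons."""
    return "".join(PREFERRED_CODONS[aa] for aa in protein)

def build_t_dna(gene_cds):
    """Assemble a minimal cassette flanked by the T-DNA border repeats."""
    left_border = "<LB repeat>"    # ~25 bp border sequences in a real vector
    right_border = "<RB repeat>"
    promoter = "<plant promoter>"  # drives expression in the plant
    terminator = "<terminator>"
    return left_border + promoter + gene_cds + terminator + right_border

cds = codon_optimize("MKVL*")      # toy four-residue gene plus stop codon
print(build_t_dna(cds))            # cassette ready for a binary vector

In a real workflow the cassette would be carried on a binary vector inside the bacterium, which then transfers the region between the borders to a random site in the plant genome, as the text describes.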
Plant scientists, backed by the results of modern comprehensive profiling of crop composition, point out that crops modified using GM techniques are less likely to have unintended changes than are conventionally bred crops. In research, tobacco and Arabidopsis thaliana are the most frequently modified plants, due to well-developed transformation methods, easy propagation and well-studied genomes. They serve as model organisms for other plant species. Introducing new genes into plants requires a promoter specific to the area where the gene is to be expressed. For instance, to express a gene only in rice grains and not in leaves, an endosperm-specific promoter is used. The codons of the gene must also be optimized for the organism due to codon usage bias. Types of modifications Transgenic Transgenic plants have genes inserted into them that are derived from another species. The inserted genes can come from species within the same kingdom (plant to plant) or between kingdoms (for example, bacteria to plant). In many cases the inserted DNA has to be modified slightly in order to be correctly and efficiently expressed in the host organism. Transgenic plants are used to express proteins, like the cry toxins from B. thuringiensis, herbicide-resistance genes, antibodies, and antigens for vaccinations. A study led by the European Food Safety Authority (EFSA) also found viral genes in transgenic plants. Transgenic carrots have been used to produce the drug taliglucerase alfa, which is used to treat Gaucher's disease. In the laboratory, transgenic plants have been modified to increase photosynthesis (currently about 2% in most plants, versus a theoretical potential of 9–10%). This is possible by changing the rubisco enzyme (i.e. changing C3 plants into C4 plants), by placing the rubisco in a carboxysome, by adding pumps in the cell wall, or by changing the leaf form or size. Plants have been engineered to exhibit bioluminescence that may become a sustainable alternative to electric lighting. Cisgenic Cisgenic plants are made using genes found within the same species or a closely related, sexually compatible one, where conventional plant breeding can occur. Some breeders and scientists argue that cisgenic modification is useful for plants that are difficult to crossbreed by conventional means (such as potatoes), and that plants in the cisgenic category should not require the same regulatory scrutiny as transgenics. Subgenic Genetically modified plants can also be developed using gene knockdown or gene knockout to alter the genetic makeup of a plant without incorporating genes from other plants. In 2014, Chinese researcher Gao Caixia filed patents on the creation of a strain of wheat that is resistant to powdery mildew. The strain lacks genes that encode proteins that repress defenses against the mildew. The researchers deleted all three copies of the genes from wheat's hexaploid genome. Gao used the TALEN and CRISPR gene-editing tools without adding or changing any other genes. No field trials were immediately planned. The CRISPR technique has also been used by Penn State researcher Yinong Yang to modify white button mushrooms (Agaricus bisporus) to be non-browning, and by DuPont Pioneer to make a new variety of corn. Multiple trait integration With multiple trait integration, several new traits may be integrated into a new crop. Economics GM food's economic value to farmers is one of its major benefits, including in developing nations. 
A 2010 study found that Bt corn provided economic benefits of $6.9 billion over the previous 14 years in five Midwestern states. The majority ($4.3 billion) accrued to farmers producing non-Bt corn. This was attributed to European corn borer populations reduced by exposure to Bt corn, leaving fewer to attack conventional corn nearby. Agricultural economists calculated that "world surplus [increased by] $240.3 million for 1996. Of this total, the largest share (59%) went to U.S. farmers. Seed company Monsanto received the next largest share (21%), followed by US consumers (9%), the rest of the world (6%), and the germplasm supplier, Delta & Pine Land Company of Mississippi (5%)." According to the International Service for the Acquisition of Agri-biotech Applications (ISAAA), in 2014 approximately 18 million farmers grew biotech crops in 28 countries; about 94% of the farmers were resource-poor, in developing countries. 53% of the global biotech crop area of 181.5 million hectares was grown in 20 developing countries. A comprehensive 2012 study by PG Economics concluded that GM crops increased farm incomes worldwide by $14 billion in 2010, with over half of this total going to farmers in developing countries. Forgoing these benefits is costly. Wesseler et al. (2017) estimate the cost of delay for several crops, including GM banana in Uganda, GM cowpea in West Africa, and GM maize/corn in Kenya; they estimate that Nigeria alone loses $33–46 million annually. The potential and alleged harms of GM crops must then be compared with these costs of delay. Critics have challenged the claimed benefits to farmers, citing the prevalence of biased observers and the absence of randomized controlled trials. The main Bt crop grown by small farmers in developing countries is cotton. A 2006 review of Bt cotton findings by agricultural economists concluded, "the overall balance sheet, though promising, is mixed. Economic returns are highly variable over years, farm type, and geographical location". In 2013 the European Academies Science Advisory Council (EASAC) asked the EU to allow the development of agricultural GM technologies to enable more sustainable agriculture, employing less land, water, and nutrient resources. EASAC also criticized the EU's "time-consuming and expensive regulatory framework" and said that the EU had fallen behind in the adoption of GM technologies. Participants in agriculture business markets include seed companies, agrochemical companies, distributors, farmers, grain elevators and universities that develop new crops/traits and whose agricultural extensions advise farmers on best practices. According to a 2012 review based on data from the late 1990s and early 2000s, much of the GM crop grown each year is used for livestock feed, and increased demand for meat leads to increased demand for GM feed crops. Feed grain usage as a percentage of total crop production is 70% for corn and more than 90% for oilseed meals such as soybean meal. About 65 million metric tons of GM corn grain and about 70 million metric tons of soybean meal derived from GM soybean become feed. In 2014 the global value of biotech seed was US$15.7 billion; US$11.3 billion (72%) was in industrial countries and US$4.4 billion (28%) was in developing countries. In 2009, Monsanto had $7.3 billion in sales of seeds and from licensing its technology; DuPont, through its Pioneer subsidiary, was the next biggest company in that market. 
As of 2009, the overall Roundup line of products, including the GM seeds, represented about 50% of Monsanto's business. Some patents on GM traits have expired, allowing the legal development of generic strains that include these traits. For example, generic glyphosate-tolerant GM soybean is now available. Another impact is that traits developed by one vendor can be added to another vendor's proprietary strains, potentially increasing product choice and competition. The patent on the first type of Roundup Ready crop that Monsanto produced (soybeans) expired in 2014, and the first harvest of off-patent soybeans occurred in the spring of 2015. Monsanto has broadly licensed the patent to other seed companies that include the glyphosate-resistance trait in their seed products. About 150 companies have licensed the technology, including Syngenta and DuPont Pioneer. Yield In 2014, the largest review yet concluded that GM crops' effects on farming were positive. The meta-analysis considered all published English-language examinations of the agronomic and economic impacts between 1995 and March 2014 for three major GM crops: soybean, maize, and cotton. The study found that herbicide-tolerant crops have lower production costs, while for insect-resistant crops the reduced pesticide use was offset by higher seed prices, leaving overall production costs about the same. Yields increased 9% for herbicide-tolerant and 25% for insect-resistant varieties. Farmers who adopted GM crops made 69% higher profits than those who did not. The review found that GM crops help farmers in developing countries, increasing yields by 14 percentage points. The researchers considered some studies that were not peer-reviewed, and a few that did not report sample sizes. They attempted to correct for publication bias by considering sources beyond academic journals. The large data set allowed the study to control for potentially confounding variables such as fertilizer use. Separately, they concluded that the funding source did not influence study results. Under special conditions meant to reveal only genetic yield factors, many GM crops are known to actually have lower yields. This is variously due to one or both of yield drag, wherein the trait itself lowers yield, either by competing for synthesis feedstock or by being inserted slightly inaccurately into the middle of a yield-relevant gene, and yield lag, wherein it takes some time to breed the newest yield genetics into the GM lines. However, this does not reflect realistic field conditions, especially as it leaves out the pest pressure that is often the point of the GM trait (see, for example, Roundup Ready § Productivity claims). Gene editing may also increase yields independently of the use of any biocides or pesticides. In March 2022, field test results showed that CRISPR-based gene knockout of KRN2 in maize and OsKRN2 in rice increased grain yields by ~10% and ~8%, respectively, without any detected negative effects. Traits GM crops grown today, or under development, have been modified with various traits. These traits include improved shelf life, disease resistance, stress resistance, herbicide resistance, pest resistance, production of useful goods such as biofuel or drugs, and the ability to absorb toxins for use in bioremediation of pollution. Recently, research and development has been targeted at enhancement of crops that are locally important in developing countries, such as insect-resistant cowpea for Africa and insect-resistant brinjal (eggplant). 
Extended shelf life The first genetically modified crop approved for sale in the U.S. was the FlavrSavr tomato, which had a longer shelf life. First sold in 1994, it ceased production in 1997 and is no longer on the market. In November 2014, the USDA approved a GM potato that prevents bruising. In February 2015 Arctic Apples were approved by the USDA, becoming the first genetically modified apple approved for US sale. Gene silencing was used to reduce the expression of polyphenol oxidase (PPO), thus preventing enzymatic browning of the fruit after it has been sliced open. The trait was added to Granny Smith and Golden Delicious varieties. The trait includes a bacterial antibiotic-resistance gene that provides resistance to the antibiotic kanamycin. The genetic engineering involved cultivation in the presence of kanamycin, which allowed only resistant cultivars to survive. Humans consuming apples do not acquire kanamycin resistance, per arcticapple.com. The FDA approved the apples in March 2015. Improved photosynthesis Plants use non-photochemical quenching to protect themselves from excessive amounts of sunlight. Plants can switch on the quenching mechanism almost instantaneously, but it takes much longer for it to switch off again. During the time that it is switched on, the amount of energy that is wasted increases. A genetic modification of three genes allows plants to correct this (demonstrated in a trial with tobacco plants). As a result, yields were 14–20% higher in terms of the weight of the dry leaves harvested. The plants had larger leaves, were taller and had more vigorous roots. Another improvement that can be made to the photosynthesis process (in C3 pathway plants) concerns photorespiration. By inserting the C4 pathway into C3 plants, productivity may increase by as much as 50% for cereal crops such as rice. Improved biosequestration capability The Harnessing Plants Initiative focuses on creating GM plants that have increased root mass, root depth and suberin content. Improved nutritional value Edible oils Some GM soybeans offer improved oil profiles for processing. Camelina sativa has been modified to produce plants that accumulate high levels of oils similar to fish oils. Vitamin enrichment Golden rice, developed by the International Rice Research Institute (IRRI), provides greater amounts of vitamin A, targeted at reducing vitamin A deficiency. As of January 2016, golden rice had not yet been grown commercially in any country. Toxin reduction A genetically modified cassava under development offers lower cyanogenic glucosides and enhanced protein and other nutrients (called BioCassava). In November 2014, the USDA approved a potato that prevents bruising and produces less acrylamide when fried. It does not employ genes from non-potato species. The trait was added to the Russet Burbank, Ranger Russet and Atlantic varieties. Stress resistance Plants have been engineered to tolerate non-biological stressors such as drought, frost, and high soil salinity. In 2011, Monsanto's DroughtGard maize became the first drought-resistant GM crop to receive US marketing approval. Drought resistance occurs by modifying the plant's genes responsible for the mechanism known as crassulacean acid metabolism (CAM), which allows the plants to survive despite low water levels. This holds promise for water-heavy crops such as rice, wheat, soybeans and poplar, accelerating their adaptation to water-limited environments. Several salinity-tolerance mechanisms have been identified in salt-tolerant crops. 
For example, rice, canola and tomato crops have been genetically modified to increase their tolerance to salt stress. Herbicides Glyphosate The most prevalent GM trait is herbicide tolerance, of which glyphosate tolerance is the most common. Glyphosate (the active ingredient in Roundup and other herbicide products) kills plants by interfering with the shikimate pathway in plants, which is essential for the synthesis of the aromatic amino acids phenylalanine, tyrosine, and tryptophan. The shikimate pathway is not present in animals, which instead obtain aromatic amino acids from their diet. More specifically, glyphosate inhibits the enzyme 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS). This trait was developed because the herbicides used on grain and grass crops at the time were highly toxic and not effective against narrow-leaved weeds. Thus, developing crops that could withstand spraying with glyphosate would both reduce environmental and health risks and give an agricultural edge to the farmer. Some micro-organisms have a version of EPSPS that is resistant to glyphosate inhibition. One of these was isolated from Agrobacterium strain CP4 (CP4 EPSPS). The CP4 EPSPS gene was engineered for plant expression by fusing the 5' end of the gene to a chloroplast transit peptide derived from the petunia EPSPS. This transit peptide was used because it had previously shown an ability to deliver bacterial EPSPS to the chloroplasts of other plants. The CP4 EPSPS gene was cloned and transfected into soybeans. The plasmid used to move the gene into soybeans was PV-GMGTO4. It contained three bacterial genes: two CP4 EPSPS genes and a gene encoding beta-glucuronidase (GUS) from Escherichia coli as a marker. The DNA was injected into the soybeans using the particle-acceleration method. Soybean cultivar A54O3 was used for the transformation. Bromoxynil Tobacco plants have been engineered to be resistant to the herbicide bromoxynil. Glufosinate Crops have been commercialized that are resistant to the herbicide glufosinate as well. Crops engineered for resistance to multiple herbicides, to allow farmers to use a mixed group of two, three, or four different chemicals, are under development to combat growing herbicide resistance. 2,4-D In October 2014 the US EPA registered Dow's Enlist Duo maize, which is genetically modified to be resistant to both glyphosate and 2,4-D, in six states. Insertion of a bacterial aryloxyalkanoate dioxygenase gene, aad1, makes the corn resistant to 2,4-D. The USDA had approved maize and soybeans with the mutation in September 2014. Dicamba Monsanto has requested approval for a stacked strain that is tolerant of both glyphosate and dicamba. The request includes plans for avoiding herbicide drift to other crops. In 2017, significant damage to non-resistant crops occurred when dicamba formulations intended to reduce volatilization drifted after being sprayed on resistant soybeans. The newer dicamba formulation labels specify not to spray when average wind speeds are above a stated limit (to avoid particle drift) or below a stated limit (to avoid temperature inversions), or when rain or high temperatures are in the next day's forecast. However, these conditions typically occur for only a few hours at a time during June and July. Pest resistance Insects Tobacco, corn, rice and some other crops have been engineered to express genes encoding insecticidal proteins from Bacillus thuringiensis (Bt). 
The introduction of Bt crops during the period between 1996 and 2005 has been estimated to have reduced the total volume of insecticide active ingredient used in the United States by over 100 thousand tons. This represents a 19.4% reduction in insecticide use. In the late 1990s, a genetically modified potato that was resistant to the Colorado potato beetle was withdrawn because major buyers rejected it, fearing consumer opposition. Viruses Plant viruses cause around half of the plant diseases emerging worldwide and an estimated 10–15% of losses in crop yields. Papaya, potatoes, and squash have been engineered to resist viral pathogens such as cucumber mosaic virus which, despite its name, infects a wide variety of plants. Virus-resistant papaya was developed in response to a papaya ringspot virus (PRV) outbreak in Hawaii in the late 1990s; it incorporates PRV DNA. By 2010, 80% of Hawaiian papaya plants were genetically modified. Potatoes were engineered for resistance to potato leafroll virus and Potato virus Y in 1998. Poor sales led to their market withdrawal after three years. Yellow squash resistant to at first two, then three viruses were developed, beginning in the 1990s. The viruses are watermelon mosaic, cucumber mosaic and zucchini/courgette yellow mosaic. Squash was the second GM crop to be approved by US regulators. The trait was later added to zucchini. Many strains of corn have been developed in recent years to combat the spread of Maize dwarf mosaic virus, a costly virus that causes stunted growth, which is carried in Johnson grass and spread by aphid insect vectors. These strains are commercially available, although the resistance is not standard among GM corn variants. By-products Drugs In 2012, the FDA approved the first plant-produced pharmaceutical, a treatment for Gaucher's disease. Tobacco plants have been modified to produce therapeutic antibodies. Biofuel Algae are under development for use in biofuels. Modifying microalgae to produce more lipids for mass biofuel production has become a focus, though it will take years to see results because of the cost of lipid extraction. Researchers in Singapore were working on GM jatropha for biofuel production. Syngenta has USDA approval to market a maize trademarked Enogen that has been genetically modified to convert its starch to sugar for ethanol. Some trees have been genetically modified to either have less lignin or to express lignin with chemically labile bonds. Lignin is the critical limiting factor when using wood to make bio-ethanol, because it limits the accessibility of cellulose microfibrils to depolymerization by enzymes. Besides trees, chemically labile lignin bonds are also very useful for cereal crops such as maize. Materials Companies and labs are working on plants that can be used to make bioplastics. Potatoes that produce industrially useful starches have been developed as well. Oilseed can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals. Non-pesticide pest management products Besides the modified oil crop above, Camelina sativa has also been modified to produce Helicoverpa armigera pheromones, and a Spodoptera frugiperda version is in progress. The H. armigera pheromones have been tested and are effective. Bioremediation In 2011, scientists at the University of York developed a weed (Arabidopsis thaliana) containing genes from bacteria that could clean TNT and RDX explosive contaminants from soil. 
An estimated 16 million hectares in the US (1.5% of the total surface) are contaminated with TNT and RDX. However, A. thaliana was not tough enough for use on military test grounds. Modifications in 2016 included switchgrass and bentgrass. Genetically modified plants have been used for bioremediation of contaminated soils, targeting mercury, selenium and organic pollutants such as polychlorinated biphenyls (PCBs). Marine environments are especially vulnerable, since pollution such as oil spills is not containable. In addition to anthropogenic pollution, millions of tons of petroleum annually enter the marine environment from natural seepages. Despite its toxicity, a considerable fraction of petroleum oil entering marine systems is eliminated by the hydrocarbon-degrading activities of microbial communities. Particularly successful is a recently discovered group of specialists, the so-called hydrocarbonoclastic bacteria (HCCB), which may offer useful genes. Asexual reproduction Crops such as maize reproduce sexually each year. This randomizes which genes get propagated to the next generation, meaning that desirable traits can be lost. To maintain a high-quality crop, some farmers purchase seeds every year. Typically, the seed company maintains two inbred varieties and crosses them into a hybrid strain that is then sold. Related plants like sorghum and gamma grass are able to perform apomixis, a form of asexual reproduction that keeps the plant's DNA intact. This trait is apparently controlled by a single dominant gene, but traditional breeding has been unsuccessful in creating asexually reproducing maize. Genetic engineering offers another route to this goal. Successful modification would allow farmers to replant harvested seeds that retain desirable traits, rather than relying on purchased seed. Other Genetic modifications to some crops also exist which make the crop easier to process, e.g. by growing it in a more compact form. Crops such as tomatoes have been modified to be seedless. Tobacco has been modified to produce chlorophyll c in addition to a and b, increasing growth rates. The transgene was discovered in marine algae, which use it to gain energy from the blue light that penetrates seawater more effectively than longer wavelengths. Crops GM Camelina Several modifications of Camelina sativa have been made; see §Edible oils and §Non-pesticide pest management products above. Development The number of USDA-approved field releases for testing grew from 4 in 1985 to 1,194 in 2002 and averaged around 800 per year thereafter. The number of sites per release and the number of gene constructs (ways that the gene of interest is packaged together with other elements) have increased rapidly since 2005. Releases with agronomic properties (such as drought resistance) jumped from 1,043 in 2005 to 5,190 in 2013. As of September 2013, about 7,800 releases had been approved for corn, more than 2,200 for soybeans, more than 1,100 for cotton, and about 900 for potatoes. Releases were approved for herbicide tolerance (6,772 releases), insect resistance (4,809), product quality such as flavor or nutrition (4,896), agronomic properties like drought resistance (5,190), and virus/fungal resistance (2,616). The institutions with the most authorized field releases include Monsanto with 6,782, Pioneer/DuPont with 1,405, Syngenta with 565, and USDA's Agricultural Research Service with 370. 
As of September 2013, USDA had received proposals for releasing GM rice, squash, plum, rose, tobacco, flax, and chicory. GMO designer plants for Mars Researchers at North Carolina State University are designing genetically modified plants and seeds to ship to Mars that could live in habitable greenhouses or bio-domes to help establish plant life on the planet. NASA's NIAC is sponsoring this work on designer plants/trees or genetically modified vegetation that could better survive on Mars. CRISPR gene editing, drawing on genes from extremophiles on Earth, is used to help the plants withstand the harsh Martian regolith and atmosphere, including challenges such as ultraviolet radiation, extreme cold, low atmospheric pressure, perchlorates, and drought. The plants and seeds could then be tested outdoors to try to start an ecosystem for the full terraforming of Mars. Farming practices Resistance Bacillus thuringiensis Constant exposure to a toxin creates evolutionary pressure for pests resistant to that toxin. Over-reliance on glyphosate and a reduction in the diversity of weed management practices allowed the spread of glyphosate resistance in 14 weed species in the US, notably in soybean fields. To reduce resistance to Bacillus thuringiensis (Bt) crops, the 1996 commercialization of transgenic cotton and maize came with a management strategy to prevent insects from becoming resistant. Insect resistance management plans are mandatory for Bt crops. The aim is to encourage a large population of pests so that any (recessive) resistance genes are diluted within the population. Resistance lowers evolutionary fitness in the absence of the stressor, Bt. In refuges, non-resistant strains outcompete resistant ones. With sufficiently high levels of transgene expression, nearly all of the heterozygotes (S/s), i.e., the largest segment of the pest population carrying a resistance allele, will be killed before maturation, thus preventing transmission of the resistance gene to their progeny. Refuges (i.e., fields of nontransgenic plants) adjacent to transgenic fields increase the likelihood that homozygous resistant (s/s) individuals and any surviving heterozygotes will mate with susceptible (S/S) individuals from the refuge, instead of with other individuals carrying the resistance allele. As a result, the resistance gene frequency in the population remains lower (a simple quantitative sketch of this dilution effect is given at the end of this section). Complicating factors can affect the success of the high-dose/refuge strategy. For example, if the temperature is not ideal, thermal stress can lower Bt toxin production and leave the plant more susceptible. More importantly, reduced late-season expression has been documented, possibly resulting from DNA methylation of the promoter. The high-dose/refuge strategy has successfully maintained the value of Bt crops. This success has depended on factors independent of the management strategy, including low initial resistance allele frequencies, fitness costs associated with resistance, and the abundance of non-Bt host plants outside the refuges. Companies that produce Bt seed are introducing strains with multiple Bt proteins. Monsanto did this with Bt cotton in India, where the product was rapidly adopted. Monsanto has also begun marketing seed bags with a set proportion of refuge (non-transgenic) seeds mixed in with the Bt seeds being sold, in an attempt to simplify the process of implementing refuges in fields, comply with Insect Resistance Management (IRM) policies and prevent irresponsible planting practices.
Coined "Refuge-In-a-Bag" (RIB), this practice is intended to increase farmer compliance with refuge requirements and reduce the additional labor needed at planting from having separate Bt and refuge seed bags on hand. This strategy is likely to reduce the likelihood of Bt resistance occurring for corn rootworm, but may increase the risk of resistance for lepidopteran corn pests, such as European corn borer. Concerns about resistance with seed mixtures include partially resistant larvae on a Bt plant being able to move to a susceptible plant to survive, and cross-pollination of refuge pollen onto Bt plants, which can lower the amount of Bt expressed in kernels eaten by ear-feeding insects. Herbicide resistance Best management practices (BMPs) to control weeds may help delay resistance. BMPs include applying multiple herbicides with different modes of action, rotating crops, planting weed-free seed, scouting fields routinely, cleaning equipment to reduce the transmission of weeds to other fields, and maintaining field borders. The most widely planted GM crops are designed to tolerate herbicides. By 2006 some weed populations had evolved to tolerate some of the same herbicides. Palmer amaranth is a weed that competes with cotton. A native of the southwestern US, it traveled east and was first found resistant to glyphosate in 2006, less than 10 years after GM cotton was introduced. Plant protection Farmers generally use less insecticide when they plant Bt crops. Insecticide use on corn farms declined from 0.21 pound per planted acre in 1995 to 0.02 pound in 2010. This is consistent with the decline in European corn borer populations as a direct result of Bt corn and cotton. The establishment of minimum refuge requirements helped delay the evolution of Bt resistance. However, resistance appears to be developing to some Bt traits in some areas. In Colombia, GM cotton has reduced insecticide usage by 25% and herbicide usage by 5%, and GM corn has reduced insecticide and herbicide usage by 66% and 13%, respectively. Tillage By leaving at least 30% of crop residue on the soil surface from harvest through planting, conservation tillage reduces soil erosion from wind and water, increases water retention, and reduces soil degradation as well as water and chemical runoff. In addition, conservation tillage reduces the carbon footprint of agriculture. A 2014 review covering 12 states from 1996 to 2006 found that a 1% increase in herbicide-tolerant (HT) soybean adoption leads to a 0.21% increase in conservation tillage and a 0.3% decrease in quality-adjusted herbicide use. Greenhouse gas emissions Combined features of increased yield, decreased land use, reduced use of fertilizer and reduced farming machinery use create a feedback loop that reduces carbon emissions related to farming. These reductions have been estimated at 7.5% of total agricultural emissions in the EU, or 33 million tons of CO2, and an estimated 8.76 million tons of CO2 in Colombia. Drought tolerance The use of drought tolerant crops can increase yield in water-scarce locations, making farming possible in new areas. The adoption of drought tolerant maize in Ghana was shown to increase yield by more than 150% and boost commercialization intensity, although it did not significantly affect farm income.
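The dilution logic behind the high-dose/refuge strategy described above can be illustrated with a one-locus population-genetics sketch. This is a deliberately simplified model with hypothetical parameter values, not field data: it assumes Hardy–Weinberg mating, a fully recessive resistance allele, complete kill of susceptible genotypes on Bt plants, and no fitness cost of resistance.

```python
def next_gen_freq(q: float, refuge: float) -> float:
    """One generation of selection on a recessive resistance allele.

    q: current frequency of the resistance allele (r).
    refuge: fraction of acreage planted with non-Bt refuge seed.
    Assumes r/r survives everywhere, while R/r and R/R survive only in the
    refuge (the 'high dose' kills all carriers of the susceptible allele
    on Bt plants).
    """
    p = 1.0 - q
    rr = q * q               # resistant homozygotes: survive everywhere
    Rr = 2 * p * q * refuge  # heterozygotes: survive only in the refuge
    RR = p * p * refuge      # susceptibles: survive only in the refuge
    return (rr + 0.5 * Rr) / (rr + Rr + RR)

q = 0.001  # hypothetical initial resistance allele frequency
for _ in range(20):
    q = next_gen_freq(q, refuge=0.2)
print(f"allele frequency after 20 generations with a 20% refuge: {q:.4f}")
# With refuge=0.0 the allele would fix in a single generation, since only
# r/r individuals would survive; the refuge keeps its frequency rising slowly.
```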
Regulation The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of genetically modified crops. There are differences in the regulation of GM crops between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of each product. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. Production In 2013, GM crops were planted in 27 countries; 19 were developing countries and 8 were developed countries. 2013 was the second year in which developing countries grew a majority (54%) of the total GM harvest. 18 million farmers grew GM crops; around 90% were small-holding farmers in developing countries. The United States Department of Agriculture (USDA) reports every year on the total area of GM crop varieties planted in the United States. According to the National Agricultural Statistics Service, the states published in these tables represent 81–86 percent of all corn planted area, 88–90 percent of all soybean planted area, and 81–93 percent of all upland cotton planted area (depending on the year). Global estimates are produced by the International Service for the Acquisition of Agri-biotech Applications (ISAAA) and can be found in their annual reports, "Global Status of Commercialized Transgenic Crops". Farmers have widely adopted GM technology. Between 1996 and 2013, the total surface area of land cultivated with GM crops increased by a factor of 100, from 17,000 km2 (4.2 million acres) to 1,750,000 km2 (432 million acres). 10% of the world's arable land was planted with GM crops in 2010. As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160 million hectares) in 29 countries, including the US, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, the Philippines, Myanmar, Burkina Faso, Mexico and Spain. One of the key reasons for this widespread adoption is the perceived economic benefit the technology brings to farmers. For example, the system of planting glyphosate-resistant seed and then applying glyphosate once plants emerged provided farmers with the opportunity to dramatically increase the yield from a given plot of land, since this allowed them to plant rows closer together. Without it, farmers had to plant rows far enough apart to control post-emergent weeds with mechanical tillage. Likewise, using Bt seeds means that farmers do not have to purchase insecticides, and then invest time, fuel, and equipment in applying them. However, critics have disputed whether yields are higher and whether chemical use is lower with GM crops; see the Genetically modified food controversies article for more information. In the US, by 2014, 94% of the planted area of soybeans, 96% of cotton and 93% of corn were genetically modified varieties. Genetically modified soybeans carried herbicide-tolerant traits only, but maize and cotton carried both herbicide tolerance and insect protection traits (the latter largely Bt protein). These constitute "input traits" that are aimed at financially benefiting the producers, but may have indirect environmental benefits and cost benefits to consumers. The Grocery Manufacturers of America estimated in 2003 that 70–75% of all processed foods in the U.S. contained a GM ingredient. As of 2024, the cultivation of genetically engineered crops is banned in 38 countries, while 9 countries have banned their import.
Europe grows relatively few genetically engineered crops, with the exception of Spain, where one fifth of maize is genetically engineered, and smaller amounts in five other countries. The EU had a 'de facto' ban on the approval of new GM crops from 1999 until 2004. GM crops are now regulated by the EU. Developing countries grew 54 percent of genetically engineered crops in 2013, and GM crops have expanded rapidly there in recent years; approximately 18 million farmers in developing countries grew GM crops that year. 2013's largest increase was in Brazil (403,000 km2 versus 368,000 km2 in 2012). GM cotton began growing in India in 2002, reaching 110,000 km2 in 2013. According to the 2013 ISAAA brief: "a total of 36 countries (35 + EU-28) have granted regulatory approvals for biotech crops for food and/or feed use and for environmental release or planting since 1994 ... a total of 2,833 regulatory approvals involving 27 GM crops and 336 GM events (NB: an "event" is a specific genetic modification in a specific species) have been issued by authorities, of which 1,321 are for food use (direct use or processing), 918 for feed use (direct use or processing) and 599 for environmental release or planting. Japan has the largest number (198), followed by the U.S.A. (165, not including "stacked" events), Canada (146), Mexico (131), South Korea (103), Australia (93), New Zealand (83), European Union (71, including approvals that have expired or are under renewal), Philippines (68), Taiwan (65), Colombia (59), China (55) and South Africa (52). Maize has the largest number (130 events in 27 countries), followed by cotton (49 events in 22 countries), potato (31 events in 10 countries), canola (30 events in 12 countries) and soybean (27 events in 26 countries)." Controversy Direct genetic engineering has been controversial since its introduction. Most, but not all, of the controversies are over GM foods rather than crops per se. GM foods are the subject of protests, vandalism, referendums, legislation, court action and scientific disputes. The controversies involve consumers, biotechnology companies, governmental regulators, non-governmental organizations and scientists. Opponents have objected to GM crops on multiple grounds, including environmental impacts, food safety, whether GM crops are needed to address food needs, whether they are sufficiently accessible to farmers in developing countries, concerns over subjecting crops to intellectual property law, and religious grounds. Secondary issues include labeling, the behavior of government regulators, the effects of pesticide use and pesticide tolerance. A significant environmental concern about using genetically modified crops is possible cross-breeding with related crops, giving them advantages over naturally occurring varieties. One example is a glyphosate-resistant rice crop that crossbreeds with a weedy relative, giving the weed a competitive advantage. The transgenic hybrid had higher rates of photosynthesis, more shoots and flowers, and more seeds than the non-transgenic hybrids. This demonstrates the possibility of ecosystem damage by GM crop usage. The role of biopiracy in the development of GM crops is also potentially problematic, as developed countries have gained economically by using the genetic resources of developing countries.
In the twentieth century, the International Rice Research Institute catalogued the genomes of almost 80,000 varieties of rice from Asian farms, which have since been used to create new higher-yielding varieties of rice. These new varieties generate almost US$655 million of economic gain for Australia, the USA, Canada, and New Zealand every year. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. No reports of ill effects from GM food have been documented in the human population. GM crop labeling is required in many countries, although the United States Food and Drug Administration does not require it, nor does it distinguish between approved GM and non-GM foods. The United States enacted a law that requires labeling regulations to be issued by July 2018. It allows indirect disclosure such as with a phone number, bar code, or web site. Advocacy groups such as the Center for Food Safety, the Union of Concerned Scientists, and Greenpeace claim that risks related to GM food have not been adequately examined and managed, that GM crops are not sufficiently tested and should be labelled, and that regulatory authorities and scientific bodies are too closely tied to industry. Some studies have claimed that genetically modified crops can cause harm; a 2016 review that reanalyzed the data from six of these studies found that their statistical methodologies were flawed and did not demonstrate harm, and said that conclusions about GM crop safety should be drawn from "the totality of the evidence ... instead of far-fetched evidence from single studies".
Technology
Agriculture_2
null
2292483
https://en.wikipedia.org/wiki/Degenerate%20energy%20levels
Degenerate energy levels
In quantum mechanics, an energy level is degenerate if it corresponds to two or more different measurable states of a quantum system. Conversely, two or more different states of a quantum mechanical system are said to be degenerate if they give the same value of energy upon measurement. The number of different states corresponding to a particular energy level is known as the degree of degeneracy (or simply the degeneracy) of the level. It is represented mathematically by the Hamiltonian for the system having more than one linearly independent eigenstate with the same energy eigenvalue. When this is the case, energy alone is not enough to characterize what state the system is in, and other quantum numbers are needed to characterize the exact state when distinction is desired. In classical mechanics, this can be understood in terms of different possible trajectories corresponding to the same energy. Degeneracy plays a fundamental role in quantum statistical mechanics. For an $N$-particle system in three dimensions, a single energy level may correspond to several different wave functions or energy states. These degenerate states at the same level all have an equal probability of being filled. The number of such states gives the degeneracy of a particular energy level. Mathematics The possible states of a quantum mechanical system may be treated mathematically as abstract vectors in a separable, complex Hilbert space, while the observables may be represented by linear Hermitian operators acting upon them. By selecting a suitable basis, the components of these vectors and the matrix elements of the operators in that basis may be determined. If $A$ is a matrix, $X$ a non-zero vector, and $\lambda$ a scalar such that $AX = \lambda X$, then the scalar $\lambda$ is said to be an eigenvalue of $A$ and the vector $X$ is said to be the eigenvector corresponding to $\lambda$. Together with the zero vector, the set of all eigenvectors corresponding to a given eigenvalue $\lambda$ form a subspace of $\mathbb{C}^n$, which is called the eigenspace of $\lambda$. An eigenvalue $\lambda$ which corresponds to two or more different linearly independent eigenvectors is said to be degenerate, i.e., $AX_1 = \lambda X_1$ and $AX_2 = \lambda X_2$, where $X_1$ and $X_2$ are linearly independent eigenvectors. The dimension of the eigenspace corresponding to that eigenvalue is known as its degree of degeneracy, which can be finite or infinite. An eigenvalue is said to be non-degenerate if its eigenspace is one-dimensional. The eigenvalues of the matrices representing physical observables in quantum mechanics give the measurable values of these observables, while the eigenstates corresponding to these eigenvalues give the possible states in which the system may be found upon measurement. The measurable values of the energy of a quantum system are given by the eigenvalues of the Hamiltonian operator, while its eigenstates give the possible energy states of the system. A value of energy is said to be degenerate if there exist at least two linearly independent energy states associated with it. Moreover, any linear combination of two or more degenerate eigenstates is also an eigenstate of the Hamiltonian operator corresponding to the same energy eigenvalue. This clearly follows from the fact that the eigenspace of the energy eigenvalue $E$ is a subspace (being the kernel of $\hat{H} - E\hat{I}$, the Hamiltonian minus $E$ times the identity), hence it is closed under linear combinations.
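As a concrete illustration of these definitions, the following short Python sketch diagonalizes a small Hermitian matrix and reads off the degeneracy of each eigenvalue; the matrix itself is hypothetical, chosen only so that one eigenvalue is two-fold degenerate.

```python
import numpy as np

# Hypothetical 3x3 Hermitian matrix standing in for a Hamiltonian; its
# eigenvalues are 1 (two-fold degenerate) and 3.
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eigh(H)  # eigh is for Hermitian matrices

# Group numerically equal eigenvalues to obtain each level's degeneracy,
# i.e. the dimension of its eigenspace.
values, counts = np.unique(np.round(eigenvalues, 10), return_counts=True)
for E, g in zip(values, counts):
    print(f"E = {E}: degree of degeneracy g = {g}")
# Any linear combination of the two eigenvectors with E = 1 is again an
# eigenvector with E = 1, as the closure argument above states.
```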
Effect of degeneracy on the measurement of energy In the absence of degeneracy, if a measured value of energy of a quantum system is determined, the corresponding state of the system is assumed to be known, since only one eigenstate corresponds to each energy eigenvalue. However, if the Hamiltonian has a degenerate eigenvalue $E_n$ of degree $g_n$, the eigenstates associated with it form a vector subspace of dimension $g_n$. In such a case, several final states can be possibly associated with the same result $E_n$, all of which are linear combinations of the $g_n$ orthonormal eigenvectors $|E_{n,i}\rangle$. In this case, the probability that the energy value measured for a system in the state $|\psi\rangle$ will yield the value $E_n$ is given by the sum of the probabilities of finding the system in each of the states in this basis, i.e. $P(E_n) = \sum_{i=1}^{g_n} |\langle E_{n,i}|\psi\rangle|^2$. Degeneracy in different dimensions This section intends to illustrate the existence of degenerate energy levels in quantum systems studied in different dimensions. The study of one- and two-dimensional systems aids the conceptual understanding of more complex systems. Degeneracy in one dimension In several cases, analytic results can be obtained more easily in the study of one-dimensional systems. For a quantum particle with a wave function $\psi(x)$ moving in a one-dimensional potential $V(x)$, the time-independent Schrödinger equation can be written as $-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\psi = E\psi$. Since this is an ordinary differential equation, there are at most two independent eigenfunctions for a given energy, so that the degree of degeneracy never exceeds two. It can be proven that in one dimension, there are no degenerate bound states for normalizable wave functions. A sufficient condition on a piecewise continuous potential $V$ and the energy $E$ is the existence of two real numbers $M$ and $x_0$ with $M \neq 0$ such that for $x \geq x_0$ we have $V(x) - E \geq M^2$. In particular, $V$ is bounded below in this criterion. Proof of the above theorem: Consider a one-dimensional quantum system in a potential $V(x)$ with degenerate states $\psi_1$ and $\psi_2$ corresponding to the same energy eigenvalue $E$, and write the time-independent Schrödinger equation for the system: $-\frac{\hbar^2}{2m}\psi_1'' + V\psi_1 = E\psi_1$ and $-\frac{\hbar^2}{2m}\psi_2'' + V\psi_2 = E\psi_2$. Multiplying the first equation by $\psi_2$ and the second by $\psi_1$ and subtracting one from the other, we get $\psi_2\psi_1'' - \psi_1\psi_2'' = 0$. Integrating both sides gives $\psi_2\psi_1' - \psi_1\psi_2' = \text{constant}$. In case of well-defined and normalizable wave functions, the above constant vanishes, provided both the wave functions vanish at at least one point, and we find $\psi_1(x) = c\,\psi_2(x)$, where $c$ is, in general, a complex constant. For bound state eigenfunctions (which tend to zero as $x \to \infty$), and assuming $V$ and $E$ satisfy the condition given above, it can be shown that the first derivative of the wave function also approaches zero in the limit $x \to \infty$, so that the above constant is zero and we have no degeneracy. Degeneracy in two-dimensional quantum systems Two-dimensional quantum systems exist in all three states of matter and much of the variety seen in three-dimensional matter can be created in two dimensions. Real two-dimensional materials are made of monoatomic layers on the surface of solids. Some examples of two-dimensional electron systems achieved experimentally include the MOSFET, two-dimensional superlattices of helium, neon, argon, xenon etc. and the surface of liquid helium. The presence of degenerate energy levels is studied in the cases of a particle in a box and the two-dimensional harmonic oscillator, which act as useful mathematical models for several real-world systems. Particle in a rectangular plane Consider a free particle in a plane of dimensions $L_x$ and $L_y$ bounded by impenetrable walls.
The time-independent Schrödinger equation for this system with wave function $\psi(x,y)$ can be written as $-\frac{\hbar^2}{2m}\left(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2}\right) = E\psi$. The permitted energy values are $E_{n_x,n_y} = \frac{\pi^2\hbar^2}{2m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2}\right)$. The normalized wave function is $\psi_{n_x,n_y}(x,y) = \frac{2}{\sqrt{L_x L_y}}\sin\left(\frac{n_x\pi x}{L_x}\right)\sin\left(\frac{n_y\pi y}{L_y}\right)$, where $n_x, n_y = 1, 2, 3, \dots$. So, the two quantum numbers $n_x$ and $n_y$ are required to describe the energy eigenvalues, and the lowest energy of the system is given by $E_{1,1} = \frac{\pi^2\hbar^2}{2m}\left(\frac{1}{L_x^2} + \frac{1}{L_y^2}\right)$. For some commensurate ratios of the two lengths $L_x$ and $L_y$, certain pairs of states are degenerate. If $L_x/L_y = p/q$, where $p$ and $q$ are integers, the states $(n_x, n_y)$ and $(p n_y/q,\ q n_x/p)$ have the same energy and so are degenerate to each other. Particle in a square box In this case, the dimensions of the box are $L_x = L_y = L$ and the energy eigenvalues are given by $E_{n_x,n_y} = \frac{\pi^2\hbar^2}{2mL^2}\left(n_x^2 + n_y^2\right)$. Since $n_x$ and $n_y$ can be interchanged without changing the energy, each energy level has a degeneracy of at least two when $n_x$ and $n_y$ are different. Degenerate states are also obtained when the sums of squares of quantum numbers corresponding to different energy levels are the same. For example, the three states (nx = 7, ny = 1), (nx = 1, ny = 7) and (nx = ny = 5) all have $E = \frac{50\pi^2\hbar^2}{2mL^2}$ and constitute a degenerate set. Degrees of degeneracy of different energy levels for a particle in a square box: Particle in a cubic box In this case, the dimensions of the box are $L_x = L_y = L_z = L$ and the energy eigenvalues, $E_{n_x,n_y,n_z} = \frac{\pi^2\hbar^2}{2mL^2}\left(n_x^2 + n_y^2 + n_z^2\right)$, depend on three quantum numbers. Since $n_x$, $n_y$ and $n_z$ can be interchanged without changing the energy, each energy level has a degeneracy of at least three when the three quantum numbers are not all equal. Finding a unique eigenbasis in case of degeneracy If two operators $\hat{A}$ and $\hat{B}$ commute, i.e., $[\hat{A}, \hat{B}] = 0$, then for every eigenvector $|\psi\rangle$ of $\hat{A}$, $\hat{B}|\psi\rangle$ is also an eigenvector of $\hat{A}$ with the same eigenvalue. However, if this eigenvalue, say $\lambda$, is degenerate, it can only be said that $\hat{B}|\psi\rangle$ belongs to the eigenspace of $\lambda$, which is said to be globally invariant under the action of $\hat{B}$. For two commuting observables $\hat{A}$ and $\hat{B}$, one can construct an orthonormal basis of the state space with eigenvectors common to the two operators. However, if $\lambda$ is a degenerate eigenvalue of $\hat{A}$, then its eigensubspace is invariant under the action of $\hat{B}$, so the representation of $\hat{B}$ in the eigenbasis of $\hat{A}$ is not diagonal but a block-diagonal matrix, i.e. the degenerate eigenvectors of $\hat{A}$ are not, in general, eigenvectors of $\hat{B}$. However, it is always possible to choose, in every degenerate eigensubspace of $\hat{A}$, a basis of eigenvectors common to $\hat{A}$ and $\hat{B}$. Choosing a complete set of commuting observables If a given observable $\hat{A}$ is non-degenerate, there exists a unique basis formed by its eigenvectors. On the other hand, if one or several eigenvalues of $\hat{A}$ are degenerate, specifying an eigenvalue is not sufficient to characterize a basis vector. If, by choosing an observable $\hat{B}$ which commutes with $\hat{A}$, it is possible to construct an orthonormal basis of eigenvectors common to $\hat{A}$ and $\hat{B}$ which is unique for each of the possible pairs of eigenvalues {a,b}, then $\hat{A}$ and $\hat{B}$ are said to form a complete set of commuting observables. However, if a unique set of eigenvectors can still not be specified for at least one of the pairs of eigenvalues, a third observable $\hat{C}$, which commutes with both $\hat{A}$ and $\hat{B}$, can be found such that the three form a complete set of commuting observables. It follows that the eigenfunctions of the Hamiltonian of a quantum system with a common energy value must be labelled by giving some additional information, which can be done by choosing an operator that commutes with the Hamiltonian. These additional labels, required to name a unique energy eigenfunction, are usually related to the constants of motion of the system.
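The degeneracy counting for the square box can be verified by direct enumeration. The following small script (illustrative only, not from the source) groups states $(n_x, n_y)$ by the value of $n_x^2 + n_y^2$, which fixes the energy in units of $\frac{\pi^2\hbar^2}{2mL^2}$:

```python
from collections import defaultdict

# Group square-box states (nx, ny) by nx^2 + ny^2, which is proportional
# to the energy; the group size is the degree of degeneracy of that level.
levels = defaultdict(list)
for nx in range(1, 11):
    for ny in range(1, 11):
        levels[nx**2 + ny**2].append((nx, ny))

for value in sorted(levels)[:10]:
    states = levels[value]
    print(f"nx^2 + ny^2 = {value}: {states} (degeneracy {len(states)})")
# The level nx^2 + ny^2 = 50 collects (1, 7), (5, 5) and (7, 1): the
# three-fold degenerate set quoted above.
```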
Degenerate energy eigenstates and the parity operator The parity operator is defined by its action in the $|\mathbf{r}\rangle$ representation of changing $\mathbf{r}$ to $-\mathbf{r}$, i.e. $P|\mathbf{r}\rangle = |-\mathbf{r}\rangle$. The eigenvalues of $P$ can be shown to be limited to $\pm 1$, which are both degenerate eigenvalues in an infinite-dimensional state space. An eigenvector of $P$ with eigenvalue +1 is said to be even, while that with eigenvalue −1 is said to be odd. Now, an even operator $\hat{A}$ is one that satisfies $P\hat{A}P = \hat{A}$, while an odd operator $\hat{B}$ is one that satisfies $P\hat{B}P = -\hat{B}$. Since the square of the momentum operator $\hat{p}^2$ is even, if the potential $V(r)$ is even, the Hamiltonian is said to be an even operator. In that case, if each of its eigenvalues is non-degenerate, each eigenvector is necessarily an eigenstate of $P$, and therefore it is possible to look for the eigenstates of $\hat{H}$ among even and odd states. However, if one of the energy eigenstates $|\psi\rangle$ has no definite parity, it can be asserted that the corresponding eigenvalue is degenerate, and $P|\psi\rangle$ is an eigenvector of $\hat{H}$ with the same eigenvalue as $|\psi\rangle$. Degeneracy and symmetry The physical origin of degeneracy in a quantum-mechanical system is often the presence of some symmetry in the system. Studying the symmetry of a quantum system can, in some cases, enable us to find the energy levels and degeneracies without solving the Schrödinger equation, hence reducing effort. Mathematically, the relation of degeneracy with symmetry can be clarified as follows. Consider a symmetry operation associated with a unitary operator $S$. Under such an operation, the new Hamiltonian is related to the original Hamiltonian by a similarity transformation generated by the operator $S$, such that $H' = SHS^{-1} = SHS^{\dagger}$, since $S$ is unitary. If the Hamiltonian remains unchanged under the transformation operation $S$, we have $[S, H] = 0$. Now, if $|\alpha\rangle$ is an energy eigenstate, $H|\alpha\rangle = E|\alpha\rangle$, where $E$ is the corresponding energy eigenvalue. Then $HS|\alpha\rangle = SH|\alpha\rangle = ES|\alpha\rangle$, which means that $S|\alpha\rangle$ is also an energy eigenstate with the same eigenvalue $E$. If the two states $|\alpha\rangle$ and $S|\alpha\rangle$ are linearly independent (i.e. physically distinct), they are therefore degenerate. In cases where $S$ is characterized by a continuous parameter $\epsilon$, all states of the form $S(\epsilon)|\alpha\rangle$ have the same energy eigenvalue. Symmetry group of the Hamiltonian The set of all operators which commute with the Hamiltonian of a quantum system are said to form the symmetry group of the Hamiltonian. The commutators of the generators of this group determine the algebra of the group. An n-dimensional representation of the symmetry group preserves the multiplication table of the symmetry operators. The possible degeneracies of the Hamiltonian with a particular symmetry group are given by the dimensionalities of the irreducible representations of the group. The eigenfunctions corresponding to an n-fold degenerate eigenvalue form a basis for an n-dimensional irreducible representation of the symmetry group of the Hamiltonian. Types of degeneracy Degeneracies in a quantum system can be systematic or accidental in nature. Systematic or essential degeneracy This is also called a geometrical or normal degeneracy and arises due to the presence of some kind of symmetry in the system under consideration, i.e. the invariance of the Hamiltonian under a certain operation, as described above. The representation obtained from a normal degeneracy is irreducible and the corresponding eigenfunctions form a basis for this representation. Accidental degeneracy It is a type of degeneracy resulting from some special features of the system or the functional form of the potential under consideration, and is related possibly to a hidden dynamical symmetry in the system.
It also results in conserved quantities, which are often not easy to identify. Accidental symmetries lead to these additional degeneracies in the discrete energy spectrum. An accidental degeneracy can be due to the fact that the group of the Hamiltonian is not complete. These degeneracies are connected to the existence of bound orbits in classical physics. Examples: Coulomb and harmonic oscillator potentials For a particle in a central $1/r$ potential, the Laplace–Runge–Lenz vector is a conserved quantity resulting from an accidental degeneracy, in addition to the conservation of angular momentum due to rotational invariance. For a particle moving on a cone under the influence of $1/r$ and $r^2$ potentials, centred at the tip of the cone, the conserved quantities corresponding to accidental symmetry will be two components of an equivalent of the Runge–Lenz vector, in addition to one component of the angular momentum vector. These quantities generate SU(2) symmetry for both potentials. Example: Particle in a constant magnetic field A particle moving under the influence of a constant magnetic field, undergoing cyclotron motion on a circular orbit, is another important example of an accidental symmetry. The symmetry multiplets in this case are the Landau levels, which are infinitely degenerate. Examples The hydrogen atom In atomic physics, the bound states of an electron in a hydrogen atom show us useful examples of degeneracy. In this case, the Hamiltonian commutes with the total orbital angular momentum $\hat{L}^2$, its component along the z-direction $\hat{L}_z$, the total spin angular momentum $\hat{S}^2$ and its z-component $\hat{S}_z$. The quantum numbers corresponding to these operators are $l$, $m_l$, $s$ (always 1/2 for an electron) and $m_s$ respectively. The energy levels in the hydrogen atom depend only on the principal quantum number $n$. For a given $n$, all the states corresponding to $l = 0, \dots, n-1$ have the same energy and are degenerate. Similarly, for given values of $n$ and $l$, the $(2l+1)$ states with $m_l = -l, \dots, l$ are degenerate. The degree of degeneracy of the energy level $E_n$ is therefore $\sum_{l=0}^{n-1}(2l+1) = n^2$, which is doubled if the spin degeneracy is included. The degeneracy with respect to $m_l$ is an essential degeneracy which is present for any central potential, and arises from the absence of a preferred spatial direction. The degeneracy with respect to $l$ is often described as an accidental degeneracy, but it can be explained in terms of special symmetries of the Schrödinger equation which are only valid for the hydrogen atom, in which the potential energy is given by Coulomb's law. Isotropic three-dimensional harmonic oscillator This is a spinless particle of mass $m$ moving in three-dimensional space, subject to a central force whose absolute value is proportional to the distance of the particle from the centre of force. It is said to be isotropic since the potential $V(r) = \frac{1}{2}m\omega^2 r^2$ acting on it is rotationally invariant, where $\omega$ is the angular frequency given by $\sqrt{k/m}$. Since the state space of such a particle is the tensor product of the state spaces associated with the individual one-dimensional wave functions, the time-independent Schrödinger equation for such a system is given by $-\frac{\hbar^2}{2m}\nabla^2\psi + \frac{1}{2}m\omega^2\left(x^2 + y^2 + z^2\right)\psi = E\psi$. So, the energy eigenvalues are $E_{n_x,n_y,n_z} = \left(n_x + n_y + n_z + \frac{3}{2}\right)\hbar\omega$, or $E_n = \left(n + \frac{3}{2}\right)\hbar\omega$, where $n = n_x + n_y + n_z$ is a non-negative integer. So, the energy levels are degenerate and the degree of degeneracy is equal to the number of different sets $(n_x, n_y, n_z)$ satisfying $n_x + n_y + n_z = n$. The degeneracy of the $n$-th state can be found by considering the distribution of $n$ quanta across $n_x$, $n_y$ and $n_z$. Having 0 quanta in $n_x$ gives $n+1$ possibilities for the distribution across $n_y$ and $n_z$. Having 1 quantum in $n_x$ gives $n$ possibilities across $n_y$ and $n_z$, and so on.
This leads to the general result of $n - n_x + 1$ possibilities for a given $n_x$, and summing over all $n_x$ leads to the degeneracy of the $n$-th state, $\sum_{n_x=0}^{n}(n - n_x + 1) = \frac{(n+1)(n+2)}{2}$. For the ground state $n = 0$, the degeneracy is $1$, so the state is non-degenerate. For all higher states, the degeneracy is greater than 1, so the state is degenerate. Removing degeneracy The degeneracy in a quantum mechanical system may be removed if the underlying symmetry is broken by an external perturbation. This causes splitting in the degenerate energy levels. This is essentially a splitting of the original irreducible representations into lower-dimensional such representations of the perturbed system. Mathematically, the splitting due to the application of a small perturbation potential can be calculated using time-independent degenerate perturbation theory. This is an approximation scheme that can be applied to find the solution to the eigenvalue equation for the Hamiltonian H of a quantum system with an applied perturbation, given the solution for the Hamiltonian H0 for the unperturbed system. It involves expanding the eigenvalues and eigenkets of the Hamiltonian H in a perturbation series. The degenerate eigenstates with a given energy eigenvalue form a vector subspace, but not every basis of eigenstates of this space is a good starting point for perturbation theory, because typically there would not be any eigenstates of the perturbed system near them. The correct basis to choose is one that diagonalizes the perturbation Hamiltonian within the degenerate subspace. Physical examples of removal of degeneracy by a perturbation Some important examples of physical situations where degenerate energy levels of a quantum system are split by the application of an external perturbation are given below. Symmetry breaking in two-level systems A two-level system essentially refers to a physical system having two states whose energies are close together and very different from those of the other states of the system. All calculations for such a system are performed on a two-dimensional subspace of the state space. If the ground state of a physical system is two-fold degenerate, any coupling between the two corresponding states lowers the energy of the ground state of the system, and makes it more stable. If $E_1$ and $E_2$ are the energy levels of the system, such that $E_1 = E_2 = E$, and the perturbation $W$ is represented in the two-dimensional subspace as the following 2×2 matrix $W = \begin{pmatrix} 0 & W_{12} \\ W_{12}^{*} & 0 \end{pmatrix}$, then the perturbed energies are $E_{\pm} = E \pm |W_{12}|$. Examples of two-state systems in which the degeneracy in energy states is broken by the presence of off-diagonal terms in the Hamiltonian resulting from an internal interaction due to an inherent property of the system include: benzene, with two possible dispositions of the three double bonds between neighbouring carbon atoms; the ammonia molecule, where the nitrogen atom can be either above or below the plane defined by the three hydrogen atoms; and the $\mathrm{H}_2^+$ molecular ion, in which the electron may be localized around either of the two nuclei. Fine-structure splitting The corrections to the Coulomb interaction between the electron and the proton in a hydrogen atom due to relativistic motion and spin–orbit coupling result in breaking the degeneracy in energy levels for different values of $l$ corresponding to a single principal quantum number $n$. The perturbation Hamiltonian due to the relativistic correction is given by $H' = -\frac{p^4}{8m^3c^2}$, where $p$ is the momentum operator and $m$ is the mass of the electron. The first-order relativistic energy correction in the $|n, l, m_l\rangle$ basis is given by $E' = -\frac{1}{8m^3c^2}\langle p^4 \rangle$. Now $E' = -\frac{E_n^2}{2mc^2}\left(\frac{4n}{l + 1/2} - 3\right) = -\frac{1}{8}mc^2\alpha^4\frac{1}{n^4}\left(\frac{4n}{l + 1/2} - 3\right)$, where $\alpha$ is the fine structure constant.
The spin–orbit interaction refers to the interaction between the intrinsic magnetic moment of the electron and the magnetic field it experiences due to its relative motion with respect to the proton. The interaction Hamiltonian is $H_{SO} = \frac{e^2}{8\pi\varepsilon_0 m^2 c^2 r^3}\,\mathbf{L}\cdot\mathbf{S}$, which may be written as $H_{SO} = \frac{e^2}{16\pi\varepsilon_0 m^2 c^2 r^3}\left(\mathbf{J}^2 - \mathbf{L}^2 - \mathbf{S}^2\right)$. The first-order energy correction, in the $|j, m_j, l, s\rangle$ basis where the perturbation Hamiltonian is diagonal, is given by $E_{SO} = \frac{\hbar^2 e^2}{16\pi\varepsilon_0 m^2 c^2}\,\frac{j(j+1) - l(l+1) - \frac{3}{4}}{n^3 a_0^3\, l\left(l + \frac{1}{2}\right)(l+1)}$, where $a_0$ is the Bohr radius. The total fine-structure energy shift is given by $\Delta E_{fs} = -\frac{E_n^2}{2mc^2}\left(\frac{4n}{j + 1/2} - 3\right)$ for $j = l \pm \frac{1}{2}$. Zeeman effect The splitting of the energy levels of an atom when placed in an external magnetic field, because of the interaction of the magnetic moment of the atom with the applied field, is known as the Zeeman effect. Taking into consideration the orbital and spin angular momenta, $\mathbf{L}$ and $\mathbf{S}$, respectively, of a single electron in the hydrogen atom, the perturbation Hamiltonian is given by $\hat{V} = -(\boldsymbol{\mu}_l + \boldsymbol{\mu}_s)\cdot\mathbf{B}$, where $\boldsymbol{\mu}_l = -\frac{e\mathbf{L}}{2m}$ and $\boldsymbol{\mu}_s = -\frac{e\mathbf{S}}{m}$. Thus, $\hat{V} = \frac{e}{2m}(\mathbf{L} + 2\mathbf{S})\cdot\mathbf{B}$. Now, in the case of the weak-field Zeeman effect, when the applied field is weak compared to the internal field, the spin–orbit coupling dominates and $\mathbf{L}$ and $\mathbf{S}$ are not separately conserved. The good quantum numbers are $n$, $l$, $j$ and $m_j$, and in this basis, the first-order energy correction can be shown to be given by $E_Z = \mu_B g_j B m_j$, where $\mu_B = \frac{e\hbar}{2m}$ is called the Bohr magneton and $g_j$ is the Landé g-factor. Thus, depending on the value of $m_j$, each degenerate energy level splits into several levels. In the case of the strong-field Zeeman effect, when the applied field is strong enough that the orbital and spin angular momenta decouple, the good quantum numbers are now $n$, $l$, $m_l$, and $m_s$. Here, $L_z$ and $S_z$ are conserved, so the perturbation Hamiltonian is given by $\hat{V} = \frac{eB}{2m}(L_z + 2S_z)$, assuming the magnetic field to be along the z-direction. So, $E_Z = \frac{e\hbar B}{2m}(m_l + 2m_s)$. For each value of $m_l$, there are two possible values of $m_s = \pm\frac{1}{2}$. Stark effect The splitting of the energy levels of an atom or molecule when subjected to an external electric field is known as the Stark effect. For the hydrogen atom, the perturbation Hamiltonian is $\hat{H}' = -|e|\mathcal{E}z$ if the electric field $\mathcal{E}$ is chosen along the z-direction. The energy corrections due to the applied field are given by the expectation value of $\hat{H}'$ in the $|n, l, m_l\rangle$ basis. It can be shown by the selection rules that $\langle n, l, m_l|z|n_1, l_1, m_{l1}\rangle \neq 0$ only when $l_1 = l \pm 1$ and $m_{l1} = m_l$. The degeneracy is lifted only for certain states obeying the selection rules, in the first order. The first-order splitting in the energy levels for the degenerate states $|2, 0, 0\rangle$ and $|2, 1, 0\rangle$, both corresponding to $n = 2$, is given by $\Delta E = \pm 3|e|\mathcal{E}a_0$.
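The two counting results used above, $n^2$ for the hydrogen levels and $(n+1)(n+2)/2$ for the isotropic oscillator, are easy to check by brute-force enumeration. A short illustrative Python sketch (the helper names are ours, not from the source):

```python
# Verify the degeneracy counts quoted above by direct enumeration.

def hydrogen_degeneracy(n: int) -> int:
    # Sum of (2l + 1) over l = 0 .. n-1; expected to equal n^2.
    return sum(2 * l + 1 for l in range(n))

def oscillator_degeneracy(n: int) -> int:
    # Count triples (nx, ny, nz) of non-negative integers with
    # nx + ny + nz = n; expected to equal (n+1)(n+2)/2.
    return sum(1 for nx in range(n + 1)
                 for ny in range(n + 1 - nx)
                 for nz in [n - nx - ny])

for n in range(1, 6):
    assert hydrogen_degeneracy(n) == n ** 2
for n in range(0, 6):
    assert oscillator_degeneracy(n) == (n + 1) * (n + 2) // 2
print("degeneracy formulas verified for small n")
```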
Physical sciences
Atomic physics
Physics
2292624
https://en.wikipedia.org/wiki/Continuous%20stirred-tank%20reactor
Continuous stirred-tank reactor
The continuous stirred-tank reactor (CSTR), also known as a vat or backmix reactor, mixed flow reactor (MFR), or continuous-flow stirred-tank reactor (CFSTR), is a common model for a chemical reactor in chemical engineering and environmental engineering. A CSTR often refers to a model used to estimate the key unit operation variables when using a continuous agitated-tank reactor to reach a specified output. The mathematical model works for all fluids: liquids, gases, and slurries. The behavior of a CSTR is often approximated or modeled by that of an ideal CSTR, which assumes perfect mixing. In a perfectly mixed reactor, reagent is instantaneously and uniformly mixed throughout the reactor upon entry. Consequently, the output composition is identical to the composition of the material inside the reactor, which is a function of residence time and reaction rate. The CSTR is the ideal limit of complete mixing in reactor design, which is the complete opposite of a plug flow reactor (PFR). In practice, no reactors behave ideally but instead fall somewhere in between the mixing limits of an ideal CSTR and PFR. Ideal CSTR Modeling A continuous fluid flow containing non-conservative chemical reactant A enters an ideal CSTR of volume V. Assumptions: perfect or ideal mixing; steady state ($\frac{dN_A}{dt} = 0$, where $N_A$ is the number of moles of species A); closed boundaries; constant fluid density (valid for most liquids; valid for gases only if there is no net change in the number of moles or drastic temperature change); nth-order reaction ($r_A = kC_A^n$, where $k$ is the reaction rate constant, $C_A$ is the concentration of species A, and $n$ is the order of the reaction); isothermal conditions, or constant temperature ($k$ is constant); single, irreversible reaction ($\nu_A = -1$); and all reactant A converted to products via chemical reaction, with $N_A = C_A V$. The integral mass balance on the number of moles $N_A$ of species A in a reactor of volume V is $\frac{dN_A}{dt} = F_{Ao} - F_A + V\nu_A r_A$ (Equation 2), where $F_{Ao}$ is the inlet molar flow rate of species A, $F_A$ is the outlet molar flow rate of species A, $\nu_A$ is the stoichiometric coefficient and $r_A$ is the reaction rate. Applying the assumptions of steady state and $\nu_A = -1$, Equation 2 simplifies to $0 = F_{Ao} - F_A - V r_A$. The molar flow rates of species A can then be rewritten in terms of the concentration of A and the fluid flow rate ($Q$), giving Equation 4: $0 = QC_{Ao} - QC_A - V r_A$. Equation 4 can then be rearranged to isolate $r_A$ and simplified, giving Equation 5: $r_A = \frac{C_{Ao} - C_A}{\tau}$, where $\tau$ is the theoretical residence time ($\tau = \frac{V}{Q}$), $C_{Ao}$ is the inlet concentration of species A and $C_A$ is the reactor/outlet concentration of species A. Residence time is the total amount of time a discrete quantity of reagent spends inside the reactor. For an ideal reactor, the theoretical residence time, $\tau$, is always equal to the reactor volume divided by the fluid flow rate. See the next section for a more in-depth discussion on the residence time distribution of a CSTR. Depending on the order of the reaction, the reaction rate, $r_A$, is generally dependent on the concentration of species A in the reactor and the rate constant. A key assumption when modeling a CSTR is that any reactant in the fluid is perfectly (i.e. uniformly) mixed in the reactor, implying that the concentration within the reactor is the same in the outlet stream. The rate constant can be determined using a known empirical reaction rate that is adjusted for temperature using the Arrhenius temperature dependence. Generally, as the temperature increases so does the rate at which the reaction occurs. Substituting the rate expression into Equation 5 gives Equation 6, $\frac{C_{Ao} - C_A}{\tau} = kC_A^n$, which can then be solved for the outlet concentration.
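Equation 6 is a single nonlinear algebraic equation in $C_A$ and is straightforward to solve numerically for any reaction order. The sketch below uses hypothetical parameter values (all variable names are ours, not from the source) and checks the first-order closed form $C_A = C_{Ao}/(1 + k\tau)$:

```python
from scipy.optimize import brentq

C_Ao = 2.0  # inlet concentration of A (e.g. mol/L), hypothetical
tau = 5.0   # theoretical residence time V/Q (e.g. min), hypothetical
k = 0.3     # rate constant; units depend on the reaction order n

def outlet_concentration(n: float) -> float:
    """Solve (C_Ao - C_A)/tau = k*C_A**n for C_A on (0, C_Ao]."""
    f = lambda C_A: C_Ao - C_A - tau * k * C_A ** n
    return brentq(f, 1e-12, C_Ao)  # f changes sign on this bracket

# First-order reactions have the closed form C_A = C_Ao / (1 + k*tau):
assert abs(outlet_concentration(1) - C_Ao / (1 + k * tau)) < 1e-9
print("n = 1:", outlet_concentration(1))
print("n = 2:", outlet_concentration(2))
```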
For an ideal CSTR, the outlet concentration of species A depends on the reaction order; for a first-order reaction, for example, Equation 6 gives $C_A = \frac{C_{Ao}}{1 + k\tau}$. The values of the outlet concentration and residence time are major design criteria in the design of CSTRs for industrial applications. Residence time distribution An ideal CSTR will exhibit well-defined flow behavior that can be characterized by the reactor's residence time distribution, or exit age distribution. Not all fluid particles will spend the same amount of time within the reactor. The exit age distribution (E(t)) defines the probability that a given fluid particle will spend time t in the reactor. Similarly, the cumulative age distribution (F(t)) gives the probability that a given fluid particle has an exit age less than time t. One of the key takeaways from the exit age distribution is that a very small number of fluid particles will never exit the CSTR. Depending on the application of the reactor, this may either be an asset or a drawback. Non-ideal CSTR While the ideal CSTR model is useful for predicting the fate of constituents during a chemical or biological process, CSTRs rarely exhibit ideal behavior in reality. More commonly, the reactor hydraulics do not behave ideally or the system conditions do not obey the initial assumptions. Perfect mixing is a theoretical concept that is not achievable in practice. For engineering purposes, however, if the residence time is 5–10 times the mixing time, the perfect mixing assumption generally holds true. Non-ideal hydraulic behavior is commonly classified by either dead space or short-circuiting. These phenomena occur when some fluid spends less time in the reactor than the theoretical residence time, $\tau$. The presence of corners or baffles in a reactor often results in some dead space where the fluid is poorly mixed. Similarly, a jet of fluid in the reactor can cause short-circuiting, in which a portion of the flow exits the reactor much more quickly than the bulk fluid. If dead space or short-circuiting occurs in a CSTR, the relevant chemical or biological reactions may not finish before the fluid exits the reactor. Any deviation from ideal flow will result in a residence time distribution different from the ideal distribution. Modeling non-ideal flow Although ideal flow reactors are seldom found in practice, they are useful tools for modeling non-ideal flow reactors. Any flow regime can be achieved by modeling a reactor as a combination of ideal CSTRs and plug flow reactors (PFRs) either in series or in parallel. For example, an infinite series of ideal CSTRs is hydraulically equivalent to an ideal PFR. Reactor models combining a number of CSTRs in series are often termed tanks-in-series (TIS) models. To model systems that do not obey the assumptions of constant temperature and a single reaction, additional dependent variables must be considered. If the system is considered to be in unsteady state, a differential equation or a system of coupled differential equations must be solved. Deviations of CSTR behavior can be considered with the dispersion model. CSTRs are known to be one of the systems which exhibit complex behavior such as steady-state multiplicity, limit cycles, and chaos. Cascades of CSTRs Cascades of CSTRs, also known as a series of CSTRs, are used to decrease the volume of a system. Minimizing volume As seen in the graph with one CSTR, where the inverse rate is plotted as a function of fractional conversion, the area in the box is equal to $\frac{V}{F_{Ao}}$, where V is the total reactor volume and $F_{Ao}$ is the molar flow rate of the feed.
When the same process is applied to a cascade of CSTRs, as seen in the graph with three CSTRs, the volume of each reactor is calculated from each inlet and outlet fractional conversion, therefore resulting in a decrease in total reactor volume. Optimum size is achieved when the area above the rectangles from the CSTRs in series that was previously covered by a single CSTR is maximized. For a first-order reaction with two CSTRs, equal volumes should be used. As the number of ideal CSTRs (n) approaches infinity, the total reactor volume approaches that of an ideal PFR for the same reaction and fractional conversion. Ideal cascade of CSTRs From the design equation of a single CSTR, where $\tau = \frac{C_{Ao} - C_A}{-r_A}$, we can determine that for the $i$-th CSTR in series $\tau_i = \frac{C_{A,i-1} - C_{A,i}}{-r_{A,i}}$, where $\tau$ is the space time of the reactor, $C_{Ao}$ is the feed concentration of A, $C_A$ is the outlet concentration of A, and $r_A$ is the rate of reaction of A. First order For an isothermal first-order, constant-density reaction in a cascade of identical CSTRs operating at steady state: for one CSTR, $C_{A1} = \frac{C_{Ao}}{1 + k\tau}$, where $k$ is the rate constant and $C_{A1}$ is the outlet concentration of A from the first CSTR; for two CSTRs, $C_{A1} = \frac{C_{Ao}}{1 + k\tau}$ and $C_{A2} = \frac{C_{A1}}{1 + k\tau}$. Plugging the first CSTR equation into the second: $C_{A2} = \frac{C_{Ao}}{(1 + k\tau)^2}$. Therefore, for m identical CSTRs in series: $C_{Am} = \frac{C_{Ao}}{(1 + k\tau)^m}$. When the volumes of the individual CSTRs in series vary, the order of the CSTRs does not change the overall conversion for a first-order reaction as long as the CSTRs are run at the same temperature. Zeroth order At steady state, the general equation for an isothermal zeroth-order reaction in a cascade of CSTRs is given by $C_{Am} = C_{Ao} - k\sum_{i=1}^{m}\tau_i$. When the cascade of CSTRs is isothermal with identical reactors, the concentration is given by $C_{Am} = C_{Ao} - mk\tau$. Second order For an isothermal second-order reaction at steady state in a cascade of CSTRs, the general design equation is $C_{A,i} = \frac{-1 + \sqrt{1 + 4k\tau_i C_{A,i-1}}}{2k\tau_i}$. Non-ideal cascade of CSTRs With non-ideal reactors, residence time distributions can be calculated. At time t after a pulse input, the concentration at the jth reactor in series is given by $C_j = C_o\,e^{-t/\tau_i}\,\frac{1}{(j-1)!}\left(\frac{t}{\tau_i}\right)^{j-1}$, where n is the total number of CSTRs in series and $\tau_i = \bar{\tau}/n$ is the residence time of a single reactor; $\bar{\tau}$ is the average residence time of the cascade, given by $\bar{\tau} = \frac{V}{Q}$, where Q is the volumetric flow rate. From this, the cumulative residence time distribution (F(t)) can be calculated as $F(t) = 1 - e^{-t/\tau_i}\sum_{j=1}^{n}\frac{1}{(j-1)!}\left(\frac{t}{\tau_i}\right)^{j-1}$. As n → ∞, F(t) approaches the ideal PFR response. The variance associated with F(t) for a pulse stimulus into a cascade of CSTRs is $\sigma^2 = \frac{\bar{\tau}^2}{n}$. Cost When determining the cost of a series of CSTRs, capital and operating costs must be taken into account. As seen above, an increase in the number of CSTRs in series will decrease the total reactor volume. Since cost scales with volume, capital costs are lowered by increasing the number of CSTRs. The largest decrease in cost, and therefore volume, occurs between a single CSTR and having two CSTRs in series. When considering operating cost, operating cost scales with the number of pumps and controls, construction, installation, and maintenance that accompany larger cascades. Therefore, as the number of CSTRs increases, the operating cost increases, and so there is a minimum total cost associated with a cascade of CSTRs. Zeroth order reactions From a rearrangement of the equation given for identical isothermal CSTRs running a zeroth-order reaction, $\tau = \frac{C_{Ao} - C_{Am}}{mk}$, the volume of each individual CSTR scales with $\frac{1}{m}$. Therefore the total reactor volume is independent of the number of CSTRs for a zeroth-order reaction, so cost is not a function of the number of reactors for a zeroth-order reaction and does not decrease as the number of CSTRs increases.
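The first-order cascade formula above also shows the approach to plug flow as the number of tanks grows. A minimal sketch with hypothetical values, holding the total residence time fixed so the cascade can be compared with the PFR limit:

```python
import math

# Outlet concentration of a first-order reaction through m identical CSTRs,
# C_Am = C_Ao/(1 + k*tau_i)**m, compared with the PFR result C_Ao*exp(-k*tau).
C_Ao, k, tau_total = 1.0, 1.0, 2.0  # hypothetical values

for m in (1, 2, 5, 20, 100):
    tau_i = tau_total / m  # residence time of each reactor in the cascade
    C_Am = C_Ao / (1 + k * tau_i) ** m
    print(f"m = {m:3d}: C_Am = {C_Am:.4f}")

print(f"PFR    : C_A  = {C_Ao * math.exp(-k * tau_total):.4f}")
# As m grows, C_Am approaches C_Ao*exp(-k*tau_total), the ideal PFR result,
# consistent with an infinite series of CSTRs being equivalent to a PFR.
```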
Selectivity of parallel reactions When considering parallel reactions, utilizing a cascade of CSTRs can achieve greater selectivity for a desired product. For a given pair of parallel reactions A → B and A → C with rate constants $k_1$ and $k_2$ and rate equations $r_B = k_1 C_A^{a_1}$ and $r_C = k_2 C_A^{a_2}$, respectively, we can obtain a relationship between the two by dividing $r_B$ by $r_C$. Therefore $\frac{r_B}{r_C} = \frac{k_1}{k_2} C_A^{a_1 - a_2}$. In the case where $a_1 > a_2$ and B is the desired product, the cascade of CSTRs is favored with a fresh secondary feed of A in order to maximize the concentration of A. For a parallel reaction with two or more reactants such as A + D → B and A + D → C with rate constants $k_1$ and $k_2$ and rate equations $r_B = k_1 C_A^{a_1} C_D^{d_1}$ and $r_C = k_2 C_A^{a_2} C_D^{d_2}$, respectively, we can obtain a relationship between the two by dividing $r_B$ by $r_C$. Therefore $\frac{r_B}{r_C} = \frac{k_1}{k_2} C_A^{a_1 - a_2} C_D^{d_1 - d_2}$. In the case where $a_1 > a_2$ and $d_1 > d_2$ and B is the desired product, a cascade of CSTRs with an inlet stream of high [A] and [D] is favored. In the case where $a_1 > a_2$ and $d_1 < d_2$ and B is the desired product, a cascade of CSTRs with a high concentration of A in the feed and small secondary streams of D is favored. Series reactions such as A → B → C also have selectivity between B and C, but CSTRs are typically not chosen when the desired product is B, as the back-mixing in the CSTR favors C. Typically a batch reactor or PFR is chosen for these reactions. Applications CSTRs facilitate rapid dilution of reagents through mixing. Therefore, for non-zero-order reactions, the low concentration of reagent in the reactor means a CSTR will be less efficient at removing the reagent compared to a PFR with the same residence time. Therefore, CSTRs are typically larger than PFRs, which may be a challenge in applications where space is limited. However, one of the added benefits of dilution in CSTRs is the ability to neutralize shocks to the system. As opposed to PFRs, the performance of CSTRs is less susceptible to changes in the influent composition, which makes them suitable for a variety of industrial applications: Environmental engineering Activated sludge process for wastewater treatment Lagoon treatment systems for natural wastewater treatment Anaerobic digesters for the stabilization of wastewater biosolids Treatment wetlands for wastewater and storm water runoff Chemical engineering Loop reactor for production of pharmaceuticals Fermentation Biogas production
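The tanks-in-series cumulative distribution F(t) given in the previous section can be evaluated directly. The following sketch (hypothetical parameter values) shows how the distribution sharpens toward the ideal PFR step response as the number of tanks grows:

```python
import math

# Cumulative residence time distribution F(t) for a cascade of n ideal CSTRs
# with total mean residence time tau_bar (tanks-in-series model).
def F(t: float, n: int, tau_bar: float) -> float:
    x = n * t / tau_bar  # equals t / tau_i with tau_i = tau_bar / n
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n))

tau_bar = 10.0  # hypothetical mean residence time
for n in (1, 2, 10, 50):
    print(f"n = {n:3d}: F(tau_bar) = {F(tau_bar, n, tau_bar):.3f}")
# n = 1 recovers the single-CSTR value 1 - 1/e ~ 0.632; as n grows, F(t)
# tightens around t = tau_bar, approaching the ideal PFR step response.
```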
Physical sciences
Chemical engineering
Chemistry
2293159
https://en.wikipedia.org/wiki/Lime%20sulfur
Lime sulfur
In horticulture, lime sulfur (lime sulphur in British English; see American and British English spelling differences) is mainly a mixture of calcium polysulfides and thiosulfate (plus other reaction by-products such as sulfite and sulfate) formed by reacting calcium hydroxide with elemental sulfur, used in pest control. It can be prepared by boiling in water a suspension of poorly soluble calcium hydroxide (lime) and solid sulfur, together with a small amount of surfactant to facilitate the dispersion of these solids in water. After elimination of residual solids (flocculation, decantation, and filtration), it is normally used as an aqueous solution, which is reddish-yellow in colour and has a distinctive offensive odor of hydrogen sulfide (H2S, rotten eggs). Synthesis reaction The overall chemical reaction leading to the synthesis of lime sulfur is generally written as 3 Ca(OH)2 + 12 S → 2 CaS5 + CaS2O3 + 3 H2O, as reported in a document of the US Department of Agriculture (USDA). This vague reaction is poorly understood, because it involves the reduction of elemental sulfur and no reductant appears in the equation, while sulfur oxidation products are also mentioned as products. The initial pH of the solution imposed by poorly soluble hydrated lime is alkaline (pH = 12.5) while the final pH is in the range 11–12, typical for sulfides, which are also strong bases. When the hydrolysis of calcium sulfide is accounted for, individual reactions can also be written for each of the by-products. However, elemental sulfur can undergo a disproportionation reaction, also called dismutation, and the first reaction resembles a disproportionation reaction. The inverse comproportionation reaction is the reaction occurring in the Claus process used for desulfurization of crude oil and gas products in the refining industry: 2 H2S + SO2 → 3 S + 2 H2O. By rewriting the last reaction in the inverse direction one obtains a reaction consistent with what is observed in the lime sulfur global reaction: 3 S + 2 H2O → 2 H2S + SO2. In alkaline conditions, it gives: 3 S + 2 H2O + 6 OH− → 2 S^2− + SO3^2− + 5 H2O, and after simplification, or more exactly recycling, of water molecules in the above reaction: 3 S + 6 OH− → 2 S^2− + SO3^2− + 3 H2O. Adding back the calcium cations from hydrated lime for the sake of electroneutrality, one obtains the global reaction: 3 Ca(OH)2 + 3 S → 2 CaS + CaSO3 + 3 H2O. This last reaction is consistent with the global lime sulfur reaction mentioned in the USDA document. However, it does not account for all the details, among others the production of thiosulfate and sulfate amongst the end products of the reaction. Nevertheless, it is a good first-order approximation and it usefully highlights the overall lime sulfur reaction scheme, because the chemistry of reduced or partially oxidized forms of sulfur is particularly complex and all the intermediate steps or mechanisms involved are hard to unravel. Moreover, once exposed to atmospheric oxygen and microbial activity, the lime sulfur system will undergo fast oxidation and its different products will continue to evolve and finally enter the natural sulfur cycle. The presence of thiosulfate in the lime sulfur reaction can be accounted for by the reaction between sulfite and elemental sulfur (or with sulfide and polysulfides), and that of sulfate by the complete oxidation of sulfite or thiosulfate, following a more complex reaction scheme. More information on calcium thiosulfate production has been described in a patent registered by Hajjatie et al. (2006). Hajjatie et al.
(2006) wrote the lime sulfur reaction in various ways depending on the degree of polymerisation of the calcium polysulfides, but the following reaction is probably the simplest of their series: 3 Ca(OH)2 + 6 S → 2 CaS2 + CaS2O3 + 3 H2O, where the species S2^2− corresponds to the disulfide anion (with a covalent bond between the 2 sulfur atoms) also present in pyrite (FeS2), a Fe(II) disulfide mineral. They also managed to successfully control this reaction to achieve the conversion of elemental sulfur into a quasi-pure solution of calcium thiosulfate. Preparation The New York State Agricultural Experiment Station recipe for the concentrate suggests starting with 80 lb. of sulfur, 36 lb. of quicklime, and 50 gal. of water, equivalent to 19.172 kg of sulfur and 8.627 kg of calcium oxide per 100 litres of water. The ratio (by weight) for compounding sulfur and quicklime is thus about 2.2:1; this yields the highest proportion of calcium pentasulfide. If calcium hydroxide (builders' or hydrated lime) is used instead, its quantity is increased by one-third or more (to 115 g/L or more) with the 192 g/L of sulfur. If the quicklime is 85%, 90%, or 95% pure, use 101 g/L, 96 g/L, or 91 g/L; if impure hydrated lime is used, its quantity is likewise increased to compensate, though in practice lime with a purity lower than 90% is rarely used. The mixture is then boiled for one hour while being stirred, with small amounts of water added to make up for evaporation. Use In agriculture and horticulture, lime sulfur is sold as a spray to control fungi, bacteria, and insects. On deciduous trees it can be sprayed during the winter on the surface of the bark in high concentrations, but as lime sulfur can burn foliage, it must be heavily diluted before spraying onto herbaceous crops, especially during warm weather. Lime sulfur is approved for use on organic crops in the European Union and the United Kingdom. Bonsai enthusiasts use undiluted lime sulfur to bleach, sterilize, and preserve deadwood on bonsai trees while giving an aged look. Rather than spraying the entire tree, as with the pesticidal usage, lime sulfur is painted directly onto the exposed deadwood, and is often colored with a small amount of dark paint to make it look more natural. Without paint pigments, the lime sulfur solution bleaches wood to a bone-white color that takes time to weather and become natural-looking. In the very specific case of bonsai culture, if the lime sulfur is carefully and very patiently applied by hand with a small brush and does not enter into direct contact with the leaves or needles, this technique can be used on evergreen bonsai trees as well as other types of green trees. However, this does not apply to normal use on common trees with green leaves. Diluted solutions of lime sulfur (between 1:16 and 1:32) are also used as a dip for pets to help control ringworm (a fungus), mange and other dermatoses and parasites. Undiluted lime sulfur is corrosive to skin and eyes and can cause serious injury, such as blindness. Safety Lime sulfur reacts with strong acids (including stomach acid) to produce highly toxic hydrogen sulfide (rotten egg gas) and indeed usually has a distinct "rotten egg" odor to it. Lime sulfur is not flammable, but it can release highly irritating sulfur dioxide gas in a fire. Safety goggles and impervious gloves must be worn while handling lime sulfur.
Use In agriculture and horticulture, lime sulfur is sold as a spray to control fungi, bacteria, and insects. On deciduous trees it can be sprayed during the winter on the surface of the bark in high concentrations, but as lime sulfur can burn foliage, it must be heavily diluted before spraying onto herbaceous crops, especially during warm weather. Lime sulfur is approved for use on organic crops in the European Union and the United Kingdom. Bonsai enthusiasts use undiluted lime sulfur to bleach, sterilize, and preserve deadwood on bonsai trees while giving an aged look. Rather than spraying the entire tree, as with the pesticidal usage, lime sulfur is painted directly onto the exposed deadwood, and is often colored with a small amount of dark paint to make it look more natural. Without paint pigments, the lime sulfur solution bleaches wood to a bone-white color that takes time to weather and become natural-looking. In the specific case of bonsai culture, if the lime sulfur is carefully and patiently applied by hand with a small brush, so that it never comes into direct contact with the leaves or needles, the technique can be used on evergreen bonsai as well as other green trees; it does not apply, however, to normal spray use on ordinary trees in leaf. Diluted solutions of lime sulfur (between 1:16 and 1:32) are also used as a dip for pets to help control ringworm (a fungus), mange, and other dermatoses and parasites. Undiluted lime sulfur is corrosive to skin and eyes and can cause serious injury, including blindness. Safety Lime sulfur reacts with strong acids (including stomach acid) to produce highly toxic hydrogen sulfide (rotten egg gas), and indeed usually has a distinct rotten-egg odor. Lime sulfur is not flammable, but it can release highly irritating sulfur dioxide gas in a fire. Safety goggles and impervious gloves must be worn while handling lime sulfur. Lime sulfur solutions are strongly alkaline (typical commercial concentrates have a pH over 11.5 because of the presence of dissolved sulfides and hydroxide anions), are harmful to living organisms, and can cause blindness if splashed into the eyes. The corrosive nature of lime sulfur is due to the reduced species of sulfur it contains, in particular the sulfides responsible for stress corrosion cracking and the thiosulfates causing pitting corrosion. Localized corrosion by reduced sulfur species can be dramatic: even the mere presence of elemental sulfur in contact with metals is enough to corrode them considerably, including so-called stainless steels. History Lime sulfur is believed to be the earliest synthetic chemical used as a pesticide, being used in the 1840s in France to control the grape vine powdery mildew Uncinula necator, which had been introduced from the USA in 1845 and reduced wine production by 80%. In 1886 it was first used in California to control San Jose scale. Commencing around 1904, commercial suppliers began to manufacture lime sulfur; prior to that time, gardeners were expected to make their own. By the 1920s essentially all commercial orchards in western countries were protected by regular spraying with lime sulfur. However, by the 1940s, lime sulfur began to be replaced by synthetic organic fungicides, which posed less risk of damage to the crop's foliage.
Physical sciences
Sulfide salts
Chemistry
2293875
https://en.wikipedia.org/wiki/ACARS
ACARS
In aviation, ACARS (an acronym for Aircraft Communications Addressing and Reporting System) is a digital datalink system for the transmission of short messages between aircraft and ground stations via airband radio or satellite. The protocol was designed by ARINC and deployed in 1978, using the Telex format. More ACARS radio stations were subsequently added by SITA. History of ACARS Prior to the introduction of datalink in aviation, all communication between the aircraft and ground personnel was performed by the flight crew using voice communication over either VHF or HF radio. In many cases, the voice-relayed information involved dedicated radio operators and digital messages sent to an airline teletype system or its successor systems. Further, the hourly rates for flight and cabin crew salaries depended on whether the aircraft was airborne or not, and, if on the ground, whether it was at the gate or not. The flight crews reported these times by voice to geographically dispersed radio operators. Airlines wanted to eliminate self-reported times to preclude inaccuracies, whether accidental or deliberate. Doing so also reduced the need for human radio operators to receive the reports. In an effort to reduce crew workload and improve data integrity, the engineering department at ARINC introduced the ACARS system in July 1978, as an automated time clock system. Teledyne Controls produced the avionics, and the launch customer was Piedmont Airlines. The original expansion of the abbreviation was "Arinc Communications Addressing and Reporting System". Later, it was changed to "Aircraft Communications, Addressing and Reporting System". The original avionics standard was ARINC 597, which defined an ACARS Management Unit consisting of discrete inputs for the door, parking brake and weight-on-wheels sensors, allowing it to determine the flight phase automatically and to generate and send the corresponding reports as telex messages. It also contained an MSK modem, which was used to transmit the reports over existing VHF voice radios. Global standards for ACARS were prepared by the Airlines Electronic Engineering Committee (AEEC). The first day of ACARS operations saw about 4,000 transactions, but the system did not experience widespread use by the major airlines until the 1980s. Early ACARS systems were extended over the years to support aircraft with digital data bus interfaces, flight management systems, and thermal printers. System description and functions ACARS as a term refers to the complete air and ground system, consisting of equipment on board, equipment on the ground, and a service provider. On-board ACARS equipment consists of end systems with a router, which routes messages through the air-ground subnetwork. Ground equipment is made up of a network of radio transceivers managed by a central site computer called AFEPS (Arinc Front End Processor System), which handles and routes messages. Generally, ground ACARS units are either government agencies such as the Federal Aviation Administration, an airline operations headquarters, or, for small airlines or general aviation, a third-party subscription service. Usually government agencies are responsible for clearances, while airline operations handle gate assignments, maintenance, and passenger needs. The ground processing system Ground system provision is the responsibility of either a participating air navigation service provider (ANSP) or an aircraft operator. Aircraft operators often contract out the function to either a datalink service provider (DSP) or a separate service provider.
Messages from aircraft, especially automatically generated ones, can be pre-configured according to message type so that they are automatically delivered to the appropriate recipient, just as ground-originated messages can be configured to reach the correct aircraft. The ACARS equipment on the aircraft is linked to that on the ground by the DSP. Because the ACARS network is modeled after the point-to-point telex network, all messages come to a central processing location to be routed. ARINC and SITA are the two primary service providers, with smaller operations from others in some areas. Some areas have multiple service providers. ACARS message types ACARS messages may be of three broad types: air traffic control (ATC), aeronautical operational control (AOC), and airline administrative control (AAC). Air traffic control messages are used to request or provide clearances. AOC and AAC messages are used to communicate between the aircraft and its base, with messages either standardized according to ARINC Standard 633, or user-defined in accordance with ARINC Standard 618. The contents of such messages can be OOOI events, flight plans, weather information, equipment health, status of connecting flights, etc. OOOI events A major function of ACARS is to automatically detect and report the start of each major flight phase, called OOOI events in the industry (out of the gate, off the ground, on the ground, and into the gate). These OOOI events are detected using input from aircraft sensors mounted on doors, parking brakes, and struts (a schematic sketch of this detection logic appears at the end of this article). At the start of each flight phase, an ACARS message is transmitted to the ground describing the flight phase, the time at which it occurred, and other related information such as the amount of fuel on board or the flight origin and destination. These messages are used to track the status of aircraft and crews. Flight management system interface ACARS interfaces with flight management systems (FMS), acting as the communication system for flight plans and weather information to be sent from the ground to the FMS. This enables the airline to update the FMS while in flight, and allows the flight crew to evaluate new weather conditions or alternative flight plans. Equipment health and maintenance data ACARS is used to send information from the aircraft to ground stations about the condition of various aircraft systems and sensors in real time. Maintenance faults and abnormal events are also transmitted to ground stations along with detailed messages, which are used by the airline for monitoring equipment health and to better plan repair and maintenance activities. Ping messages Automated ping messages are used to test an aircraft's connection with the communication station. In the event that the aircraft ACARS unit has been silent for longer than a preset time interval, the ground station can ping the aircraft (directly or via satellite). A ping response indicates a healthy ACARS communication. Manually sent messages ACARS interfaces with interactive display units in the cockpit, which flight crews can use to send and receive technical messages and reports to or from ground stations, such as a request for weather information or clearances, or the status of connecting flights. The response from the ground station is received on the aircraft via ACARS as well. Each airline customizes ACARS to suit its needs. Communication details ACARS messages may be sent using a choice of communication methods, such as VHF or HF, either direct to ground or via satellite, using minimum-shift keying (MSK) modulation.
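MSK is a continuous-phase form of frequency-shift keying in which the two tone frequencies are separated by half the bit rate. The following Python/NumPy sketch generates an MSK baseband waveform; it is illustrative only, and the 2400 bit/s rate with 1200/2400 Hz tones is the commonly cited parameter set for classic VHF ACARS rather than anything specified in this article:

import numpy as np

def msk_waveform(bits, bit_rate=2400, sample_rate=48000, f_center=1800.0):
    # MSK: tone spacing = bit_rate / 2, i.e. a deviation of +/- bit_rate / 4
    # around the center frequency, the minimum spacing for orthogonal FSK.
    samples_per_bit = sample_rate // bit_rate
    deviation = bit_rate / 4.0
    freqs = np.repeat(
        [f_center + deviation if b else f_center - deviation for b in bits],
        samples_per_bit)
    phase = 2 * np.pi * np.cumsum(freqs) / sample_rate  # phase stays continuous
    return np.sin(phase)

wave = msk_waveform([1, 0, 1, 1, 0, 0, 1, 0])
print(wave.shape)  # (160,): 20 samples per bit at these rates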
ACARS can send messages over VHF if a VHF ground station network exists in the current area of the aircraft. VHF communication is limited to line-of-sight propagation; the typical range is up to 200 nautical miles at high altitudes. Where VHF is absent, an HF network or satellite communication may be used if available. Satellite coverage may be limited at high latitudes (trans-polar flights). Role of ACARS in air accidents and incidents In the wake of the crash of Air France Flight 447 in 2009, there was discussion about making ACARS an "online black box" to reduce the effects of the loss of a flight recorder. However, no changes were made to the ACARS system. In March 2014, ACARS messages and Doppler analysis of ACARS satellite communication data played a very significant role in efforts to trace Malaysia Airlines Flight 370 to an approximate location. While the primary ACARS system on board MH370 had been switched off, a second ACARS system called Classic Aero was active as long as the plane was powered up, and kept trying to establish a connection to an Inmarsat satellite every hour. The ACARS unit on the Airbus A320 of EgyptAir Flight 804 sent ACARS messages indicating the presence of smoke in the toilets and the avionics bay prior to the aircraft's crash into the Mediterranean Sea on May 19, 2016, which killed all 66 persons on board. On 24 February 2021, the ACARS unit of a South African Airways flight from Johannesburg's OR Tambo International Airport to Brussels sent an ACARS message about an "alpha floor event", triggered when the Airbus A340-600's envelope protection system activated to override the pilots and prevent the plane from stalling on take-off. On 19 February 2023, there were numerous ACARS reports of a large white balloon near Hawaii in the commercial aircraft lanes. Uses of ACARS outside aviation In 2002, ACARS was added to the NOAA Observing System Architecture. Commercial aircraft can thus act as weather data providers for weather agencies to use in their forecast models, sending meteorological observations such as winds and temperatures over the ACARS network. NOAA provides real-time weather maps.
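As referenced in the OOOI events section above, the flight phase is derived from the discrete door, parking brake and weight-on-wheels inputs. The Python sketch below is a hypothetical illustration of such transition logic; the exact rules used by real ACARS Management Units are defined by the avionics standards, not by this code:

from dataclasses import dataclass

@dataclass
class Sensors:
    doors_closed: bool
    parking_brake_set: bool
    weight_on_wheels: bool

def oooi_event(prev, curr):
    """Return the OOOI event implied by a sensor transition, if any."""
    if prev.parking_brake_set and not curr.parking_brake_set and curr.doors_closed:
        return "OUT"  # out of the gate: doors closed, brake released
    if prev.weight_on_wheels and not curr.weight_on_wheels:
        return "OFF"  # off the ground: weight lifts off the landing gear
    if not prev.weight_on_wheels and curr.weight_on_wheels:
        return "ON"   # on the ground: touchdown
    if not prev.parking_brake_set and curr.parking_brake_set and not curr.doors_closed:
        return "IN"   # into the gate: brake set, doors opened
    return None

print(oooi_event(Sensors(True, True, True), Sensors(True, False, True)))  # OUT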
Technology
Aircraft components
null
2295173
https://en.wikipedia.org/wiki/Rana%20%28genus%29
Rana (genus)
Rana (derived from Latin rana, meaning 'frog') is a genus of frogs commonly known as the Holarctic true frogs, pond frogs or brown frogs. Members of this genus are found through much of Eurasia and western North America. Many other genera were formerly included here. These true frogs are usually fairly large species characterized by their slim waists and wrinkled skin; many have thin ridges running along their backs, but they generally lack the "warts" of typical toads. They are excellent jumpers thanks to their long, slender legs. The typical webbing found on their hind feet allows for easy movement through water. Coloration is mostly greens and browns above, with darker and yellowish spots. Distribution and habitat Many frogs in this genus breed in early spring, although subtropical and tropical species may breed throughout the year. Males of most of the species are known to call, but a few species are thought to be voiceless. Females lay eggs in rafts or large, globular clusters, and can produce up to 20,000 at one time. Diet Rana species feed mainly on insects and invertebrates, but swallow anything they can fit into their mouths, including small vertebrates. Among their predators are egrets, crocodiles, and snakes. Systematics Some 50 to 100 extant species are now placed in this genus by various authors; many other species formerly placed in Rana are now placed elsewhere. Frost restricted Rana to the Old World true frogs and the Eurasian brown and pond frogs of the common frog (R. temporaria) group, although other authors disagreed with this arrangement. In 2016, a consortium of Rana researchers from throughout Europe, Asia, and North America revised the group, and reported that the arrangement of Frost (2006) resulted in nonmonophyletic groups. Yuan et al. (2016) included all the North American ranids within Rana, and used subgenera for the well-differentiated species groups within Rana. Both of these classifications are presented below. Genera recently split from Rana are Babina, Clinotarsus (including Nasirana), Glandirana, Hydrophylax, Hylarana, Lithobates, Odorrana (including Wurana), Pelophylax, Pulchrana, Sanguirana, and Sylvirana. Of these, Odorrana and Lithobates are so closely related to Rana proper that they could conceivably be included here once again. The others seem to be far more distant relatives, in particular Pelophylax. New species are still being described in some numbers. A number of extinct species are placed in the genus, including Rana basaltica, from Miocene deposits in China.
Species The following species are recognised in the genus Rana: Rana amurensis (Boulenger, 1886) – Siberian tree frog, Siberian wood frog, Amur brown frog Rana arvalis – Moor frog Rana asiatica – Central Asiatic frog, Asian frog Rana aurora – Northern red-legged frog Rana boylii – Foothill yellow-legged frog Rana cascadae – Cascades frog Rana chaochiaoensis – Chaochiao frog Rana chensinensis – Asiatic grass frog, Chinese brown frog Rana chevronta – Chevron-spotted brown frog Rana coreana – Korean brown frog Rana dalmatina – Agile frog Rana draytonii – California red-legged frog Rana dybowskii – Dybowski's frog Rana graeca – Greek stream frog, Greek frog Rana hanluica Rana huanrenensis – Huanren frog Rana iberica – Iberian frog Rana italica – Italian stream frog Rana japonica – Japanese brown frog Rana jiemuxiensis – Jiemuxi brown frog Rana johnsi – Johns' groove-toed frog Rana kobai – Ryukyu brown frog Rana kukunoris – Plateau brown frog Rana latastei – Italian agile frog, Lataste's frog Rana longicrus – Taipa frog Rana luanchuanensis Rana luteiventris – Columbia spotted frog Rana macrocnemis – Long-legged wood frog, Caucasus frog, Turkish frog, Brusa frog Rana maoershanensis Rana muscosa – Southern mountain yellow-legged frog Rana matsuoi – Goto Tago’s brown frog Rana neba Rana omeimontis – Omei brown frog, Omei wood frog Rana ornativentris – Montane brown frog, Nikkō frog Rana pirica – Hokkaidō frog Rana pretiosa – Oregon spotted frog Rana pseudodalmatina Rana pyrenaica – Pyrenean frog, Pyrenees frog Rana sakuraii – Stream brown frog, Napparagawa frog Rana sauteri – Sauter's brown frog, Kanshirei village frog, Taiwan groove-toed frog, Taiwan pseudotorrent frog Rana sangzhiensis – Sangzhi frog, Sangzhi groove-toed frog Rana shuchinae – Sichuan frog Rana sierrae – Sierra Nevada yellow-legged frog, Sierra Nevada Mountain yellow-legged frog Rana tagoi – Tago's brown frog Rana tavasensis – Tavas frog Rana temporaria – Common frog, European common frog, European common brown frog, European grass frog Rana tsushimensis – Tsushima brown frog, Tsushima leopard frog Rana uenoi Rana ulma – Okinawa frog Rana zhengi Rana zhenhaiensis – Zhenhai brown frog *Rana maoershanensis is likely not its own species, according to new genetic research. The following fossil species are also known: †Rana architemporaria (Pliocene of Japan) †Rana basaltica (Miocene of China) †?Rana hipparionum (Late Miocene/Early Pliocene of China, nomen dubium) †Rana pliocenica (Late Miocene of California) †Rana muelleri (Pleistocene of Germany) †Rana strausi (Late Pliocene of Germany) †?Rana yushensis (Early Pliocene of China, nomen nudum) The earliest known fossils of true Rana are of an indeterminate species from the Early Miocene of Germany. The paleosubspecies Rana temporaria fossilis was described in 1951 for articulated fossils from the late Eocene/early Oligocene of Bulgaria, but this taxonomic proposal was found to be invalid. Rana likely originated in Asia and migrated west to colonize Europe by the early Miocene, as was done earlier by Pelophylax. 
Alternative classifications AmphibiaWeb includes the following species, arranged in subgenera: Subgenus Amerana (Pacific brown frogs) Rana aurora – northern red-legged frog Rana boylii – foothill yellow-legged frog Rana cascadae – Cascades frog Rana draytonii – California red-legged frog Rana luteiventris – Columbia spotted frog Rana muscosa – southern mountain yellow-legged frog Rana pretiosa – Oregon spotted frog Rana sierrae – Sierra Nevada yellow-legged frog, Sierra Nevada Mountain yellow-legged frog Subgenus Aquarana (North American water frogs) Rana catesbeiana Shaw, 1802 – American bullfrog Rana clamitans Latreille, 1801 – green frog, bronze frog, northern green frog Rana grylio Stejneger, 1901 – pig frog Rana heckscheri Wright, 1924 – river frog Rana okaloosae Moler, 1985 – Florida bog frog Rana septentrionalis Baird, 1854 – mink frog Rana virgatipes Cope, 1891 – carpenter frog Subgenus Lithobates (neotropical true frogs) Rana bwana Hillis and de Sá, 1988 – Rio Chipillico frog Rana juliani Hillis and de Sá, 1988 – Maya Mountains frog Rana maculata Brocchi, 1877 Rana palmipes Spix, 1824 – Amazon River frog Rana vaillanti Brocchi, 1877 – Vaillant's frog Rana vibicaria (Cope, 1894) Rana warszewitschii Schmidt, 1857 Subgenus Liuhurana Rana shuchinae Liu, 1950 Subgenus Pantherana (leopard, pickerel and gopher frogs) Rana areolata Baird and Girard, 1852 – crawfish frog Rana berlandieri Baird, 1859 – Rio Grande leopard frog Rana blairi Mecham et al., 1973 – plains leopard frog Rana brownorum Sanders, 1973 – Gulf Coast leopard frog Rana capito LeConte, 1855 – Carolina gopher frog Rana chichicuahutla Cuellar, Méndez-De La Cruz, and Villagrán-Santa Cruz, 1996 Rana chiricahuensis Platz and Mecham, 1979 – Chiricahua leopard frog Rana dunni Zweifel, 1957 – Lake Patzcuaro frog Rana fisheri Stejneger, 1893 – Mogollon Rim leopard frog Rana forreri (Boulenger, 1883) – Forrer's leopard frog Rana kauffeldi Feinberg et al., 2014 – Atlantic Coast leopard frog Rana lemosespinali Smith and Chiszar, 2003 Rana lenca (Luque-Montes et al., 2018) Rana macroglossa Brocchi, 1877 Rana magnaocularis Frost and Bagnara, 1974 Rana megapoda Taylor, 1942 Rana miadis Barbour and Loveridge, 1929 Rana montezumae Baird, 1854 Rana neovolcanica Hillis and Frost, 1985 Rana omiltemana Günther, 1900 Rana onca Cope, 1875 – relict leopard frog Rana palustris LeConte, 1825 – pickerel frog Rana pipiens Schreber, 1782 – northern leopard frog Rana sevosa Goin and Netting, 1940 – dusky gopher frog Rana spectabilis Hillis and Frost, 1985 – brilliant leopard frog Rana sphenocephala Cope, 1886 – southern leopard frog Rana taylori Smith, 1959 – Peralta frog Rana tlaloci Hillis and Frost, 1985 – Tlaloc's leopard frog Rana yavapaiensis Platz and Frost, 1984 – lowland leopard frog Subgenus Pseudorana (Weining brown frog) Rana weiningensis Subgenus Rana (Eurasian brown frogs) Rana amurensis – Siberian tree frog, Siberian wood frog, Amur brown frog Rana arvalis – moor frog Rana asiatica – Central Asiatic frog, Asian frog Rana camerani – long-legged wood frog Rana chaochiaoensis – Chaochiao frog Rana chensinensis – Asiatic grass frog, Chinese brown frog Rana chevronta – chevron-spotted brown frog Rana coreana – Korean brown frog Rana culaiensis – Culai brown frog Rana dalmatina – agile frog Rana dybowskii – Dybowski's frog Rana graeca – Greek stream frog, Greek frog Rana hanluica Rana holtzi – long-legged wood frog Rana huanrenensis – Huanren frog Rana iberica – Iberian frog Rana italica – Italian stream frog Rana japonica – Japanese brown 
frog Rana jiemuxiensis – Jiemuxi brown frog Rana johnsi – Johns' groove-toed frog Rana kobai – Ryukyu brown frog Rana kukunoris – plateau brown frog Rana latastei – Italian agile frog, Lataste's frog Rana longicrus – Taipa frog Rana macrocnemis – long-legged wood frog, Caucasus frog, Turkish frog, Brusa frog Rana maoershanensis Rana neba Rana omeimontis – Omei brown frog, Omei wood frog Rana ornativentris – montane brown frog, Nikkō frog Rana pirica – Hokkaidō frog Rana pseudodalmatina Rana pyrenaica – Pyrenean frog, Pyrenees frog Rana sakuraii – stream brown frog, Napparagawa frog Rana sangzhiensis Rana sauteri – Sauter's brown frog, Kanshirei village frog, Taiwan groove-toed frog, Taiwan pseudotorrent frog Rana tagoi – Tago's brown frog Rana tavasensis – Tavas brown frog Rana temporaria – common frog, European common frog, European common brown frog, European grass frog Rana tsushimensis – Tsushima brown frog, Tsushima leopard frog Rana uenoi Rana ulma Rana wuyiensis – Wuyi brown frog Rana zhengi Rana zhenhaiensis – Zhenhai brown frog Rana zhijinensis Luo, Xiao & Zhou, 2022 – Zhijin brown frog Subgenus Zweifelia (Mexican torrent frogs) Rana johni Blair, 1965 Rana psilonota Webb, 2001 Rana pueblae Zweifel, 1955 Rana pustulosa (Boulenger, 1883) Rana sierramadrensis Taylor, 1939 Rana tarahumarae (Boulenger, 1917) – Tarahumara frog Rana zweifeli Hillis, Frost, and Webb, 1984 – Zweifel's frog Incertae sedis (no assigned subgenus) Rana dabieshanensis Wang et al., 2017 Rana luanchuanensis Zhao and Yuan, 2017 Rana sylvatica LeConte, 1825 – wood frog
Biology and health sciences
Frogs and toads
Animals
1634911
https://en.wikipedia.org/wiki/Taro
Taro
Taro (Colocasia esculenta) is a root vegetable. It is the most widely cultivated species of several plants in the family Araceae that are used as vegetables for their corms, leaves, stems and petioles. Taro corms are a food staple in African, Oceanic, East Asian, Southeast Asian and South Asian cultures (similar to yams). Taro is believed to be one of the earliest cultivated plants. Common names The English term taro was borrowed from the Māori language when Captain Cook first observed Colocasia plantations in New Zealand in 1769. The form taro or talo is widespread among Polynesian languages: taro in Tahitian, talo in Samoan and Tongan, kalo in Hawaiian, and tao in Marquesan. All these forms originate from Proto-Polynesian *talo, which itself descended from Proto-Oceanic *talos (cf. Fijian dalo) and Proto-Austronesian *tales (cf. Sundanese taleus and Javanese tales). However, irregularity in sound correspondences among the cognate forms in Austronesian suggests that the term may have been borrowed and spread from an Austroasiatic language, perhaps in Borneo (cf. proto-Mon-Khmer *t2rawʔ, Khmu sroʔ, Mlabri kwaaj, and cognates in Khasi). The Ancient Greek word κολοκάσιον (kolokasion, lit. 'lotus root') is the origin of the Modern Greek word kolokasi (κολοκάσι), of the word kolokas in both Greek and Turkish, and of qulqas (قلقاس) in Arabic. It was borrowed by Latin as colocasia, thus becoming the genus name Colocasia. Taro is among the most widely grown species in the group of tropical perennial plants that are colloquially referred to as "elephant ears" when grown as ornamental plants. Other plants with the same nickname include certain species of related aroids possessing large, heart-shaped leaves, usually within such genera as Alocasia, Caladium, Monstera, Philodendron, Syngonium, Thaumatophyllum, and Xanthosoma. Other languages In Cyprus, Colocasia has been in use since the Roman Empire. Today it is known as kolokasi (κολοκάσι). It is usually fried or cooked with corn, pork, or chicken in a tomato sauce casserole. "Baby" kolokasi are called "poulles": after being fried dry, they are finished with red wine and coriander seed and served with freshly squeezed lemon. Lately, some restaurants have begun serving thin slices of kolokasi deep fried, calling them "kolokasi chips". In the Caribbean and West Indies, taro is known as dasheen in Trinidad and Tobago, Saint Lucia, Saint Vincent and the Grenadines, and Jamaica. The leaves are known as aruiya ke bhaji by Indo-Trinidadians and Tobagonians. In Portuguese, it is known as inhame, among other regional names such as matabala; in Spanish, names include ñame. In the Philippines, the whole plant is usually referred to as gabi, while the corm is called taro. Taro is a very popular flavor for milk tea in the country, and just as popular an ingredient in several Filipino savory dishes such as sinigang. Other names include idumbe in the KwaZulu-Natal region, and boina in the Wolaita language of Ethiopia. In Tanzania, it is called magimbi in the Swahili language. It is also called eddo in Liberia. Description Colocasia esculenta is a perennial, tropical plant primarily grown as a root vegetable for its edible, starchy corm. The plant has rhizomes of different shapes and sizes. The leaves, which sprout from the rhizome on tall petioles, are dark green above and light green beneath; they are triangular-ovate, sub-rounded and mucronate at the apex, with the tips of the basal lobes rounded or sub-rounded. The spadix is about three fifths as long as the spathe.
The female portion, at the base of the spadix, consists of fertile ovaries intermixed with sterile white ones. Neuter flowers grow above the females and are rhomboid or irregularly lobed, with six or eight cells. The appendage is shorter than the male portion. Similar species Taro is related to Xanthosoma and Caladium, plants commonly grown ornamentally, and like them, it is sometimes loosely called elephant ear. Similar taro varieties include giant taro (Alocasia macrorrhizos), swamp taro (Cyrtosperma merkusii), and arrowleaf elephant's ear (Xanthosoma sagittifolium). Taxonomy The 18th-century Swedish biologist Carl Linnaeus originally described two species, Colocasia esculenta and Colocasia antiquorum, but many later botanists consider them both to be members of a single, very variable species, the correct name for which is Colocasia esculenta. Etymology The specific epithet, esculenta, means "edible" in Latin. Distribution and habitat Colocasia esculenta is thought to be native to southern India and Southeast Asia, but is widely naturalised. Colocasia is thought to have originated in the Indomalayan realm, perhaps in East India, Nepal, and Bangladesh. It spread by cultivation eastward into Southeast Asia, East Asia and the Pacific Islands; westward to Egypt and the eastern Mediterranean Basin; and then southward and westward from there into East Africa and West Africa, from where it spread to the Caribbean and the Americas. Taro was probably first native to the lowland wetlands of Malaysia, where it is called taloes. In Australia, C. esculenta var. aquatilis is thought to be native to the Kimberley region of Western Australia; the common variety esculenta is now naturalised and considered an invasive weed in Western Australia, the Northern Territory, Queensland and New South Wales. In Europe, C. esculenta is cultivated in Cyprus, where it is called kolokasi (Κολοκάσι in Greek) and is certified as a PDO product. It is also found on the Greek island of Ikaria, where it is cited as having been a vital source of food for the island during World War II. In Turkey, C. esculenta is locally known as gölevez and mainly grown on the Mediterranean coast, such as in the Alanya district of Antalya Province and the Anamur district of Mersin Province. In Macaronesia this plant has become naturalized, probably as a result of the Portuguese discoveries, and is frequently used in the Macaronesian diet as an important carbohydrate source. In the southeastern United States, this plant is recognized as an invasive species; many populations can commonly be found growing near drainage ditches and bayous in Houston, Texas. Cultivation History Taro is one of the most ancient cultivated crops. It is found widely in tropical and subtropical regions of South Asia, East Asia, Southeast Asia, Papua New Guinea, northern Australia, and the Maldives. Taro is highly polymorphic, making taxonomy and the distinction between wild and cultivated types difficult. It is believed that taro was domesticated independently multiple times, with authors giving possible locations as New Guinea, Mainland Southeast Asia, and northeastern India, based largely on the assumed native range of the wild plants. However, more recent studies have pointed out that wild taro may have a much larger native distribution than previously believed, and wild breeding types may also likely be indigenous to other parts of Island Southeast Asia. Archaeological traces of taro exploitation have been recovered from numerous sites, though whether these were cultivated or wild types cannot be ascertained.
They include the Niah Caves of Borneo, dated to around 10,000 years ago; Ille Cave of Palawan, dated to at least 11,000 years ago; Kuk Swamp of New Guinea, dated to between 8250 BC and 7960 BC; and Kilu Cave in the Solomon Islands, dated to around 28,000 to 20,000 years ago. In the case of Kuk Swamp, there is evidence of formalized agriculture emerging by about 10,000 years ago, with evidence of cultivated plots, though which plant was cultivated remains unknown. Taro was carried into the Pacific Islands by Austronesian peoples from around 1300 BC, where it became a staple crop of Polynesians, along with other types of "taros", like Alocasia macrorrhizos, Amorphophallus paeoniifolius, and Cyrtosperma merkusii. It was the most important and the most preferred among the four, because it was less likely to contain the irritating raphides present in the other plants. Taro is also identified as one of the staples of Micronesia, from archaeological evidence dating back to the pre-colonial Latte Period (c. 900–1521 AD), indicating that it was also carried by Micronesians when they colonized the islands. Taro pollen and starch residue have also been identified in Lapita sites, dated to between 1100 BC and 550 BC. Taro was later spread to Madagascar as early as the 1st century AD. Modern production In 2022, world production of taro was 18 million tonnes, led by Nigeria with 46% of the total. Taro has the fifth largest production among root and tuber crops worldwide. The average yield of taro is around 7 tonnes per hectare. Taro can be grown in paddy fields where water is abundant or in upland situations where water is supplied by rainfall or supplemental irrigation. Taro is one of the few crops (along with rice and lotus) that can be grown under flooded conditions. Flooded cultivation has some advantages over dry-land cultivation: higher yields (about double), out-of-season production (which may result in higher prices), and weed control (which flooding facilitates). Manmade floodplains particular to taro cultivation, called repo, are commonly found throughout tropical Polynesian societies. Like most root crops, taro and eddoes do well in deep, moist or even swampy soils with abundant annual rainfall. Eddoes are more resistant to drought and cold. The crop attains maturity within six to twelve months after planting in dry-land cultivation and after twelve to fifteen months in wetland cultivation. The crop is harvested when the plant height decreases and the leaves turn yellow. Quality control Taro generally commands a higher market price than other root crops, so quality control measures throughout the production process are essential. The sizes found in most markets are 1–2 kg and 2–3 kg; the best size for packaging and for consumers is 1–2 kg. To guarantee that the product meets the expected high standards upon reaching the consumer, there are some common grading standards for fresh corms: no excess soil, softness or decay; no bruises or deep cuts; a spherical to round shape; no major abnormal deformities; no roots; approximately 5 cm (under 2 in) of petiole left attached to the corm; and no double-tops. Due to the high moisture content of the corms, and the plant's natural affinity for humidity, mold and disease can easily develop, causing root rot or decay. To prolong their shelf lives, the corms are usually stored at cooler temperatures, ranging from 10 to 15 degrees Celsius, and maintained at a relative humidity of 80% to 90%.
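The grading and storage guidelines above can be encoded as a simple check. The following Python sketch is illustrative only; the field names and pass/fail structure are assumptions, not an industry standard:

def grade_corm(weight_kg, petiole_cm, defects):
    """Return True if a fresh corm meets the common grading standards."""
    disqualifying = {"excess soil", "softness", "decay", "bruises",
                     "deep cuts", "abnormal deformity", "roots", "double-top"}
    if defects & disqualifying:
        return False
    if not (1.0 <= weight_kg <= 3.0):   # market sizes: 1-2 kg and 2-3 kg
        return False
    return petiole_cm <= 5.0            # about 5 cm of petiole retained

def storage_ok(temp_c, rel_humidity_pct):
    """Check the recommended cool, humid storage conditions."""
    return 10 <= temp_c <= 15 and 80 <= rel_humidity_pct <= 90

print(grade_corm(1.5, 5.0, set()))   # True
print(storage_ok(12, 85))            # True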
For packaging, the corms are commonly placed in polypropylene bags or ventilated wooden crates to minimize condensation and "sweating". During export, a weight allowance of approximately 5% above the net weight is included to account for potential shrinkage during transit. For commercial shipping and export purposes, refrigeration is used; for instance, corms with 5 to 10 centimetres of petiole remaining are exported from Fiji to New Zealand in wooden boxes. They are then transported via refrigerated container, chilled to around 5 °C. The corms can be maintained in good condition for up to six weeks; most good-quality corms may even be replanted and grown by the consumer, thanks to the species' prolific nature and hardiness. Breeding In the early 1970s, one of the earliest taro breeding programs was initiated in the Solomon Islands to create cultivars resistant to taro leaf blight. After taro leaf blight was introduced to Samoa in 1993, another breeding program was initiated, using Asian varieties that were resistant to taro leaf blight (TLB). The breeding program helped restore the taro export industry in Samoa. Corm yield and corm quality appear to be negatively correlated. In order to produce the uniform, fresh, healthy corms that the market desires, early-maturing cultivars with a growth period of 5 to 7 months can be used. Selection methods and programs Cultivars grown in the Pacific regions produce good quality corms, as a result of selection for corm quality and yield. However, the genetic base of these cultivars is very narrow. Asian cultivars have agriculturally undesirable traits (such as suckers and stolons), but appear to be more genetically diverse. There needs to be an international exchange of taro germplasm with reliable quarantine procedures. There are thought to be 15,000 varieties of C. esculenta. Currently there are 6,000 accessions held by various institutes across the world. The INEA (International Network for Edible Aroids) already has a core sample of 170 cultivars that have been distributed. These cultivars are maintained in vitro in a germplasm centre in Fiji, which is considered safer and cheaper than field conservation. Polyploidy breeding Taro exists as a diploid (2n = 28) and a triploid (3n = 42). Naturally occurring triploids in India were found to have significantly better yields, and there have been attempts to make triploids artificially by crossing diploids with artificial tetraploids. Nutrition Cooked taro is 64% water and 35% carbohydrates, and contains negligible protein and fat. In a reference amount of 100 grams, cooked taro supplies 142 calories of food energy and is a rich source (20% or more of the Daily Value, DV) of vitamin B6 (25% DV), vitamin E (20% DV), and manganese (21% DV), while phosphorus and potassium are present in moderate amounts (10–11% DV). Raw taro leaves are 86% water, 7% carbohydrates, 5% protein, and 1% fat. The leaves are nutrient-rich, containing substantial amounts of vitamins and minerals, especially vitamin K at 103% of the DV. Uses Culinary Taro is a food staple in African, Oceanic and South Asian cultures. People usually consume its edible corm and leaves. The corms, which have a light purple color due to phenolic pigments, are roasted, baked or boiled. The natural sugars give a sweet, nutty flavor. The starch is easily digestible, and since the grains are fine and small it is often used for baby food.
In its raw form, the plant is toxic due to the presence of calcium oxalate and of needle-shaped raphides in the plant cells. However, the toxin can be minimized and the tuber rendered palatable by cooking, or by steeping in cold water overnight. Corms of the small, round variety are peeled and boiled, then sold either frozen, bagged in their own liquids, or canned. Oceania Cook Islands Taro is the pre-eminent crop of the Cook Islands and surpasses all other crops in terms of the land area devoted to production. The prominence of the crop there has made it a staple of the population's diet. Taro is grown across the country, but the method of cultivation depends on the nature of the island it is grown on. Taro also plays an important role in the country's export trade. The root is eaten boiled, as is standard across Polynesia. Taro leaves are also eaten, cooked with coconut milk, onion, and meat or fish. Fiji Taro (dalo in Fijian) has been a staple of the Fijian diet for centuries, and its cultural importance is celebrated on Taro Day. Its growth as an export crop began in 1993, when taro leaf blight devastated the taro industry in neighboring Samoa; Fiji filled the void and was soon supplying taro internationally. Almost 80% of Fiji's exported taro comes from the island of Taveuni, where the taro beetle species Papuana uninodis is absent. The Fijian taro industry on the main islands of Viti Levu and Vanua Levu faces constant damage from the beetles. The Fiji Ministry of Agriculture and the Land Resources Division of the Secretariat of the Pacific Community (SPC) are researching pest control and instigating quarantine restrictions to prevent the spread of the pest. Taveuni now exports pest-damage-free crops. Hawaii Kalo is taro's Hawaiian name. The local crop plays an important role in Hawaiian culture and Indigenous religion. Taro is a traditional staple of the native cuisine of Hawaii. Some of the uses for taro include poi, table taro (steamed and served like a potato), taro chips, and lūʻau leaf (to make laulau). In Hawaii, kalo is farmed under either dryland or wetland conditions. Taro farming there is challenging because of the difficulties of accessing fresh water. Kalo is usually grown in "pond fields" known as loʻi. Typical dryland or "upland" varieties (varieties grown in watered but not flooded fields) are lehua maoli and bun long, the latter widely known as "Chinese taro". Bun long is used for making taro chips. Dasheen (also called "eddo") is another dryland variety cultivated for its corms or as an ornamental plant. A contemporary Hawaiian diet consists of many tuberous plants, particularly sweet potato and kalo. The Hawaii Agricultural Statistics Service determined the 10-year median production of kalo to be about 6.1 million pounds (2,800 t). However, 2003 taro production was only 5 million pounds (2,300 t), the lowest since record-keeping began in 1946. The previous low (1997) was 5.5 million pounds (2,500 t). Despite generally growing demand, production was even lower in 2005, at only 4 million pounds, with kalo for processing into poi accounting for 97.5%. Urbanization is one cause driving down harvests from the 1948 high of 14.1 million pounds (6,400 t), but more recently the decline has resulted from pests and diseases. A non-native apple snail (Pomacea canaliculata) is a major culprit, along with a plant rot disease traced to a species of the fungus-like genus Phytophthora (an oomycete rather than a true fungus) that now damages kalo crops throughout Hawaii.
Although pesticides could control both problems to some extent, pesticide use in the loʻi is banned because of the opportunity for chemicals to migrate quickly into streams and then, eventually, the sea. Social roles Important aspects of Hawaiian culture revolve around kalo. For example, the newer name for a traditional Hawaiian feast, the lūʻau, comes from kalo: young kalo tops baked with coconut milk and chicken meat or octopus arms are frequently served at luaus. By ancient Hawaiian custom, fighting is not allowed when a bowl of poi is "open". It is also disrespectful to fight in front of an elder, and one should not raise one's voice, speak angrily, or make rude comments or gestures. Loʻi A loʻi is a patch of wetland dedicated to growing kalo. Hawaiians have traditionally used irrigation to produce kalo. Wetland fields often produce more kalo per acre than dry fields, but wetland-grown kalo needs a constant flow of water. About 300 varieties of kalo were originally brought to Hawaiʻi (about 100 remain). The kalo plant takes seven months to grow until harvest, so loʻi fields are used in rotation and the soil can be replenished while the loʻi in use has sufficient water. Stems are typically replanted in the loʻi for future harvests. History One mythological version of Hawaiian ancestry cites the taro plant as an ancestor to Hawaiians. Legend joins two siblings of high and divine rank: Papahānaumoku ("Papa from whom lands are born", or Earth mother) and Wākea (Sky father). Together they create the islands of Hawaii and a beautiful woman, Hoʻohokukalani (The Heavenly one who made the stars). The story of kalo begins when Wākea and Papa conceived their daughter, Hoʻohokukalani. Daughter and father then conceived a child together, named Hāloanakalaukapalili (Long stalk trembling), but it was stillborn. After the father and daughter buried the child near their house, a kalo plant grew over the grave. The second child born of Wākea and Hoʻohokukalani was named Hāloa after his older brother. The kalo of the earth was the sustenance for the young brother and became the principal food for successive generations. The Hawaiian word for family, ʻohana, is derived from ʻohā, the shoot that grows from the kalo corm: as young shoots grow from the corm of the kalo plant, so people, too, grow from their family. Papua New Guinea The taro corm is a traditional staple crop for large parts of Papua New Guinea, with a domestic trade extending its consumption to areas where it is not traditionally grown. Taro from some regions has developed a particularly good reputation, with (for instance) Lae taro being highly prized. Among the Urapmin people of Papua New Guinea, taro (known in Urap as ima) is the main source of sustenance along with the sweet potato (Urap: wan). In fact, the word for "food" in Urap is a compound of these two words. Polynesia Considered the staple starch of traditional Polynesian cuisine, taro is both a common and a prestigious food item that was first introduced to the Polynesian islands by prehistoric seafarers of Southeast Asian derivation. The tuber itself is prepared in various ways, including baking, steaming in earth ovens (umu or imu), boiling, and frying. The famous Hawaiian staple poi is made by mashing steamed taro roots with water. Taro also features in traditional desserts such as Samoan fa'ausi, which consists of grated, cooked taro mixed with coconut milk and brown sugar.
The leaves of the taro plant also feature prominently in Polynesian cooking, especially as edible wrappings for dishes such as Hawaiian laulau, Fijian and Samoan palusami (wrapped around onions and coconut milk), and Tongan lupulu (wrapped corned beef). Ceremonial presentations on occasions of chiefly rites or communal events (weddings, funerals, etc.) traditionally included the ritual presentation of raw and cooked taro roots and plants. The Hawaiian laulau traditionally contains pork, fish, and lu'au (cooked taro leaf). The wrapping is inedible ti leaves (Hawaiian: lau ki); cooked taro leaf has the consistency of cooked spinach and is therefore unsuitable for use as a wrapping. Samoa In Samoa, the baby talo leaves and coconut milk are wrapped into parcels and cooked, along with other food, in an earth oven (umu). The parcels are called palusami or lu'au. The resulting taste is smoky, sweet and savory, with a unique creamy texture. The root is also baked (talo tao) in the umu or boiled with coconut cream (fa'alifu talo). It has a slightly bland and starchy flavor, and is sometimes called the Polynesian potato. Tonga Lū is the Tongan word for the edible leaves of the taro plant (called talo in Tonga), as well as the traditional dish made using them. This meal is still prepared for special occasions and especially on Sundays. The dish consists of chopped meat, onions, and coconut milk wrapped in a number of taro leaves (lū talo). This is then traditionally wrapped in a banana leaf (nowadays, aluminum foil is often used) and put in the ʻumu to cook. It has a number of named varieties, dependent on the filling: lū pulu, lū with beef, commonly using imported corned beef (kapapulu); lū sipi, lū with lamb; lū moa, lū with chicken; and lū hoosi, lū with horse meat. Oceanian atolls The islands situated along the border of the three main parts of Oceania (Polynesia, Micronesia and Melanesia) tend to be atolls rather than volcanic islands (most prominently Tuvalu, Tokelau, and Kiribati). As a result, taro was not a part of the traditional diet there, owing to the infertile soil, and has become a staple today only through importation from other islands (taro and cassava cultivars are usually imported from Fiji or Samoa). The traditional staple, however, is the swamp taro, known as pulaka or babai, a distant relative of taro with a very long growing phase (3–5 years), larger and denser corms, and coarser leaves. It is grown in a patch of land dug down to the freshwater lens beneath the soil. The lengthy growing time of this crop usually confines it to being a food for festivities, much like pork, although it can be preserved by drying in the sun and storing it somewhere cool and dry, to be enjoyed out of the harvesting season. East Asia China Taro (yùtóu, 芋頭) is commonly used in Chinese cuisine in a variety of styles and provinces: as a main course, steamed with or without sugar; as a substitute for other cereals; and steamed, boiled or stir-fried as a main dish or as a flavor-enhancing ingredient. In northern China, it is often boiled or steamed, then peeled and eaten with or without sugar, much like a potato. It is commonly braised with pork or beef. It is used in Cantonese dim sum to make a small plated dish called taro dumpling, as well as a pan-fried dish called taro cake. It can also be shredded into long strips, which are woven together to form a seafood bird's nest. In Fujian cuisine, it is steamed or boiled and mixed with starch to form a dough for dumplings.
Taro cake is a delicacy traditionally eaten during Chinese New Year celebrations. As a dessert, taro can be mashed into a purée or used as a flavoring in tong sui, ice cream, and other desserts such as sweet taro pie. McDonald's sells taro-flavored pies in China. Taro is mashed in the dessert known as taro purée. Taro paste, a traditional Cantonese dish which originated in the Chaoshan region in the eastern part of China's Guangdong Province, is a dessert made primarily from taro. The taro is steamed and then mashed into a thick paste, which forms the base of the dessert. Lard or fried onion oil is then added for fragrance. The dessert is traditionally sweetened with water chestnut syrup and served with ginkgo nuts. Modern versions of the dessert include the addition of coconut cream and sweet corn. The dessert is commonly served at traditional Teochew wedding banquet dinners as the last course, marking the end of the banquet. Japan A similar plant in Japan is called satoimo (里芋). The "child" and "grandchild" corms (cormels, cormlets) which bud from the parent satoimo are called koimo and magoimo, respectively. Satoimo has been propagated in Southeast Asia since the late Jōmon period and was a regional staple before rice became predominant. The tuber, satoimo, is often prepared by simmering in fish stock (dashi) and soy sauce. The stalk, zuiki, can also be prepared in a number of ways, depending on the variety. Korea In Korea, taro is called toran (토란, "earth egg"); the corm is stewed and the leaf stem is stir-fried. Taro roots can be used for medicinal purposes, particularly for treating insect bites. It is made into the Korean traditional soup toranguk (토란국). Taro stems are often used as an ingredient in yukgaejang (육개장). Taiwan In Taiwan, taro, called yùtóu (芋頭) in Mandarin and ō͘-á (芋仔) in Taiwanese, is well adapted to the Taiwanese climate and can grow almost anywhere in the country with minimal maintenance. Before the Taiwan Miracle made rice affordable to everyone, taro was one of the main staples in Taiwan. Nowadays taro is used more often in desserts. Supermarket varieties range from about the size and shape of a Brussels sprout to longer, larger varieties the size of a football. Taro chips are often used as a potato-chip-like snack; compared to potato chips, taro chips are harder and have a nuttier flavor. Another popular traditional Taiwanese snack is taro ball, served on ice or deep-fried. It is common to see taro as a flavor in desserts and drinks, such as bubble tea. The Taiwan Technical Mission launched a taro ice cream making workshop for Micronesians in Nekken, Aimeliik. Southeast Asia Indonesia In Indonesia, taro is widely used for snacks, cakes, crackers, and even macarons, and can easily be found everywhere. Some varieties are specially cultivated in accordance with social or geographical traditions. Taro is usually known as keladi, although some varieties are known as talas, among other names. The vegetable soups sayur asem and sayur lodeh may use taro, and in Java its leaves and stems (lompong) are also eaten. Chinese descendants in Indonesia often eat taro with stewed rice and dried shrimp: the taro is diced and cooked along with the rice, the shrimp, and sesame oil. In New Guinea, there are some traditional dishes made of taro as well as its leaves, such as keripik keladi (sweet spicy taro chips), pounded taro with vegetables, and anchovies mixed with slices of taro leaf. The Mentawai people have a traditional food called lotlot, taro leaves cooked with tinimbok (smoked fish).
Philippines In the Philippines, taro is usually called gabi, abi, or avi and is widely available throughout the archipelago. Its adaptability to marshland and swamps makes it one of the most common vegetables in the Philippines. The leaves, stems, and corms are all consumed and form part of the local cuisine. A popular recipe for taro is laing, from the Bicol Region; the dish's main ingredients are taro leaves (at times including stems) cooked in coconut milk and salted with fermented shrimp or fish bagoong. It is sometimes heavily spiced with red hot chilies called siling labuyo. Another dish in which taro is commonly used is the Philippine national stew, sinigang, although radish can be used if taro is not available. This stew is made with pork or beef, shrimp, or fish, a souring agent (tamarind fruit, kamias, etc.), and peeled and diced taro corms as a thickener. The corm is also prepared as a basic ingredient for ginataan, a coconut milk and taro dessert. Thailand In Thai cuisine, taro (pheuak) is used in a variety of ways depending on the region. Boiled taro is readily available in the market, packaged in small cellophane bags, already peeled and diced, and eaten as a snack. Pieces of boiled taro with coconut milk are a traditional Thai dessert. Raw taro is also often sliced and deep-fried and sold in bags as chips (เผือกทอด). As in other Asian countries, taro is a popular flavor for ice cream in Thailand. Vietnam In Vietnam, there is a large variety of taro plants. One is called khoai môn, which is used as a filling in spring rolls, cakes, puddings and sweet soup desserts, smoothies and other desserts. Taro is used in the Tết dessert chè khoai môn, a sticky rice pudding with taro roots. The stems are also used in soups such as canh chua. Another is called khoai sọ, which is smaller than khoai môn. Another common taro plant grows roots in shallow waters and grows stems and leaves above the surface of the water. This taro plant has saponin-like substances that cause a hot, itchy feeling in the mouth and throat. Northern farmers used to plant them and cook the stems and leaves to feed their hogs, as the plants re-grew quickly from their roots; after cooking, the saponin in the soup of taro stems and leaves is reduced to a level the hogs can eat. Today this practice is no longer common in Vietnamese agriculture. These taro plants are commonly called khoai ngứa, which literally means "itchy potato". South Asia Taro roots are commonly known as arbi or arvi in Urdu and Hindi. It is a common dish in northern India and Pakistan. Arbi gosht is a tangy curry in which mutton and taro are cooked with whole spices and tomatoes. Bangladesh In Bangladesh, taro is a very popular vegetable known as kochu (কচু) or mukhi (মুখি); in the Sylheti language, it is called mukhi. It is usually cooked with small prawns or the ilish fish into a curry, but some dishes are cooked with dried fish. Its green leaves, kochu pata (কচু পাতা), and stem, kochu (কচু), are also eaten as a favorite dish, usually ground to a paste or finely chopped to make shak, which must be boiled well beforehand. Taro stolons or stems, kochur loti (কচুর লতি), are also favored by Bangladeshis and cooked with shrimp, dried fish or the head of the ilish fish. Taro is available, either fresh or frozen, in the UK and US in most Asian stores and supermarkets specialising in Sylheti, Bangladeshi or South Asian food.
Also, another variety, called maan kochu, is consumed and is a rich source of vitamins and nutrients. Maan kochu is made into a paste and fried to prepare a food known as kochu bata. India In India, taro or eddoe is a common dish served in many ways. In Gujarat, it is called patar vel or saryia na paan: the green leaves are rolled with a paste of besan (gram flour), salt, turmeric and red chili powder, then steamed, cut into small portions, and sometimes deep-fried. In Mizoram, in north-eastern India, it is called bäl; the leaves, stalks and corms are eaten as dawl bai. The leaves and stalks are often traditionally preserved to be eaten in the dry season as dawl rëp bai. In Assam, a north-eastern state, taro is known as kosu (কচু). Various parts of the plant are eaten in different dishes. The leaf buds, called kosu loti (কচু লতি), are cooked with sour dried fruits called thekera (থেকেৰা), or sometimes eaten alongside tamarind, elephant apple, a small amount of pulses, or fish. Similar dishes are prepared from the long root-like structures called kosu thuri. A sour fried dish is made from its flower (kosu kala). Porridges are made from the corms themselves, which may also be boiled, seasoned with salt and eaten as snacks. In Manipur, another north-eastern state, taro is known as pan; the Kukis call it bal. Boiled bal is eaten as a snack at lunch along with chutney or hot chili flakes, besides being cooked as a main dish along with smoked or dried meat, beans, and mustard leaves. Sun-dried taro leaves are later used in broths and stews. Taro is widely available and is eaten in many forms, either baked, boiled, or cooked into a curry with hilsa or with fermented soybeans called hawai-zaar. The leaves are also used in a special traditional dish called utti, cooked with peas. It is called arbi in Urdu/Hindi and arvi in Punjabi in north India, and kəchu (कचु) in Sanskrit. In Himachal Pradesh, in northern India, taro corms are known as ghandyali, and the plant is known as kachalu in the Kangra and Mandi districts. The dish called patrodu is made using taro leaves rolled with corn or gram flour and boiled in water. Another dish, pujji, is made with mashed leaves and the trunk of the plant, while ghandyali, the taro corms, are prepared as a separate dish. In Shimla, a pancake-style dish called patra or patid is made using gram flour. In Uttarakhand and neighboring Nepal, taro is considered a healthy food and is cooked in a variety of ways. The delicate gaderi (a taro variety) of Kumaon, especially from Lobanj, Bageshwar district, is much sought after. Most commonly it is boiled in tamarind water until tender, then diced into cubes which are stir-fried in mustard oil with fenugreek leaves. Another technique of preparation is boiling it in salt water until it is reduced to a porridge. The young leaves, called gaaba, are steamed, sun-dried, and stored for later use. Taro leaves and stems are pickled. Crushed leaves and stems are mixed with de-husked urad daal (black lentils) and then dried as small balls called badi. These stems may also be sun-dried and stored for later use. On auspicious days, women worship saptarshi ("seven sages") and eat only rice with taro leaves. In Maharashtra, in western India, the leaves, called alu che paana, are de-veined and rolled with a paste of gram flour, seasoned with tamarind paste, red chili powder, turmeric, coriander, asafoetida and salt, and finally steamed.
These can be eaten whole, cut into pieces, or shallow-fried and eaten as a snack known as alu chi wadi. Alu chya panan chi patal bhaji, a lentil and colocasia leaf curry, is also popular. In Goan as well as Konkani cuisine, taro leaves are very popular. A tall-growing variety of taro is extensively used on the western coast of India to make patrode, patrade, or patrada (lit. "leaf-pancake"), a dish made with gram flour, tamarind and other spices; in Gujarat, the same preparation of taro leaves rolled with a spiced gram flour paste, steamed and then fried, is called patar vel or saryia na paan. Sindhis call it kachaloo; they fry it, compress it, and re-fry it to make a dish called tuk, which complements Sindhi curry. In Kerala, a state in southern India, taro corms are known as chembu kizhangu (ചേമ്പ് കിഴങ്ങ്) and are a staple food, a side dish, and an ingredient in various side dishes like sambar. As a staple food, it is steamed and eaten with a spicy chutney of green chilies, tamarind, and shallots. The leaves and stems of certain varieties of taro are also used as a vegetable in Kerala. In Dakshin Kannada in Karnataka, it is used as a breakfast dish, either made like fritters or steamed. In Tamil Nadu and Andhra Pradesh, taro corms are known as sivapan-kizhangu (seppankilangu or cheppankilangu), chamagadda, or, in coastal Andhra districts, as chaama dumpa. They can be prepared in a variety of ways, such as by deep-frying the steamed and sliced corms in oil, known as chamadumpa chips, to be eaten on the side with rice, or cooking in a tangy tamarind sauce with spices, onion, and tomato. In the east Indian state of West Bengal, taro corms are thinly sliced and fried to make chips called kochu bhaja (কচু ভাজা). The stem is used to cook kochur saag (কচুর শাগ) with fried hilsha (ilish) head or boiled chhola (chickpeas), often eaten as a starter with hot rice. The corms are also made into a paste with spices and eaten with rice. The most popular dish is a spicy curry made with prawns and taro corms. Gathi kochu (গাঠি কচু), a taro variety, is very popular and used to make a thick curry called gathi kochur dal (গাঠি কচুর ডাল). A dry curry of kochur loti (কচুর লতি, taro stolons) is another popular dish, usually prepared with poppy seeds and mustard paste. Leaves and corms of shola kochu (শলা কচু) and maan kochu (মান কচু) are also used to make some popular traditional dishes. In Mithila, Bihar, taro corms are known as ədua (अडुआ) and its leaves are called ədikunch ke paat (अड़िकंच के पात). A curry of taro leaves is made with mustard paste and sour sun-dried mango pulp (आमिल). In Odisha, taro corms are known as saru. Dishes made of taro include saru besara (taro in mustard and garlic paste). It is also an indispensable ingredient in preparing dalma, an Odia cuisine staple (vegetables cooked with dal). Sliced taro corms, deep-fried in oil and mixed with red chili powder and salt, are known as saru chips. Maldives Ala was widely grown in the southern atolls of Addu Atoll, Fuvahmulah, Huvadhu Atoll, and Laamu Atoll, and is considered a staple even after rice was introduced. Ala and olhu ala are still widely eaten all over the Maldives, cooked or steamed with salt to taste, and eaten with grated coconut along with chili paste and fish soup. It is also prepared as a curry. The corms are sliced and fried to make chips, and are also used to prepare varieties of sweets. Nepal Taro is grown in the Terai and the hilly regions of Nepal.
The root (corm) of taro is known as pindalu (पिँडालु), and the petioles with leaves are known as karkalo (कर्कलो) or gava (गाभा), and as kaichu (कैचु) in Maithili. Almost all parts of the plant are eaten in different dishes. Boiled taro corms are commonly served with salt, spices, and chilies. Taro is a popular dish in the hilly region. Chopped leaves and petioles are mixed with urad bean flour to make dried balls called maseura (मस्यौरा). Large taro leaves serve as makeshift umbrellas when unexpected rain occurs. Long-standing attachment to taro is reflected in popular culture, such as in songs and textbooks. Jivan hamro karkala ko pani jastai ho (जीवन हाम्रो कर्कलाको पानी जस्तै हो) means "Our life is as vulnerable as water resting on a taro leaf". Taro is cultivated and eaten by the Tharu people in the Inner Terai as well. Roots are mixed with dried fish and turmeric, then dried in cakes called sidhara, which are curried with radish, chili, garlic and other spices to accompany rice. The Tharu prepare the leaves in a fried vegetable side-dish that also shows up in Maithili cuisine. Pakistan In Pakistan, taro (eddoe or arvi) is a very common dish served with or without gravy; a popular dish is arvi gosht, which includes beef, lamb or mutton. The leaves are rolled with a gram flour batter and then fried or steamed to make a dish called pakora, which is finished by tempering with red chilies and carom (ajwain) seeds. Taro (arvi) is also cooked with chopped spinach; the resulting dish, arvi palak, is the second most renowned taro dish. Sri Lanka Many varieties are recorded in Sri Lanka; several are edible, but most are toxic to humans and are therefore not grown. Edible varieties (such as kiri ala, kolakana ala, gahala, and sevel ala) are grown for their corms and leaves. Sri Lankans eat the corms after boiling them or making them into a curry with coconut milk. The leaves of some varieties, such as kolakana ala and kalu alakola, are eaten. Middle East and Europe Taro was consumed by the early Romans in much the same way the potato is today. They called this root vegetable colocasia. The Roman cookbook Apicius mentions several methods for preparing taro, including boiling, preparing with sauces, and cooking with meat or fowl. After the fall of the Roman Empire, the use of taro dwindled in Europe, largely due to the decline of trade and commerce with Egypt, previously controlled by Rome. When the Spanish and Portuguese sailed to the New World, they brought taro along with them. Recently there has been renewed interest in exotic foods, and consumption is increasing. Cyprus In Cyprus, taro has been in use since the time of the Roman Empire. Today it is known as kolokas in Turkish or kolokasi (κολοκάσι) in Greek, which comes from the Ancient Greek name κολοκάσιον (kolokasion) for lotus root. It is usually sauteed with celery and onion with pork, chicken or lamb, in a tomato sauce; a vegetarian version is also available. The cormlets are called poulles (sing. poulla); they are first sauteed, then the vessel is deglazed with dry red wine and coriander seeds, and they are finally served with freshly squeezed lemon. Greece In Greece, taro grows on Icaria. Icarians credit taro for saving them from famine during World War II. They boil it until tender and serve it as a salad. Lebanon In Lebanon, taro is known as kilkass and is grown mainly along the Mediterranean coast.
The leaves and stems are not consumed in Lebanon, and the variety grown produces round to slightly oblong tubers that vary in size from a tennis ball to a small cantaloupe. Kilkass is a very popular winter dish in Lebanon and is prepared in two ways: kilkass with lentils, a stew flavored with crushed garlic and lemon juice, and ’il’as bi-tahini (’il’as being a Lebanese pronunciation of the vegetable's Arabic name). Another common preparation is to boil the taro, peel it and cut it into thick slices, then fry the slices and marinate them in edible "red" sumac. In northern Lebanon, it is treated much like a potato under the name borshoushi (el-orse borshushi). It is also prepared as part of a lentil soup with crushed garlic and lemon juice. Also in the north, it is known by the name bouzmet, mainly around Menieh, where it is first peeled and left to dry in the sun for a couple of days. After that, it is stir-fried in plenty of vegetable oil in a casserole until golden brown; a large amount of wedged, softened onions is then added, along with water, chickpeas and some seasoning. These are all left to simmer for a few hours, and the result is a stew-like dish. It is considered a hard-to-make delicacy, not only because of the tedious preparation but also because of the consistency and flavour that the taro must reach. The smaller variety of taro is more popular in the north due to its tenderness. Portugal In the Azores taro is known as inhame or inhame-coco and is commonly steamed with potatoes, vegetables and meats or fish. The leaves are sometimes cooked into soups and stews. It is also consumed as a dessert after first being steamed and peeled, then fried in vegetable oil or lard, and finally sprinkled with sugar, cinnamon and nutmeg. Taro grows abundantly in the fertile land of the Azores, as well as in creeks that are fed by mineral springs. Through migration to other countries, the inhame is found in the Azorean diaspora. Turkey Taro is grown on the south coast of Turkey, especially in Mersin, Bozyazı, Anamur and Antalya. It is boiled in a tomato sauce or cooked with meat, beans and chickpeas. It is often used as a substitute for potato. Africa Egypt In Egypt, taro is known as qolqas. The corms are larger than those found in North American supermarkets. After being peeled completely, it is cooked in one of two ways: cut into small cubes and cooked in broth with fresh coriander and chard and served as an accompaniment to meat stew, or sliced and cooked with minced meat and tomato sauce. Canary Islands Taro has remained popular in the Canary Islands, where it is known as ñame and is often used in thick vegetable stews, like potaje de berros (cress potage), or simply boiled and seasoned with mojo or honey. In Canarian Spanish the word ñame refers to taro, while in other varieties of Castilian it normally designates yams. East Africa In Kenya, Uganda and Tanzania, taro is commonly known as arrow root or yam in English, and as amayuni (plural) or ejjuni (singular), among other names, in local Bantu languages. There are several varieties, and each variety has its own local name. It is usually boiled and eaten with tea or other beverages, or as the main starch of a meal. It is also cultivated in Madagascar, Malawi, Mozambique, and Zimbabwe. South Africa It is known by distinct singular and plural names in the Zulu language of Southern Africa. West Africa Taro is consumed as a staple crop in West Africa, particularly in Ghana, Nigeria and Cameroon.
It is called cocoyam in Nigeria, Ghana and Anglophone Cameroon, macabo in Francophone Cameroon, mbálá ya makoko in the Democratic Republic of the Congo and the Republic of the Congo, and mankani in Hausa; it also has its own names in Yoruba and Igbo. Cocoyam is often boiled, fried, or roasted and eaten with a sauce. In Ghana, it substitutes for plantain in making fufu when plantains are out of season. It is also cut into small pieces to make a soupy baby food and appetizer called mpotompoto. It is also common in Ghana to find cocoyam chips (deep-fried slices). Cocoyam leaves, locally called kontomire in Ghana, are a popular vegetable for local sauces such as palaver sauce and egusi/agushi stew. It is also commonly consumed in Guinea and parts of Senegal, as a leaf sauce or as a vegetable side, and is referred to as jaabere in the local Pulaar dialect. Americas Brazil In Lusophone countries, inhame (literally "yam") and cará are the common names for various plants with edible parts of the genera Alocasia, Colocasia (family Araceae) and Dioscorea (family Dioscoreaceae), and for their respective starchy edible parts, generally tubers. The exception is Dioscorea bulbifera, called cará-moela (literally "gizzard yam") in Brazil, which is never deemed an inhame. Definitions of what constitutes an inhame and a cará vary regionally, but the common understanding in Brazil is that carás are potato-like in shape, while inhames are more oblong. In the Brazilian Portuguese of the hotter and drier Northeastern region, both inhames and carás are called batata (literally, "potato"). For differentiation, potatoes are called batata-inglesa (literally, "English potato"), a name used in other regions and sociolects to differentiate them from the batata-doce ("sweet potato"); both names are ironic, since the two crops were first cultivated by the indigenous peoples of South America, their native continent, and only later introduced to Europe by the colonizers. Taros are often prepared like potatoes, eaten boiled, stewed or mashed, generally with salt and sometimes garlic as a condiment, as part of a meal (most often lunch or dinner). Central America In Belize, Costa Rica, Nicaragua and Panama, taro is eaten in soups, as a replacement for potatoes, and as chips. It is known locally as malanga (also malanga coco), a word of Bantu origin, and dasheen in Belize and Costa Rica, quiquizque in Nicaragua, and as otoe in Panama. Haiti In Haiti, it is usually called malanga, or taro. The corm is grated into a paste and deep-fried to make a fritter called acra, a very popular street food in Haiti. Jamaica In Jamaica, taro is known as coco, cocoyam and dasheen. Corms with flesh that is white throughout are referred to as minty-coco. The leaves are also used to make pepper pot soup, which may include callaloo. Suriname In Suriname it is called tayer, taya, pomtayer or pongtaya. The taro root is called aroei by the indigenous Surinamese and is commonly known as "Chinese tayer". The variety known as eddoe is also called Chinese tayer. It is a popular cultivar among the Maroon population in the interior, in part because it is not adversely affected by high water levels. The dasheen variety, commonly planted in swamps, is rare, although appreciated for its taste. The closely related Xanthosoma species is the base for the popular Surinamese dish pom. The cooked taro leaf (taya-wiri, or tayerblad) is also a well-known green leafy vegetable. Trinidad and Tobago In Trinidad and Tobago, it is called dasheen.
The leaves of the taro plant are used to make the Trinidadian variant of the Caribbean dish known as callaloo (which is made with okra, dasheen/taro leaves, coconut milk or cream and aromatic herbs), and they are also prepared similarly to steamed spinach. The root of the taro plant is often served boiled, accompanied by stewed fish or meat, curried (often with peas) and eaten with roti, or in soups. The leaves are also sauteed with onions, hot pepper and garlic until they are melted, to make a dish called "bhaji". This dish is popular with Indo-Trinidadian people. The leaves are also fried in a split pea batter to make "saheena", a fritter of Indian origin. United States Taro has been grown for centuries in the United States. In 1791, William Bartram observed South Carolina Sea Islands residents eating roasted roots of the plant, which they called tanya, and by the 19th century it was common as a food crop from Charleston to Louisiana. In the 1920s, dasheen, as it was known, was highly touted by the Secretary of the Florida Department of Agriculture as a valuable crop for growth in muck fields. Fellsmere, Florida, near the east coast, was a farming area deemed perfect for growing dasheen. It was used in place of potatoes and dried to make flour. Dasheen flour was said to make excellent pancakes when mixed with wheat flour. Poi is a Hawaiian cuisine staple food made from taro. Traditional poi is produced by mashing the cooked corms on a wooden pounding board with a carved pestle made from basalt, calcite, coral, or wood; modern methods use an industrial food processor to produce large quantities for retail distribution. Water is added to the paste during mashing, and again just before eating, to achieve the desired consistency, which can range from highly viscous to liquid. In Hawaii, this is informally classified as either "one-finger", "two-finger", or "three-finger", alluding to how many fingers are required to scoop it up (the thicker the poi, the fewer fingers required to scoop a sufficient mouthful). Since the late 20th century, taro chips have been available in many supermarkets and natural food stores, and taro is often used in Chinese cuisine in American Chinatowns. Venezuela In Venezuela, taro is called ocumo chino or chino and used in soups and sancochos. Soups contain large chunks of several kinds of tubers, including ocumo chino, especially in the eastern part of the country, where West Indian influence is present. It is also used to accompany meats in parrillas (barbecue) or fried cured fish where yuca is not available. Ocumo is an indigenous name; chino means "Chinese", an adjective for produce that is considered exotic. Ocumo without the Chinese denomination is a tuber from the same family, but without taro's purplish interior color. Ocumo is the Venezuelan name for malanga, so ocumo chino means "Chinese malanga". Taro is always prepared boiled; no porridge form is known in the local cuisine. West Indies In the English-speaking countries of the West Indies, taro is called dasheen, in contrast to the smaller variety of corms called eddo or tanya, and it is cultivated and consumed as a staple crop in the region. There are differences among the roots mentioned above: taro or dasheen is mostly blue when cooked, tanya is white and very dry, and eddoes are small and very slimy.
In the Spanish-speaking West Indies, taro is called ñame, the Portuguese variant of which (inhame) is used in former Portuguese colonies where taro is still cultivated, including the Azores and Brazil. In Puerto Rico, Cuba, and the Dominican Republic it is sometimes called malanga or yautia. In some countries, such as Trinidad and Tobago, Saint Vincent and the Grenadines, and Dominica, the leaves and stem of the dasheen, or taro, are most often cooked and pureed into a thick liquid called callaloo, which is served as a side dish similar to creamed spinach. Callaloo is sometimes prepared with crab legs, coconut milk, pumpkin, and okra. It is usually served alongside rice or made into a soup along with various other roots. Ornamental Taro is also sold as an ornamental plant, often under the name elephant ears. It can be grown indoors or outdoors with high humidity. In the UK, it has gained the Royal Horticultural Society's Award of Garden Merit. Laboratory It is also used for anthocyanin study experiments, especially with reference to abaxial and adaxial anthocyanic concentration. A recent study has revealed honeycomb-like microstructures on the taro leaf that make the leaves superhydrophobic; the contact angle measured on the leaf in this study is around 148°. According to Melissa K. Nelson's article Protecting the Sanctity of Native Foods, scientists at the University of Hawaii attempted to patent and genetically alter taro before being dissuaded by activists and farmers: "In 2006, the University of Hawaii withdrew its patents on the three varieties and agreed to stop genetically modifying Hawaii forms of taro. Researchers continue to experiment with modifying a Chinese form of taro, however." In culture Taro plants are mentioned in the Meitei mythology and folklore of Manipur. One significant instance is a Meitei folktale in which an old man and an old woman were deceived by monkeys about how to plant taro. Following the monkeys' advice, the old couple peeled the best tubers of their plants, boiled them in a pot until softened, let them cool, wrapped them in banana leaves and buried them in the soil. In the middle of the night, the monkeys secretly came into the farm, ate all the well-cooked tubers, and planted inedible giant wild taro in their place. In the morning, the old couple were amazed to see fully grown plants just one day after the tubers were planted. Unaware of the monkeys' trick, they cooked and ate the inedible wild taro and, as a result, suffered an unbearable tingling sensation in their throats. Native Hawaiians believe that the taro plant (kalo) grew out of the still-born body of one of the first two humans conceived by the gods Hoʻohokukalani and Wākea; it is thus connected to humans by more than the sustenance it provides, and it is often a part of sacred offerings given in ceremonies.
Biology and health sciences
Monocots
null
1634969
https://en.wikipedia.org/wiki/Astacus%20astacus
Astacus astacus
Astacus astacus, the European crayfish, noble crayfish, or broad-fingered crayfish, is the most common species of crayfish in Europe, and a traditional food source. Like other true crayfish, A. astacus is restricted to fresh water, living only in unpolluted streams, rivers, and lakes. It is found from France throughout Central Europe, to the Balkan Peninsula, and north as far as Scandinavia and Finland, as well as Eastern Europe. Males may grow up to 16 cm long, and females up to 12 cm. Ecology European crayfish feed on worms, aquatic insects, molluscs, and plants. They are nocturnal, spending the day resting in a burrow. They prefer habitats with high levels of shelter availability. The waters they are found in tend to be soft-bottomed with some sand, and they do not tend to be found in muddy water. A. astacus becomes sexually mature after three to four years and a series of moults, and breeds in October and November. Fertilised eggs are carried by the female, attached to her pleopods, until the following May, when they hatch and disperse. The main predators of A. astacus, both as juveniles and adults, are European mink, eels, perch, pike, Eurasian otters, and muskrats. There is also some risk of predation through cannibalism. A. astacus is sensitive to drops in the oxygen level of the water it inhabits, which makes it particularly vulnerable to eutrophication; however, it is capable of tolerating lower calcium levels than most other species of crayfish. A. astacus is regarded as a keystone species in the environments it inhabits. Crayfish are an important part of the freshwater food web, as they provide a source of food to many aquatic species and boost primary productivity by foraging on freshwater plants. The loss of crayfish in a freshwater environment is known to cause excessive macrophyte growth, which can contribute to eutrophication and an overall degradation in water quality. Consumption This species was once abundant in Europe, although it was expensive to buy, and is considered to be the finest edible crayfish. It is, however, susceptible to the crayfish plague carried by the invasive North American signal crayfish (Pacifastacus leniusculus), and so is listed as a vulnerable species on the IUCN Red List. Since the introduction of the plague, A. astacus has dropped to about 5% of its former population. Documentation of the consumption of A. astacus dates back to the Middle Ages, when it was popular among the Swedish nobility, spreading to all social classes by the 17th and 18th centuries due to its ready availability. The crayfish are collected from the wild in traps, a practice which is being replaced by more intensive aquaculture of the signal crayfish in man-made ponds. The consumption of crayfish is an important part of traditional Nordic culture, including the crayfish party, a feast to mark the end of summer. Hundreds of lakes, large and small, were once found in northern Moldavia, used for raising A. astacus for consumption during the extended fasting periods of the Orthodox Christian calendar. The area of the former Dorohoi County was one such area, and this legacy was visible in the county's historical coat of arms, which featured an A. astacus. Astacin Astacins are a family of digestive enzymes, discovered in the 1990s, which were first isolated from A. astacus. More than 20 enzymes of this group have since been discovered in animals ranging from Hydra to humans.
Biology and health sciences
Crayfishes and lobsters
Animals
1635424
https://en.wikipedia.org/wiki/Asiatic%20lion
Asiatic lion
The Asiatic lion is a lion population of the subspecies Panthera leo leo. Until the 19th century, it occurred in Saudi Arabia, eastern Turkey, Iran, Mesopotamia, and from east of the Indus River in Pakistan to the Bengal region and the Narmada River in Central India. Since the turn of the 20th century, its range has been restricted to Gir National Park and the surrounding areas in the Indian state of Gujarat. The first scientific description of the Asiatic lion was published in 1826 by the Austrian zoologist Johann N. Meyer, who named it Felis leo persicus. The population has steadily increased since 2010. In 2015, the 14th Asiatic Lion Census was conducted over a wide area; the lion population was estimated at 523 individuals, and in 2017 at 650 individuals. Taxonomy Felis leo persicus was the scientific name proposed by Johann N. Meyer in 1826, who described an Asiatic lion skin from Persia. In the 19th century, several zoologists described lion specimens from other parts of Asia under names that are now considered synonyms of P. l. persica: Felis leo bengalensis, proposed by Edward Turner Bennett in 1829, was a lion kept in the menagerie of the Tower of London; Bennett's essay contains a drawing titled 'Bengal lion'. Felis leo goojratensis, proposed by Walter Smee in 1833, was based on two skins of maneless lions from Gujarat that Smee exhibited at a meeting of the Zoological Society of London. Leo asiaticus, proposed by Sir William Jardine, 7th Baronet, in 1834, was a lion from India. Felis leo indicus, proposed by Henri Marie Ducrotay de Blainville in 1843, was based on an Asiatic lion skull. In 2017, the Asiatic lion was subsumed under P. l. leo due to close morphological and molecular genetic similarities with Barbary lion specimens. However, several scientists continue using P. l. persica for the Asiatic lion. A standardised haplogroup phylogeny supports the view that the Asiatic lion is not a distinct subspecies, but rather represents a haplogroup of the northern P. l. leo. Evolution Lions first left Africa at least 700,000 years ago, giving rise to the Eurasian Panthera fossilis, which later evolved into Panthera spelaea (commonly known as the cave lion), which became extinct around 14,000 years ago. Genetic analysis of P. spelaea indicates that it represented a species distinct from the modern lion, having diverged from the modern lion lineage around 500,000 years ago, and that it was not closely related to modern Asiatic lions. Pleistocene fossils assigned as belonging or probably belonging to the modern lion have been reported from several sites in the Middle East, such as Shishan Marsh in the Azraq Basin, Jordan, dating to around 250,000 years ago, and Wezmeh Cave in the Zagros Mountains of western Iran, dating to around 70,000–10,000 years ago, with other reports from Pleistocene deposits in Nadaouiyeh Ain Askar and Douara Cave, Syria. In 1976, fossil lion remains were reported from Pleistocene deposits in West Bengal. A fossil carnassial excavated from Batadomba Cave indicates that lions inhabited Sri Lanka during the Late Pleistocene. This population may have become extinct around 39,000 years ago, before the arrival of humans in Sri Lanka. Phylogeography Results of a phylogeographic analysis based on mtDNA sequences of lions from across the global range, including now-extinct populations such as Barbary lions, indicate that sub-Saharan African lions are phylogenetically basal to all modern lions. These findings support an African origin of modern lion evolution, with a probable centre in East and Southern Africa.
It is likely that lions migrated from there to West Africa, eastern North Africa and, via the periphery of the Arabian Peninsula, into Turkey, southern Europe and northern India during the last 20,000 years. The Sahara, the Congolian rainforests and the Great Rift Valley are natural barriers to lion dispersal. Genetic markers of 357 samples from captive and wild lions from Africa and India were examined. Results indicate four lineages of lion populations: one in Central and North Africa to Asia, one in Kenya, one in Southern Africa, and one in Southern and East Africa; the first wave of lion expansion probably occurred about 118,000 years ago from East Africa into West Asia, and the second wave in the late Pleistocene or early Holocene periods from Southern Africa towards East Africa. The Asiatic lion is genetically closer to North and West African lions than to the group comprising East and Southern African lions. The two groups probably diverged about 186,000–128,000 years ago. It is thought that the Asiatic lion remained connected to North and Central African lions until gene flow was interrupted by the extinction of lions in Western Eurasia and the Middle East during the Holocene. Asiatic lions are less genetically diverse than African lions, which may be the result of a founder effect in the recent history of the remnant population in the Gir Forest. Characteristics The Asiatic lion's fur ranges in colour from ruddy-tawny, heavily speckled with black, to sandy or buffish grey, sometimes with a silvery sheen in certain lighting. Males have only moderate mane growth at the top of the head, so that their ears are always visible; the mane is scanty and short on the cheeks and throat. About half of Asiatic lion skulls from the Gir forest have divided infraorbital foramina, whereas African lions have only one foramen on either side. The sagittal crest is more strongly developed, and the post-orbital area is shorter than in African lions. Skulls of adult males are longer than those of females. The Asiatic lion further differs from the African lion by a larger tail tuft and less inflated auditory bullae. The most striking morphological character of the Asiatic lion is a longitudinal fold of skin running along its belly. Males are taller at the shoulder and heavier than females. The Gir lion is similar in size to the Central African lion, and smaller than large African lions. Manes Colour and development of manes in male lions varies between regions, among populations and with the age of lions. In general, the Asiatic lion differs from the African lion by a less developed mane. The manes of most lions in ancient Greece and Asia Minor were also less developed and did not extend to below the belly, sides or ulnas. Lions with such smaller manes were also known in the Syrian region, the Arabian Peninsula and Egypt. Exceptionally sized lions A number of exceptionally large Asiatic lions have been recorded. Emperor Jahangir allegedly speared an enormous lion in the 1620s. In 1841, the English traveller Austen Henry Layard accompanied hunters in Khuzestan, Iran, and sighted a lion which "had done much damage in the plain of Ram Hormuz" before one of his companions killed it. He described it as being "unusually large and of very dark brown colour", with some parts of its body being almost black.
In 1935, a British admiral claimed to have sighted a maneless lion near Quetta in Pakistan. He wrote: "It was a large lion, very stocky, light tawny in colour, and I may say that no one of us three had the slightest doubt of what we had seen until, on our arrival at Quetta, many officers expressed doubts as to its identity, or to the possibility of there being a lion in the district." Distribution and habitat In Saurashtra's Gir forest, a large area was declared a sanctuary for Asiatic lion conservation in 1965. This sanctuary and the surrounding areas are the only habitats supporting the Asiatic lion. After 1965, a national park was established covering a core area where human activity is not allowed; in the surrounding sanctuary, only Maldharis have the right to take their livestock for grazing. Lions inhabit remnant forest habitats in the two hill systems of Gir and Girnar that comprise Gujarat's largest tracts of tropical and subtropical dry broadleaf forests, thorny forest and savanna, and provide valuable habitat for a diverse flora and fauna. Five protected areas currently exist to protect the Asiatic lion: Gir Sanctuary, Gir National Park, Pania Sanctuary, Mitiyala Sanctuary, and Girnar Sanctuary. The first three protected areas form the Gir Conservation Area, a large forest block that represents the core habitat of the lion population. The other two sanctuaries, Mitiyala and Girnar, protect satellite areas within dispersal distance of the Gir Conservation Area. The nearby Barda Wildlife Sanctuary is being developed as an additional refuge to serve as an alternative home for lions. The drier eastern part of the region is vegetated with acacia thorn savanna and receives less annual rainfall than the west. The lion population recovered from the brink of extinction to 411 individuals by 2010. In that year, approximately 105 lions lived outside the Gir forest, representing a quarter of the entire lion population. Dispersing sub-adults established new territories outside their natal prides, and as a result the satellite lion population has been increasing since 1995. By 2015, the total population had grown to an estimated 523 individuals inhabiting the Saurashtra region, comprising 109 adult males, 201 adult females and 213 cubs. The Asiatic Lion Census conducted in 2017 revealed about 650 individuals. By 2020, at least six satellite populations had spread to eight districts in Gujarat, living in human-dominated areas outside the protected area network; 104 of these lions lived near the coastline. Lions living along the coast, as well as those between the coastline and the Gir forest, have larger individual ranges. Former range During the Holocene, from around 6,500 years ago and possibly as early as 8,000 years ago, modern lions colonised Southeast Europe (including modern Bulgaria and Greece in the Balkans), as well as parts of Central Europe such as Hungary, and Ukraine in Eastern Europe. Analysis of remains of these European lions suggests that they do not differ from modern Asiatic lions, and that they should be assigned to this population. Historical records suggest that lions became extinct in Europe during Classical antiquity, though it has been suggested that they may have survived as late as the Middle Ages in Ukraine. The Asiatic lion also used to occur in Arabia, the Levant, Mesopotamia and Baluchistan. In South Caucasia, it had been known since the Holocene and became extinct in the 10th century.
Until the middle of the 19th century, it survived in regions adjoining Mesopotamia and Syria, and was still sighted in the upper reaches of the Euphrates River in the early 1870s. By the late 19th century, it had become extinct in Saudi Arabia and Turkey. The last known lion in Iraq was killed on the lower Tigris in 1918. Historical records in Iran indicate that it ranged from the Khuzestan Plain to Fars province, at low elevations in steppe vegetation and pistachio-almond woodlands. It was once widespread in the country, but by the 1870s it was sighted only on the western slopes of the Zagros Mountains and in the forest regions south of Shiraz. It served as the national emblem and appeared on the country's flag. Some of the country's last lions were sighted in 1941 between Shiraz and Jahrom in Fars province, and in 1942 a lion was spotted northwest of Dezful. In 1944, the corpse of a lioness was found on the banks of the Karun River in Iran's Khuzestan province. In India, the Asiatic lion occurred in Sind, Bahawalpur, Punjab, Gujarat, Rajasthan, Haryana, Bihar, and eastward as far as Palamau and Rewa, Madhya Pradesh, in the early 19th century. It once ranged to Bangladesh in the east and up to the Narmada River in the south. Because of the lion's restricted distribution in India, Reginald Innes Pocock assumed that it had arrived from Europe, entering southwestern Asia through Balochistan only recently, before humans started limiting its dispersal in the country. The advent and increasing availability of firearms led to its local extirpation over large areas. Heavy hunting by British colonial officers and Indian rulers caused a steady and marked decline of lion numbers in the country. Lions were exterminated in Palamau by 1814, in Baroda State, Hariana and Ahmedabad district in the 1830s, and in Kot Diji and Damoh district in the 1840s. During the Indian Rebellion of 1857, a British officer shot 300 lions. The last lions of Gwalior and Rewah were shot in the 1860s. One lion was killed near Allahabad in 1866. The last lion of Mount Abu in Rajasthan was spotted in 1872, and by the late 1870s lions were extinct in Rajasthan. By 1880, no lion survived in Guna, Deesa and Palanpur districts, and only about a dozen lions were left in Junagadh district. By the turn of the century, the Gir Forest held the only Asiatic lion population in India, which was protected by the Nawab of Junagarh in his private hunting grounds. Ecology and behaviour Male Asiatic lions are solitary or associate with up to three other males, forming a loose pride. Pairs of males rest, hunt and feed together, and display marking behaviour at the same sites. Females associate with up to twelve other females, forming a stronger pride together with their cubs. They share large carcasses among each other, but seldom with males. Female and male lions usually associate only for a few days when mating, and rarely live and feed together. Results of a radio telemetry study indicate that the annual home ranges of male lions differ between dry and wet seasons; home ranges of females are smaller. During hot and dry seasons, they favour densely vegetated and shady riverine habitats, where prey species also congregate. Coalitions of males defend home ranges containing one or more female prides. Together, they hold a territory for a longer time than single lions. Males in coalitions of three to four individuals exhibit a pronounced hierarchy, with one male dominating the others.
The lions in Gir National Park are active at twilight and by night, showing a high temporal overlap with sambar (Rusa unicolor), wild boar (Sus scrofa) and nilgai (Boselaphus tragocamelus). Feeding ecology In general, lions prefer large prey species, irrespective of their availability. Domestic cattle have historically been a major component of the Asiatic lion's diet in the Gir Forest. Inside Gir Forest National Park, lions predominantly kill chital (Axis axis), sambar deer, nilgai, cattle (Bos taurus), domestic water buffalo (Bubalus bubalis), and, less frequently, wild boar. They most commonly kill chital, a comparatively small prey species. They prey on sambar deer when the latter descend from the hills during summer. Outside the protected area, where wild prey species do not occur, lions prey on water buffalo and cattle, and rarely on dromedary (Camelus dromedarius). They generally kill most prey close to water bodies, charge prey from close range and drag carcasses into dense cover. They regularly visit specific sites within the protected area to scavenge on dead livestock dumped by Maldhari livestock herders. During dry, hot months, they also prey on mugger crocodiles (Crocodylus palustris) on the banks of Kamleshwar Dam. In 1974, the Forest Department estimated the wild ungulate population at 9,650 individuals. In the following decades, this population grew consistently to 31,490 in 1990 and 64,850 in 2010, including 52,490 chital, 4,440 wild boar, 4,000 sambar, 2,890 nilgai, 740 chinkara (Gazella bennetti), and 290 four-horned antelope (Tetracerus quadricornis). In contrast, populations of domestic buffalo and cattle declined following resettlement, largely due to the direct removal of resident livestock from the Gir Conservation Area. The population of 24,250 domestic livestock in the 1970s declined to 12,500 by the mid-1980s, but increased to 23,440 animals in 2010. Following changes in both predator and prey communities, Asiatic lions shifted their predation patterns: today, very few livestock kills occur within the sanctuary, and most occur instead in peripheral villages. Depredation records indicate that in and around the Gir Forest, lions killed on average 2,023 livestock annually between 2005 and 2009, plus an additional 696 individuals in satellite areas. Dominant males consume about 47% more from kills than their coalition partners. Aggression between partners increases when coalitions are large but kills are small. Reproduction Asiatic lions mate mainly in October and November. Mating lasts three to six days, during which the lions usually do not hunt, but only drink water. Gestation lasts about 110 days, and litters comprise one to four cubs. The average interval between births is 24 months, unless cubs die due to infanticide by adult males or due to diseases and injuries. Cubs become independent at the age of about two years. Subadult males leave their natal pride at the age of three years at the latest, and become nomads until they establish their own territory. Dominant males mate more frequently than their coalition partners. In a study carried out between December 2012 and December 2016, three females were observed switching mating partners in favour of the dominant male. Monitoring of more than 70 mating events showed that females mated with males of several rival prides that shared their home ranges, and that these males were tolerant towards the same cubs.
Only new males that entered the female territories killed unfamiliar cubs. Young females mated mostly with males within their home ranges; older females selected males at the periphery of their home ranges. Threats The Asiatic lion currently exists as a single subpopulation, and is thus vulnerable to extinction from unpredictable events, such as an epidemic or a large forest fire. There are indications of poaching incidents in recent years, as well as reports that organized poacher gangs have switched their attention from Bengal tigers to the Gujarat lions. There have also been a number of drowning incidents, after lions fell into wells. Prior to the resettlement of the Maldharis, the Gir forest was heavily degraded and used by livestock, which competed with and restricted the population sizes of native ungulates. Various studies reveal tremendous habitat recovery and increases in wild ungulate populations following the resettlement of the Maldharis since the 1970s. Nearly 25 lions in the vicinity of the Gir Forest were found dead in October 2018. Four of them had died of canine distemper virus, the same virus that had also killed several lions in the Serengeti. Conflicts with humans Since the mid-1990s, the Asiatic lion population has increased to the extent that by 2015 about a third resided outside the protected area. Hence, conflict between local residents and wildlife has also increased. Local people protect their crops from nilgai, wild boar, and other herbivores by using high-voltage electrical fences. Some consider the presence of predators a benefit, as they keep herbivore populations in check, but others fear the lions and have killed several in retaliation for attacks on livestock. In July 2012, a lion dragged a man from the veranda of his house and killed him in an area some distance from Gir Forest National Park. This was the second attack by a lion in the area, six months after a 25-year-old man was attacked and killed in Dhodadar. Conservation Panthera leo persica is included in CITES Appendix I, and is fully protected in India, where it is considered endangered. Reintroduction India In the 1950s, biologists advised the Indian government to re-establish at least one wild population in the Asiatic lion's former range to ensure the population's reproductive health and to prevent it from being affected by an outbreak of an epidemic. In 1956, the Indian Board for Wildlife accepted a proposal by the Government of Uttar Pradesh to establish a new sanctuary for the envisaged reintroduction, the Chandra Prabha Wildlife Sanctuary in eastern Uttar Pradesh, where climate, terrain and vegetation are similar to the conditions in the Gir Forest. In 1957, one male and two female wild-caught Asiatic lions were set free in the sanctuary. This population comprised 11 animals in 1965, but all of them disappeared thereafter. The Asiatic Lion Reintroduction Project, an effort to find an alternative habitat for reintroducing Asiatic lions, was pursued in the early 1990s. Biologists from the Wildlife Institute of India assessed several potential translocation sites for their suitability with regard to existing prey populations and habitat conditions. The Palpur-Kuno Wildlife Sanctuary in northern Madhya Pradesh was ranked as the most promising location, followed by Sita Mata Wildlife Sanctuary and Darrah National Park. By 2000, 1,100 families from 16 villages had been resettled from the Palpur-Kuno Wildlife Sanctuary, and another 500 families from eight villages were expected to be resettled.
With this resettlement scheme, the protected area was expanded. Gujarat state officials resisted the relocation, since it would make the Gir Sanctuary lose its status as the world's only home of the Asiatic lion. Gujarat raised a number of objections to the proposal, and the matter went before the Indian Supreme Court. In April 2013, the Indian Supreme Court ordered the Gujarat state to send some of its Gir lions to Madhya Pradesh to establish a second population there, giving wildlife authorities six months to complete the transfer; the number of lions, and which individuals would be transported, were to be decided later. As of now, the plan to shift lions to Kuno is in jeopardy, with Madhya Pradesh having apparently given up on acquiring lions from Gujarat. Iran In 1977, Iran attempted to restore its lion population by transporting Gir lions to Arzhan National Park, but the project met resistance from the local population and was not implemented. This did not, however, stop Iran from seeking to bring back the lion. In February 2019, the Tehran Zoological Garden obtained a male Asiatic lion from Bristol Zoo in the United Kingdom, followed in June by a female from Dublin Zoo, in the hope that they will reproduce successfully. In captivity Until the late 1990s, captive Asiatic lions in Indian zoos were haphazardly interbred with African lions confiscated from circuses, leading to genetic pollution of the captive Asiatic lion stock. Once this was discovered, it led to the complete shutdown of the European and American endangered-species breeding programs for Asiatic lions, as their founder animals, captive-bred lions originally imported from India, were ascertained to be intraspecific hybrids of African and Asiatic lions. In North American zoos, several Indian-African lion crosses were inadvertently bred, and researchers noted that "the fecundity, reproductive success, and spermatozoal development improved dramatically." DNA fingerprinting studies of Asiatic lions have helped to identify individuals with high genetic variability, which can be used in conservation breeding programs. In 2006, the Central Zoo Authority of India stopped the breeding of Indian-African cross lions, stating that "hybrid lions have no conservation value and it is not worth to spend resources on them". Now only pure native Asiatic lions are bred in India. In 1972, the Sakkarbaug Zoo sold a pair of young pure-stock lions to the Fauna Preservation Society, which decided they would be accommodated at the Jersey Wildlife Trust, where it was hoped a captive breeding programme could begin. The Asiatic lion International Studbook was initiated in 1977, followed in 1983 by the North American Species Survival Plan (SSP). The North American population of captive Asiatic lions was composed of descendants of five founder lions, three of which were pure Asian and two of which were African or African-Asian hybrids. The lions kept in the framework of the SSP consisted of animals with high inbreeding coefficients. In the early 1990s, three European zoos imported pure Asiatic lions from India: London Zoo obtained two pairs, the Zürich Zoologischer Garten one pair, and the Korkeasaari Zoo in Helsinki one male and two females. In 1994, the European Endangered Species Programme (EEP) for Asiatic lions was initiated. The European Association of Zoos and Aquaria (EAZA) published the first European studbook in 1999. By 2005, there were 80 Asiatic lions kept in the EEP, the only captive population outside of India.
As of 2009, more than 100 Asiatic lions were kept within the EEP. The SSP had not resumed; pure-bred Asiatic lions are needed to form a new founder population for breeding in American zoos. In culture South and East Asia Cave paintings of lions found in the Bhimbetka rock shelters in central India are thought to be at least 30,000 years old. The Sanskrit word for 'lion' is siṃha, which is also a name of Shiva and signifies Leo in the Zodiac. The Sanskrit name of Sri Lanka is Sinhala, meaning 'Abode of Lions'. Singapore derives its name from the Malay words singa ('lion') and pura ('city'), which in turn come from the Sanskrit siṃha and pura, the latter also meaning 'fortified town'. In Hindu mythology, the half-man, half-lion avatar Narasimha is the fourth incarnation of Vishnu. Simhamukha is a lion-faced protector and dakini in Tibetan Buddhism. In the 18th book of the Mahabharata, Bharata deprives lions of their prowess. The lion plays a prominent role in The Fables of Pilpay, which were translated into Persian, Greek and Hebrew between the 8th and 12th centuries. The lion is the symbol of Mahavira, the 24th and last Tirthankara in Jainism. The lion is the third animal of the Burmese zodiac and the sixth animal of the Sinhalese zodiac. The earliest known Chinese stone sculptures of lions date to the Han dynasty at the turn of the first millennium. The lion dance is a traditional dance in Chinese culture that is strongly associated with Buddhism and has been known since at least the Han dynasty. Cambodia has a native martial art called Bokator, meaning 'pounding a lion'. West Asia and Europe Lions are depicted on vases dating to about 2600 BCE that were excavated near Lake Urmia in Iran. The lion was an important symbol in ancient Iraq and is depicted in a stone relief at Nineveh in the Mesopotamian Plain. The lion makes repeated appearances in the Bible, most notably as having fought Samson in the Book of Judges. Having occurred in the Arab world, particularly the Arabian Peninsula, the Asiatic lion has significance in Arab and Islamic culture. For example, Surah al-Muddaththir of the Quran criticizes people who were averse to the Islamic Prophet Muhammad's teachings, such as the teaching that the rich have an obligation to donate wealth to the poor, comparing their attitude to the response of prey fleeing from a qaswarah (meaning "lion", "beast of prey", or "hunter"). Other Arabic words for 'lion' include asad and sabaʿ, and they can be used in names of places or as titles of people. An Arabic toponym for the Levantine city of Beersheba can mean "Spring of the Lion." Ali ibn Abi Talib and Hamzah ibn Abdul-Muttalib, who were loyal kinsmen of Muhammad, were given titles like Asad Allah ('Lion of God'). The Lion of Babylon is a statue at the Ishtar Gate in Babylon. The lion has an important association with the figure of Gilgamesh, as demonstrated in his epic. The Iraqi national football team is nicknamed the "Lions of Mesopotamia". The symbol of the lion is closely tied to the Persian people. Achaemenid kings were known to carry the symbol of the lion on their thrones and garments. The name 'Shir' (also pronounced 'Sher'), meaning 'lion', is part of the names of many places in Iran and Central Asia, such as the city of Shiraz and the Sherabad River, and has been adopted into other languages, such as Hindi. The Shir-va-Khorshid ("Lion and Sun") is one of the most prominent symbols of Iran, dating back to the Safavid dynasty, and was used on the flag of Iran until 1979. The lion was an object of hunting in the Caucasus, by both locals and foreigners.
The local rulers there were called 'Shirvanshahs'. The Nemean lion of pre-literate Greek myth is associated with the Labours of Hercules. The "Mari-Cha Lion", a Bronze Age statue of a lion from either southern Italy or southern Spain dating from around 1200–1000 BCE, was exhibited at the Louvre Abu Dhabi.
Biology and health sciences
Felines
Animals
1635762
https://en.wikipedia.org/wiki/Calipers
Calipers
Calipers (or callipers) are an instrument used to measure the linear dimensions of an object or hole, namely the length, width, thickness, diameter or depth. The word "caliper" comes from a corrupted form of "caliber". Many types of calipers permit reading out a measurement on a ruled scale, a dial, or an electronic digital display; the most common association is with calipers using a sliding vernier scale. Some calipers can be as simple as a compass with inward- or outward-facing points but no scale (measurement indication). The tips of the caliper are adjusted to fit across the points to be measured, and then kept at that span while moved to a separate measuring device, such as a ruler. Calipers are used in many fields such as mechanical engineering, metalworking, forestry, woodworking, science and medicine. Terminology Caliper is the American spelling, while calliper (double "L") is the British spelling. A single tool might be referred to as a caliper or as calipers, a plural-only (plurale tantum) form, like scissors or glasses. Colloquially, the phrase "pair of verniers" or just "vernier" might refer to a vernier caliper. In loose colloquial usage, these phrases may also refer to other kinds of calipers, although they involve no vernier scale. In machine-shop usage, the term "caliper" is often used in contradistinction to micrometer, even though outside micrometers are technically a form of caliper; in this usage, caliper implies only the form factor of the instrument. History The earliest known caliper was found in the Greek Giglio wreck near the Italian coast; the wreck dates to the 6th century BC. The wooden piece already featured a fixed and a movable jaw. Although rare finds, calipers remained in use by the Greeks and Romans. A bronze caliper, dating from 9 AD, was used for minute measurements during the Chinese Xin dynasty. The caliper bore an inscription stating that it was "made on the gui-you day, the first day of the first month of the first year of Shijianguo"; it included a "slot and pin" and was "graduated in inches and tenths of an inch." The modern vernier caliper was invented by Pierre Vernier as an improvement on the nonius of Pedro Nunes. Types Inside caliper Inside calipers are used to measure the internal size of an object. The upper caliper in the image (on the right) requires manual adjustment prior to fitting. Fine setting of this caliper type is performed by tapping the caliper legs lightly on a handy surface until they almost pass over the object. A light push against the resistance of the central pivot screw then spreads the legs to the correct dimension and provides the required, consistent feel that ensures a repeatable measurement. The lower caliper in the image has an adjusting screw that permits it to be carefully adjusted without removal of the tool from the workpiece. Outside caliper Outside calipers are used to measure the external size of an object. The same observations and technique apply to this type of caliper as for the inside caliper. With some understanding of their limitations and usage, these instruments can provide a high degree of accuracy and repeatability. They are especially useful when measuring over very large distances; consider the case where calipers are used to measure a large-diameter pipe: a vernier caliper does not have the depth capacity to straddle this large diameter while also reaching the outermost points of the pipe's diameter. Outside calipers are typically made from high-carbon steel.
Divider caliper In the metalworking field, a divider caliper, popularly called a compass, is used to mark out locations. The points are sharpened so that they act as scribers; one leg can be placed in the dimple created by a center or prick punch and the other leg pivoted so that it scribes a line on the workpiece's surface, forming an arc or circle. Their namesake use is in dividing a workpiece of arbitrary width into equal-width sections: the tool is "walked" from one end of the workpiece to the other by pivoting it from one point to the next, and the gap between the points is adjusted until the "walk" ends directly on the end point; equal divisions can thus be marked out without any measuring. A divider caliper is also used to measure the distance between two points on a map. The two caliper ends are brought to the two points whose distance is being measured; the caliper's opening is then either measured on a separate ruler and converted to the actual distance, or measured directly on a scale drawn on the map. On a nautical chart the distance is often measured on the latitude scale appearing on the sides of the map: one minute of arc along any great circle, e.g. any meridian of longitude, is approximately one nautical mile or 1852 meters. Dividers are also used in the medical profession. An ECG (also EKG) caliper transfers distances on an electrocardiogram; in conjunction with the appropriate scale, the heart rate can be determined. A pocket caliper version was invented by the cardiologist Robert A. Mackin. Oddleg caliper Oddleg calipers, hermaphrodite calipers, or oddleg jennys, as pictured on the left, are generally used to scribe a line at a set distance from the edge of a workpiece. The bent leg runs along the workpiece edge while the scriber makes its mark at a predetermined distance, ensuring a line parallel to the edge. In the diagram at left, the uppermost caliper has a slight shoulder in the bent leg, allowing it to sit on the edge more securely. The lower caliper lacks this feature but has a renewable scriber that can be adjusted for wear, as well as being replaced when excessively worn. Vernier caliper The calipers in the diagram show a primary reading on the metric scale of about 2.475 cm (2.4 cm read from the main scale plus about 0.075 cm from the vernier scale). Calipers often have a "zero point error", meaning that the calipers do not read 0.000 cm when the jaws are closed. The zero point error must always be subtracted from the primary reading. Let us assume these calipers have a zero-point error of 0.013 cm; this would give a length reading of 2.462 cm. For any measurement, reporting the error on the measurement is also important. Ignoring the possibility of interpolation by eye, both the primary reading and the zero-point reading are bounded by plus/minus half the length corresponding to the width of the smallest interval on the vernier scale (0.0025 cm). These are "absolute" errors, and absolute errors add, so the length reading is then bounded by plus/minus the length corresponding to the full width of the smallest interval on the vernier scale (0.005 cm). Assuming no systematic errors affect the measurement (the instrument works perfectly), a complete measurement would then read 2.462 cm ± 0.005 cm. The vernier, dial, and digital calipers directly read the distance measured with high accuracy and precision. They are functionally identical, with different ways of reading the result.
These calipers comprise a calibrated scale with a fixed jaw, and another jaw, with a pointer, that slides along the scale. The distance between the jaws is then read in different ways for the three types. The simplest method is to read the position of the pointer directly on the scale. When the pointer is between two markings, the user can mentally interpolate to improve the precision of the reading. This would be a simply calibrated caliper, but the addition of a vernier scale allows more accurate interpolation and is the universal practice; this is the vernier caliper. Vernier, dial, and digital calipers can measure internal dimensions (using the uppermost jaws in the picture at right), external dimensions using the pictured lower jaws, and in many cases depth by the use of a probe that is attached to the movable head and slides along the centre of the body. This probe is slender and can get into deep grooves that may prove difficult for other measuring tools. The vernier scales may include metric measurements on the lower part of the scale and inch measurements on the upper, or vice versa, in countries that use inches. Vernier calipers commonly used in industry provide a precision to 0.01 mm (10 micrometres), or one thousandth of an inch. They are available in sizes that can measure up to 1828 mm (72 in). Dial caliper Instead of using a vernier mechanism, which requires some practice to use, the dial caliper reads the final fraction of a millimeter or inch on a simple dial. In this instrument, a small, precise rack and pinion drives a pointer on a circular dial, allowing direct reading without the need to read a vernier scale. Typically, the pointer rotates once every inch, tenth of an inch, or 1 millimeter. This measurement must be added to the coarse whole inches or centimeters read from the slide. The dial is usually arranged to be rotatable beneath the pointer, allowing for "differential" measurements (the measuring of the difference in size between two objects, or the setting of the dial using a master object and subsequently being able to read directly the plus-or-minus variance in the size of subsequent objects relative to the master object). The slide of a dial caliper can usually be locked at a setting using a small lever or screw; this allows simple go/no-go checks of part sizes. Digital caliper Rather than a rack and pinion, digital calipers use a linear encoder. A liquid-crystal display shows the measurement, which often can switch units between millimeters and fractional or decimal inches. All provide for zeroing the display at any point along the slide, allowing the same sort of differential measurements as with the dial caliper. Digital calipers may contain a "reading hold" feature, allowing the reading of dimensions after use in awkward locations where the display cannot be seen. Like analog calipers, the slide of many digital calipers can be locked using a lever or screw. Resolution and accuracy Ordinary 150 mm (6 in) digital calipers made of stainless steel have a rated accuracy of 0.02 mm (0.001 in) and a resolution of 0.01 mm (0.0005 in). The same technology is used for longer calipers, but accuracy declines to 0.03 mm (0.001 in) for 100–200 mm (4–8 in) and 0.04 mm (0.0015 in) for 200–300 mm (8–12 in) measurements. Measurement method Many digital calipers contain a capacitive linear encoder. 
Inexpensive Chinese models have 56 narrow emitter plates and one long receiver plate etched on the sliding display's printed circuit board, which intersect with a repeating pattern of T-shaped plates in the longer "stator" board. The top of the "T" plates intersect with the receiver plate, while the vertical bars of each "T" intersect with the emitter plates. The pitch of each "T" in the stator is slightly less than 8 times the pitch of each emitter plate, so their intersecting capacitive area is not perfectly aligned but rather forms an interference pattern. As the slider moves, these variable capacitances change in a repeating linear fashion. The slider's circuitry counts these repetitions as it slides and achieves finer resolution using linear interpolation of the capacitances. One model sends 8 periodic pulse-width modulation voltage signals (which appear identical but out of phase by 1/8 of the period), each connected to 7 emitter plates, and the resulting analog signal is read through a single receiver plate. The 1983 German patent DE3340782C2 (see figure) is said to describe the workings. Other digital calipers contain an inductive linear encoder, which allows robust performance in the presence of contamination such as coolants. Magnetic linear encoders are used in yet other digital calipers. Serial data output Digital calipers nowadays offer serial data output to expedite repeated measurements, avoid human error, and allow direct data entry into a digital recorder, spreadsheet, statistical process control program, or similar software on a personal computer. Interfacing devices based on RS-232, Universal Serial Bus, or wireless can be built or purchased. The serial digital output varies among manufacturers, but common options are:
Mitutoyo's Digimatic interface. This is the dominant name-brand interface. Format is 52 bits arranged as 13 nibbles.
Sylvac interface. This is the common protocol for inexpensive, non-name-brand calipers. Format is 24-bit 90 kHz synchronous.
Starrett
Brown & Sharpe
Federal
Tesa
Aldi. Format is 7 BCD digits.
Mahr (Digimatic, RS232C, Wireless FM Radio, Infrared and USB)
Micrometer screw caliper A caliper using a calibrated screw for measurement, rather than a slide, is called an external micrometer caliper gauge, a micrometer caliper or, more often, simply a micrometer. (Sometimes the term caliper, referring to any other type in this article, is held in contradistinction to micrometer.) Comparison Each of the above types of calipers has its relative merits and faults. Vernier calipers are rugged and have long-lasting accuracy, are coolant proof, are not affected by magnetic fields, and are largely shockproof. They may have both centimeter and inch scales. However, vernier calipers require good eyesight or a magnifying glass to read and can be difficult to read from a distance or from awkward angles. It is relatively easy to misread the last digit. In production environments, reading vernier calipers all day long is error-prone and is annoying to the workers. Dial calipers are comparatively easy to read, especially when seeking the exact center by rocking and observing the needle movement. They can be set to 0 at any point for comparisons. They are usually fairly susceptible to shock damage. They are also very prone to getting dirt in the gears, which can cause accuracy problems. Digital calipers switch easily between centimeter and inch systems. 
They can be set to zero easily at any point with a full count in either direction and can take measurements even if the display is completely hidden, either by using a "hold" key, or by zeroing the display and closing the jaws, showing the correct measurement, but negative. They can be mechanically and electronically fragile. Most also require batteries and do not resist coolant well. They are also only moderately shockproof and can be vulnerable to dirt. Calipers may read to a resolution of 0.01 mm or 0.0005 in, but accuracy may not be better than about ±0.02 mm or 0.001 in for 150 mm (6 in) calipers, and worse for longer ones. Use A caliper must be properly applied against the part in order to take the desired measurement. For example, when measuring the thickness of a plate, a vernier caliper must be held at right angles to the piece. Some practice may be needed to measure round or irregular objects correctly. Accuracy of measurement when using a caliper is highly dependent on the skill of the operator. Regardless of type, a caliper's jaws must be forced into contact with the part being measured. As both part and caliper are always to some extent elastic, the amount of force used affects the indication. A consistent, firm touch is correct. Too much force results in an under indication as part and tool distort; too little force gives insufficient contact and an over indication. This is a greater problem with a caliper incorporating a wheel, which lends mechanical advantage, and is especially the case with digital calipers, calipers out of adjustment, or calipers with a poor quality beam. Simple calipers are uncalibrated; the measurement taken must be compared against a scale. Whether the scale is part of the caliper or not, all analog calipers—verniers and dials—require good eyesight in order to achieve the highest precision. Digital calipers have an advantage in this area. Calibrated calipers may be mishandled, leading to loss of zero. When a caliper's jaws are fully closed, it should, of course, indicate zero. If it does not, it must be recalibrated or repaired. A vernier caliper does not easily lose its calibration, but a sharp impact or accidental damage to the measuring surface in the caliper jaw can be significant enough to displace zero. Digital calipers have zero set buttons for quick recalibration. Vernier, dial and digital calipers can be used with accessories that extend their usefulness. Examples are a base that extends their usefulness as a depth gauge and a jaw attachment that allows measuring the center distance between holes. Since the 1970s, a clever modification of the moveable jaw on the back side of any caliper allows for step or depth measurements in addition to external caliper measurements, similarly to a universal micrometer (e.g., Starrett Mul-T-Anvil or Mitutoyo Uni-Mike). Zero error The method to use a vernier scale or caliper with zero error is to use the formula "actual reading = main scale + vernier scale − (zero error)". Zero error may arise due to knocks that affect the calibration at 0.00 mm when the jaws are perfectly closed or just touching each other. Positive zero error refers to the fact that when the jaws of the vernier caliper are just closed, the reading is a positive reading away from the actual reading of 0.00 mm. If the reading is 0.10 mm, the zero error is referred to as +0.10 mm. 
Negative zero error refers to the fact that when the jaws of the vernier caliper are just closed, the reading is a negative reading away from the actual reading of 0.00 mm. If the reading is −0.08 mm, the zero error is referred to as −0.08 mm. Abbe error Calipers with measurement axes displaced from the object being measured suffer from Abbe error if the jaws are not perpendicular due to manufacturing tolerances. Unlike zero error, the amount of Abbe error depends on the offset.
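To make the zero-error rule concrete, here is a minimal Python sketch of the worked vernier example above. The function name is illustrative, not part of any caliper library, and the numbers are the values used in the text.

```python
def corrected_reading(primary_cm: float, zero_error_cm: float) -> float:
    """Apply the rule: actual reading = primary reading - zero error.

    A positive zero error (jaws closed but reading > 0.000) is subtracted;
    a negative zero error is likewise subtracted, which adds its magnitude.
    """
    return primary_cm - zero_error_cm

# Values from the example above: 2.4 cm main scale + 0.075 cm vernier,
# with a zero-point error of +0.013 cm.
primary = 2.4 + 0.075            # 2.475 cm
zero_error = 0.013               # cm
length = corrected_reading(primary, zero_error)

# Each of the two readings is bounded by +/- half the smallest vernier
# interval (0.0025 cm); the two absolute errors add to +/- 0.005 cm.
half_interval = 0.005 / 2
uncertainty = 2 * half_interval

print(f"{length:.3f} cm +/- {uncertainty:.3f} cm")   # 2.462 cm +/- 0.005 cm
```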
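The capacitive encoder described earlier (coarse counting of pattern repetitions plus fine interpolation within one pitch) can also be illustrated numerically. The sketch below is only a simplified model under stated assumptions: the pitch value, the sinusoidal coupling of the 8 phase-shifted drive signals, and the atan2-based interpolation are all illustrative choices, not the circuit of any particular caliper.

```python
import math

STATOR_PITCH_MM = 5.08   # assumed repeat length of the stator "T" pattern
N_PHASES = 8             # the 8 phase-shifted drive signals from the text

def received_amplitudes(position_mm: float) -> list[float]:
    """Model the demodulated receiver amplitude for each drive phase,
    assuming sinusoidal variation over one stator pitch."""
    frac = (position_mm % STATOR_PITCH_MM) / STATOR_PITCH_MM
    return [math.cos(2 * math.pi * (frac - k / N_PHASES)) for k in range(N_PHASES)]

def interpolate_fraction(amps: list[float]) -> float:
    """Recover the fractional position inside one pitch from the 8 samples
    (a single-bin discrete Fourier transform followed by atan2)."""
    s = sum(a * math.sin(2 * math.pi * k / N_PHASES) for k, a in enumerate(amps))
    c = sum(a * math.cos(2 * math.pi * k / N_PHASES) for k, a in enumerate(amps))
    return (math.atan2(s, c) / (2 * math.pi)) % 1.0

# Coarse count of whole pitches plus fine interpolation gives the reading:
# 3 whole pitches + 0.25 pitch = 16.51 mm here.
amps = received_amplitudes(3.25 * STATOR_PITCH_MM)
print(3 * STATOR_PITCH_MM + interpolate_fraction(amps) * STATOR_PITCH_MM)
```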
Technology
Measuring instruments
null
15066
https://en.wikipedia.org/wiki/Insulator%20%28electricity%29
Insulator (electricity)
An electrical insulator is a material in which electric current does not flow freely. The atoms of the insulator have tightly bound electrons which cannot readily move. Other materials—semiconductors and conductors—conduct electric current more easily. The property that distinguishes an insulator is its resistivity; insulators have higher resistivity than semiconductors or conductors. The most common examples are non-metals. A perfect insulator does not exist because even insulators contain small numbers of mobile charges (charge carriers) which can carry current. In addition, all insulators become electrically conductive when a sufficiently large voltage is applied that the electric field tears electrons away from the atoms. This is known as electrical breakdown, and the voltage at which it occurs is called the breakdown voltage of an insulator. Some materials such as glass, paper and PTFE, which have high resistivity, are very good electrical insulators. A much larger class of materials, even though they may have lower bulk resistivity, are still good enough to prevent significant current from flowing at normally used voltages, and thus are employed as insulation for electrical wiring and cables. Examples include rubber-like polymers and most plastics which can be thermoset or thermoplastic in nature. Insulators are used in electrical equipment to support and separate electrical conductors without allowing current through themselves. An insulating material used in bulk to wrap electrical cables or other equipment is called insulation. The term insulator is also used more specifically to refer to insulating supports used to attach electric power distribution or transmission lines to utility poles and transmission towers. They support the weight of the suspended wires without allowing the current to flow through the tower to ground. Physics of conduction in solids Electrical insulation is the absence of electrical conduction. Electronic band theory (a branch of physics) explains that electric charge flows when quantum states of matter are available into which electrons can be excited. This allows electrons to gain energy and thereby move through a conductor, such as a metal, if an electric potential difference is applied to the material. If no such states are available, the material is an insulator. Most insulators have a large band gap. This occurs because the "valence" band containing the highest energy electrons is full, and a large energy gap separates this band from the next band above it. There is always some voltage (called the breakdown voltage) that gives electrons enough energy to be excited into this band. Once this voltage is exceeded, electrical breakdown occurs, and the material ceases being an insulator, passing charge. This is usually accompanied by physical or chemical changes that permanently degrade the material and its insulating properties. When the electric field applied across an insulating substance exceeds in any location the threshold breakdown field for that substance, the insulator suddenly becomes a conductor, causing a large increase in current, an electric arc through the substance. Electrical breakdown occurs when the electric field in the material is strong enough to accelerate free charge carriers (electrons and ions, which are always present at low concentrations) to a high enough velocity to knock electrons from atoms when they strike them, ionizing the atoms. 
These freed electrons and ions are in turn accelerated and strike other atoms, creating more charge carriers, in a chain reaction. Rapidly the insulator becomes filled with mobile charge carriers, and its resistance drops to a low level. In a solid, the breakdown voltage is proportional to the band gap energy. When corona discharge occurs, the air in a region around a high-voltage conductor can break down and ionise without a catastrophic increase in current. However, if the region of air breakdown extends to another conductor at a different voltage it creates a conductive path between them, and a large current flows through the air, creating an electric arc. Even a vacuum can suffer a sort of breakdown, but in this case the breakdown or vacuum arc involves charges ejected from the surface of metal electrodes rather than produced by the vacuum itself. In addition, all insulators become conductors at very high temperatures as the thermal energy of the valence electrons is sufficient to put them in the conduction band. In certain capacitors, shorts between electrodes formed due to dielectric breakdown can disappear when the applied electric field is reduced. Uses A flexible coating of an insulator is often applied to electric wire and cable; this assembly is called insulated wire. Wires sometimes don't use an insulating coating, just air, when a solid (e.g. plastic) coating may be impractical. Wires that touch each other produce cross connections, short circuits, and fire hazards. In coaxial cable the center conductor must be supported precisely in the middle of the hollow shield to prevent electro-magnetic wave reflections. Wires that expose high voltages can cause human shock and electrocution hazards. Most insulated wire and cable products have maximum ratings for voltage and conductor temperature. The product may not have an ampacity (current-carrying capacity) rating, since this is dependent on the surrounding environment (e.g. ambient temperature). In electronic systems, printed circuit boards are made from epoxy plastic and fibreglass. The nonconductive boards support layers of copper foil conductors. In electronic devices, the tiny and delicate active components are embedded within nonconductive epoxy or phenolic plastics, or within baked glass or ceramic coatings. In microelectronic components such as transistors and ICs, the silicon material is normally a conductor because of doping, but it can easily be selectively transformed into a good insulator by the application of heat and oxygen. Oxidised silicon is quartz, i.e. silicon dioxide, the primary component of glass. In high voltage systems containing transformers and capacitors, liquid insulator oil is the typical method used for preventing arcs. The oil replaces air in spaces that must support significant voltage without electrical breakdown. Other high voltage system insulation materials include ceramic or glass wire holders, gas, vacuum, and simply placing wires far enough apart to use air as insulation. Insulation in electrical apparatus The most important insulation material is air. A variety of solid, liquid, and gaseous insulators are also used in electrical apparatus. In smaller transformers, generators, and electric motors, insulation on the wire coils consists of up to four thin layers of polymer varnish film. Film-insulated magnet wire permits a manufacturer to obtain the maximum number of turns within the available space. 
Windings that use thicker conductors are often wrapped with supplemental fiberglass insulating tape. Windings may also be impregnated with insulating varnishes to prevent electrical corona and reduce magnetically induced wire vibration. Large power transformer windings are still mostly insulated with paper, wood, varnish, and mineral oil; although these materials have been used for more than 100 years, they still provide a good balance of economy and adequate performance. Busbars and circuit breakers in switchgear may be insulated with glass-reinforced plastic insulation, treated to have low flame spread and to prevent tracking of current across the material. In older apparatus made up to the early 1970s, boards made of compressed asbestos may be found; while this is an adequate insulator at power frequencies, handling or repairs to asbestos material can release dangerous fibers into the air and must be carried out cautiously. Wire insulated with felted asbestos was used in high-temperature and rugged applications from the 1920s. Wire of this type was sold by General Electric under the trade name "Deltabeston." Live-front switchboards up to the early part of the 20th century were made of slate or marble. Some high voltage equipment is designed to operate within a high pressure insulating gas such as sulfur hexafluoride. Insulation materials that perform well at power and low frequencies may be unsatisfactory at radio frequency, due to heating from excessive dielectric dissipation. Electrical wires may be insulated with polyethylene, crosslinked polyethylene (either through electron beam processing or chemical crosslinking), PVC, Kapton, rubber-like polymers, oil impregnated paper, Teflon, silicone, or modified ethylene tetrafluoroethylene (ETFE). Larger power cables may use compressed inorganic powder, depending on the application. Flexible insulating materials such as PVC (polyvinyl chloride) are used to insulate the circuit and prevent human contact with a 'live' wire – one having voltage of 600 volts or less. Alternative materials are likely to become increasingly used due to EU safety and environmental legislation making PVC less economic. In electrical apparatus such as motors, generators, and transformers, various insulation systems are used, classified by their maximum recommended working temperature to achieve acceptable operating life. Materials range from upgraded types of paper to inorganic compounds. Class I and Class II insulation All portable or hand-held electrical devices are insulated to protect their user from harmful shock. Class I insulation requires that the metal body and other exposed metal parts of the device be connected to earth via a grounding wire that is earthed at the main service panel—but only needs basic insulation on the conductors. This equipment needs an extra pin on the power plug for the grounding connection. Class II insulation means that the device is double insulated. This is used on some appliances such as electric shavers, hair dryers and portable power tools. Double insulation requires that the devices have both basic and supplementary insulation, each of which is sufficient to prevent electric shock. All internal electrically energized components are totally enclosed within an insulated body that prevents any contact with "live" parts. In the EU, double insulated appliances all are marked with a symbol of two squares, one inside the other. 
Telegraph and power transmission insulators Conductors for overhead high-voltage electric power transmission are bare, and are insulated by the surrounding air. Conductors for lower voltages in distribution may have some insulation but are often bare as well. Insulating supports are required at the points where they are supported by utility poles or transmission towers. Insulators are also required where wire enters buildings or electrical devices, such as transformers or circuit breakers, for insulation from the case. Often these are bushings, which are hollow insulators with the conductor inside them. Materials Insulators used for high-voltage power transmission are made from glass, porcelain or composite polymer materials. Porcelain insulators are made from clay, quartz or alumina, and feldspar, and are covered with a smooth glaze to shed water. Insulators made from porcelain rich in alumina are used where high mechanical strength is a criterion. Porcelain has a dielectric strength of about 4–10 kV/mm. Glass has a higher dielectric strength, but it attracts condensation and the thick irregular shapes needed for insulators are difficult to cast without internal strains. Some insulator manufacturers stopped making glass insulators in the late 1960s, switching to ceramic materials. Some electric utilities use polymer composite materials for some types of insulators. These are typically composed of a central rod made of fibre reinforced plastic and an outer weathershed made of silicone rubber or ethylene propylene diene monomer rubber (EPDM). Composite insulators are less costly, lighter in weight, and have excellent hydrophobic properties. This combination makes them ideal for service in polluted areas. However, these materials do not yet have the long-term proven service life of glass and porcelain. Design The electrical breakdown of an insulator due to excessive voltage can occur in one of two ways: A puncture arc is a breakdown and conduction of the material of the insulator, causing an electric arc through the interior of the insulator. The heat resulting from the arc usually damages the insulator irreparably. Puncture voltage is the voltage across the insulator (when installed in its normal manner) that causes a puncture arc. A flashover arc is a breakdown and conduction of the air around or along the surface of the insulator, causing an arc along the outside of the insulator. Insulators are usually designed to withstand flashover without damage. Flashover voltage is the voltage that causes a flash-over arc. Most high voltage insulators are designed with a lower flashover voltage than puncture voltage, so they flash over before they puncture, to avoid damage. Dirt, pollution, salt, and particularly water on the surface of a high voltage insulator can create a conductive path across it, causing leakage currents and flashovers. The flashover voltage can be reduced by more than 50% when the insulator is wet. High voltage insulators for outdoor use are shaped to maximise the length of the leakage path along the surface from one end to the other, called the creepage length, to minimise these leakage currents. To accomplish this the surface is moulded into a series of corrugations or concentric disc shapes. These usually include one or more sheds; downward facing cup-shaped surfaces that act as umbrellas to ensure that the part of the surface leakage path under the 'cup' stays dry in wet weather. 
Minimum creepage distances are 20–25 mm/kV, but must be increased in high pollution or airborne sea-salt areas. Types Insulators are characterized in several common classes:
Pin insulator - The pin-type insulator is mounted on a pin affixed on the cross-arm of the pole. The insulator has a groove near the top just below the crown. The conductor passes through this groove and is tied to the insulator with annealed wire of the same material as the conductor. Pin-type insulators are used for transmission and distribution of communication signals, and electric power at voltages up to 33 kV. Insulators made for operating voltages between 33 kV and 69 kV tend to be bulky and have become uneconomical.
Post insulator - A type of insulator introduced in the 1930s that is more compact than traditional pin-type insulators and has rapidly replaced many pin-type insulators on lines up to 69 kV; in some configurations it can be made for operation at up to 115 kV.
Suspension insulator - For voltages greater than 33 kV, it is usual practice to use suspension-type insulators, consisting of a number of glass or porcelain discs connected in series by metal links in the form of a string. The conductor is suspended at the bottom end of this string while the top end is secured to the cross-arm of the tower. The number of disc units used depends on the voltage.
Strain insulator - A dead end or anchor pole or tower is used where a straight section of line ends, or angles off in another direction. These poles must withstand the lateral (horizontal) tension of the long straight section of wire. To support this lateral load, strain insulators are used. For low voltage lines (less than 11 kV), shackle insulators are used as strain insulators. However, for high voltage transmission lines, strings of cap-and-pin (suspension) insulators are used, attached to the crossarm in a horizontal direction. When the tension load in lines is exceedingly high, such as at long river spans, two or more strings are used in parallel.
Shackle insulator - In early days, shackle insulators were used as strain insulators. Nowadays, they are frequently used for low voltage distribution lines. Such insulators can be used either in a horizontal position or in a vertical position. They can be directly fixed to the pole with a bolt or to the cross arm.
Bushing - Enables one or several conductors to pass through a partition such as a wall or a tank, and insulates the conductors from it.
Line post insulator
Station post insulator
Cut-out
Sheath insulator - An insulator that protects a full length of bottom-contact third rail.
Suspension insulators Pin-type insulators are unsuitable for voltages greater than about 69 kV line-to-line. Higher voltage transmission lines usually use modular suspension insulator designs. The wires are suspended from a 'string' of identical disc-shaped insulators that attach to each other with metal clevis pin or ball-and-socket links. The advantage of this design is that insulator strings with different breakdown voltages, for use with different line voltages, can be constructed by using different numbers of the basic units. String insulators can be made for any practical transmission voltage by adding insulator elements to the string. Also, if one of the insulator units in the string breaks, it can be replaced without discarding the entire string. Each unit is constructed of a ceramic or glass disc with a metal cap and pin cemented to opposite sides. 
To make defective units obvious, glass units are designed so that an overvoltage causes a puncture arc through the glass instead of a flashover. The glass is heat-treated so it shatters, making the damaged unit visible. However the mechanical strength of the unit is unchanged, so the insulator string stays together. Standard suspension disc insulator units are in diameter and long, can support a load of , have a dry flashover voltage of about 72 kV, and are rated at an operating voltage of 10–12 kV. However, the flashover voltage of a string is less than the sum of its component discs, because the electric field is not distributed evenly across the string but is strongest at the disc nearest to the conductor, which flashes over first. Metal grading rings are sometimes added around the disc at the high voltage end, to reduce the electric field across that disc and improve flashover voltage. In very high voltage lines the insulator may be surrounded by corona rings. These typically consist of toruses of aluminium (most commonly) or copper tubing attached to the line. They are designed to reduce the electric field at the point where the insulator is attached to the line, to prevent corona discharge, which results in power losses. History The first electrical systems to make use of insulators were telegraph lines; direct attachment of wires to wooden poles was found to give very poor results, especially during damp weather. The first glass insulators used in large quantities had an unthreaded pinhole. These pieces of glass were positioned on a tapered wooden pin, vertically extending upwards from the pole's crossarm (commonly only two insulators to a pole and maybe one on top of the pole itself). Natural contraction and expansion of the wires tied to these "threadless insulators" resulted in insulators unseating from their pins, requiring manual reseating. Amongst the first to produce ceramic insulators were companies in the United Kingdom, with Stiff and Doulton using stoneware from the mid-1840s, Joseph Bourne (later renamed Denby) producing them from around 1860 and Bullers from 1868. Utility patent number 48,906 was granted to Louis A. Cauvet on 25 July 1865 for a process to produce insulators with a threaded pinhole: pin-type insulators still have threaded pinholes. The invention of suspension-type insulators made high-voltage power transmission possible. As transmission line voltages reached and passed 60,000 volts, the insulators required become very large and heavy, with insulators made for a safety margin of 88,000 volts being about the practical limit for manufacturing and installation. Suspension insulators, on the other hand, can be connected into strings as long as required for the line's voltage. A large variety of telephone, telegraph and power insulators have been made; some people collect them, both for their historic interest and for the aesthetic quality of many insulator designs and finishes. One collectors organisation is the US National Insulator Association, which has over 9,000 members. Insulation of antennas Often a broadcasting radio antenna is built as a mast radiator, which means that the entire mast structure is energised with high voltage and must be insulated from the ground. Steatite mountings are used. They have to withstand not only the voltage of the mast radiator to ground, which can reach values up to 400 kV at some antennas, but also the weight of the mast construction and dynamic forces. 
Arcing horns and lightning arresters are necessary because lightning strikes to the mast are common. Guy wires supporting antenna masts usually have strain insulators inserted in the cable run, to keep the high voltages on the antenna from short circuiting to ground or creating a shock hazard. Often guy cables have several insulators, placed to break up the cable into lengths that prevent unwanted electrical resonances in the guy. These insulators are usually ceramic and cylindrical or egg-shaped (see picture). This construction has the advantage that the ceramic is under compression rather than tension, so it can withstand greater load, and that if the insulator breaks, the cable ends are still linked. These insulators also have to be equipped with overvoltage protection equipment. For the dimensions of the guy insulation, static charges on guys have to be considered. For high masts, these can be much higher than the voltage caused by the transmitter, requiring guys divided by insulators in multiple sections on the highest masts. In this case, guys which are grounded at the anchor basements via a coil - or if possible, directly - are the better choice. Feedlines attaching antennas to radio equipment, particularly twin-lead type, often must be kept at a distance from metal structures. The insulated supports used for this purpose are called standoff insulators.
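The creepage guideline quoted above (20–25 mm of surface leakage path per kV, more in polluted or coastal areas) translates directly into a simple sizing calculation. The following Python sketch is only an illustration of that rule of thumb; the function name and the example voltage are hypothetical, and real insulator selection follows detailed standards rather than this one figure.

```python
def minimum_creepage_mm(line_to_line_kv: float, mm_per_kv: float = 25.0) -> float:
    """Estimate the minimum creepage distance for an outdoor insulator
    from the specific-creepage rule of thumb in the text (20-25 mm/kV,
    increased in high-pollution or airborne sea-salt areas)."""
    return line_to_line_kv * mm_per_kv

# A 132 kV line with the conservative 25 mm/kV figure needs roughly
# 3.3 metres of surface leakage path, hence the corrugated shed shapes.
print(minimum_creepage_mm(132.0))   # 3300.0 mm
```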
Physical sciences
Electrical circuits
Physics
15069
https://en.wikipedia.org/wiki/Identity%20function
Identity function
In mathematics, an identity function, also called an identity relation, identity map or identity transformation, is a function that always returns the value that was used as its argument, unchanged. That is, when f is the identity function, the equality f(x) = x is true for all values of x to which f can be applied. Definition Formally, if X is a set, the identity function id_X on X is defined to be a function with X as its domain and codomain, satisfying id_X(x) = x for every element x in X. In other words, the function value id_X(x) in the codomain X is always the same as the input element x in the domain X. The identity function on X is clearly an injective function as well as a surjective function (its codomain is also its range), so it is bijective. The identity function on X is often denoted by id_X. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal of X. Algebraic properties If f : X → Y is any function, then f ∘ id_X = f = id_Y ∘ f, where "∘" denotes function composition. In particular, id_X is the identity element of the monoid of all functions from X to X (under function composition). Since the identity element of a monoid is unique, one can alternately define the identity function on X to be this identity element. Such a definition generalizes to the concept of an identity morphism in category theory, where the endomorphisms of X need not be functions. Properties The identity function is a linear operator when applied to vector spaces. In an n-dimensional vector space the identity function is represented by the identity matrix I_n, regardless of the basis chosen for the space. The identity function on the positive integers is a completely multiplicative function (essentially multiplication by 1), considered in number theory. In a metric space the identity function is trivially an isometry. An object without any symmetry has as its symmetry group the trivial group containing only this isometry (symmetry type C1). In a topological space, the identity function is always continuous. The identity function is idempotent.
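The algebraic identities above are easy to check numerically. The short Python sketch below verifies f ∘ id = f = id ∘ f and idempotence pointwise on a few sample values; the helper names are illustrative.

```python
def identity(x):
    """The identity function: returns its argument unchanged."""
    return x

def compose(f, g):
    """Function composition: (f o g)(x) = f(g(x))."""
    return lambda x: f(g(x))

square = lambda x: x * x

# f o id = f = id o f, checked pointwise on sample values.
for x in [-2, 0, 3, 7.5]:
    assert compose(square, identity)(x) == square(x) == compose(identity, square)(x)

# Idempotence: id o id = id.
for x in [-1, 4]:
    assert compose(identity, identity)(x) == identity(x)

print("identity laws hold on the sampled points")
```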
Mathematics
Specific functions
null
15097
https://en.wikipedia.org/wiki/Ionosphere
Ionosphere
The ionosphere is the ionized part of the upper atmosphere of Earth, from about to above sea level, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on Earth. Travel through this layer also impacts GPS signals, resulting in effects such as deflection in their path and delay in the arrival of the signal. History of discovery As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada) using a kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power roughly 100 times greater than any radio signal previously produced. The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work. Marconi did, nevertheless, achieve transatlantic wireless communication in Glace Bay, Nova Scotia, one year later. In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties. In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelength 200 meters or smaller). The government thought those frequencies were useless. This led to the discovery of HF radio propagation via the ionosphere in 1923. In 1925, observations during a solar eclipse in New York by Dr. Alfred N. Goldsmith and his team demonstrated the influence of sunlight on radio wave propagation, revealing that short waves became weak or inaudible while long waves steadied during the eclipse, thus contributing to the understanding of the ionosphere's role in radio transmission. In 1926, Scottish physicist Robert Watson-Watt introduced the term ionosphere, in a letter that was not published until 1969, in Nature. In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere; HAARP ran a series of experiments in 2017 using the eponymous Luxembourg Effect. Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the height and density of the ionosphere. This permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of very long radio waves in the ionosphere. Vitaly Ginzburg developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere. In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. 
Following its success were Alouette 2 in 1965 and the two ISIS satellites in 1969 and 1971, followed by AEROS-A and AEROS-B in 1972 and 1975, all for measuring the ionosphere. On July 26, 1963, the first operational geosynchronous satellite Syncom 2 was launched. Radio beacons on board this satellite (and its successors) enabled – for the first time – the measurement of total electron content (TEC) variation along a radio beam from geostationary orbit to an earth receiver. (The rotation of the plane of polarization directly measures TEC along the path.) Australian geophysicist Elizabeth Essex-Cohen from 1969 onwards was using this technique to monitor the atmosphere above Australia and Antarctica. Geophysics The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about to more than . It exists primarily due to ultraviolet radiation from the Sun. The lowest part of the Earth's atmosphere, the troposphere, extends from the surface to about . Above that is the stratosphere, followed by the mesosphere. In the stratosphere incoming solar radiation creates the ozone layer. At heights of above , in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is partially ionized and contains a plasma which is referred to as the ionosphere. Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are ionizing, since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron acquires a high velocity, so that the temperature of the resulting electron gas is much higher (on the order of a thousand kelvin) than that of the ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously, and causes the emission of a photon carrying away the energy produced upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present. Ionization depends primarily on the Sun and its Extreme Ultraviolet (EUV) and X-ray irradiance, which varies strongly with solar activity. The more magnetically active the Sun is, the more sunspot active regions there are on the Sun at any one time. Sunspot active regions are the source of increased coronal heating and accompanying increases in EUV and X-ray irradiance, particularly during episodic magnetic eruptions that include solar flares that increase ionization on the sunlit side of the Earth and solar energetic particle events that can increase ionization in the polar regions. Thus the degree of ionization in the ionosphere follows both a diurnal (time of day) cycle and the 11-year solar cycle. There is also a seasonal dependence in ionization degree, since the local winter hemisphere is tipped away from the Sun and thus receives less solar radiation. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization. 
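The balance between photoionization and recombination described above can be illustrated with a minimal numerical sketch. Assuming the common textbook simplification that recombination proceeds at a rate proportional to the square of the electron density (rate α·N²), the steady state of dN/dt = q − αN² is N = √(q/α). The production rate and recombination coefficient below are illustrative values, not measurements.

```python
import math

# Steady state of dN/dt = q - alpha * N**2: production q balances
# recombination alpha * N**2, giving N = sqrt(q / alpha).
q = 1.0e9        # assumed ion-electron pair production rate, m^-3 s^-1
alpha = 1.0e-13  # assumed effective recombination coefficient, m^3 s^-1

n_eq = math.sqrt(q / alpha)
print(f"equilibrium electron density ~ {n_eq:.2e} m^-3")  # ~1e11 m^-3
```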
Sydney Chapman proposed that the region below the ionosphere be called the neutrosphere (the neutral atmosphere). Layers of ionization At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionisation known as the F1 layer. The F2 layer persists by day and night and is the main region responsible for the refraction and reflection of radio waves. D layer The D layer is the innermost layer, above the surface of the Earth. Ionization here is due to Lyman series-alpha hydrogen radiation at a wavelength of 121.6 nanometre (nm) ionizing nitric oxide (NO). In addition, solar flares can generate hard X-rays (wavelength ) that ionize N2 and O2. Recombination rates are high in the D layer, so there are many more neutral air molecules than ions. Medium frequency (MF) and lower high frequency (HF) radio waves are significantly attenuated within the D layer, as the passing radio waves cause electrons to move, which then collide with the neutral molecules, giving up their energy. Lower frequencies experience greater absorption because they move the electrons farther, leading to greater chance of collisions. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively less absorption at higher frequencies. This effect peaks around noon and is reduced at night due to a decrease in the D layer's thickness; only a small part remains due to cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime. During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (or PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours. E layer The E layer is the middle layer, above the surface of the Earth. Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionization of molecular oxygen (O2). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz and may contribute a bit to absorption on frequencies above. However, during intense sporadic E events, the Es layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer weakens because the primary source of ionization is no longer present. After sunset an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer. This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). In 1924 its existence was detected by Edward V. Appleton and Miles Barnett. 
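The frequency dependence of D-layer absorption noted above follows, in the standard non-deviative absorption approximation, an inverse-square law in frequency. The sketch below illustrates that scaling only; the 1/f² law is a textbook simplification (valid well above the electron collision frequency), not a full absorption model.

```python
# Non-deviative D-layer absorption scales roughly as 1/f**2, so halving
# the frequency roughly quadruples the absorption. Normalised to 10 MHz:
ref_mhz = 10.0
for f_mhz in (3.0, 5.0, 10.0, 20.0):
    rel = (ref_mhz / f_mhz) ** 2
    print(f"{f_mhz:5.1f} MHz -> {rel:4.1f}x the 10 MHz absorption")
```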
Es layer The Es layer (sporadic E layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, frequently up to 50 MHz and rarely up to 450 MHz. Sporadic-E events may last for just a few minutes to many hours. Sporadic E propagation makes VHF operation by radio amateurs very exciting when long-distance propagation paths that are generally unreachable "open up" to two-way communication. There are multiple causes of sporadic-E that are still being pursued by researchers. This propagation occurs every day during June and July in northern hemisphere mid-latitudes when high signal levels are often reached. The skip distances are generally around . Distances for one hop propagation can be anywhere from . Multi-hop propagation over is also common, sometimes to distances of or more. F layer The F layer or region, also known as the Appleton–Barnett layer, extends from about to more than above the surface of Earth. It is the layer with the highest electron density, which implies signals penetrating this layer will escape into space. Electron production is dominated by extreme ultraviolet (UV, 10–100 nm) radiation ionizing atomic oxygen. The F layer consists of one layer (F2) at night, but during the day, a secondary peak (labelled F1) often forms in the electron density profile. Because the F2 layer remains by day and night, it is responsible for most skywave propagation of radio waves and long distance high frequency (HF, or shortwave) radio communications. Above the F layer, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant. This region above the F layer peak and below the plasmasphere is called the topside ionosphere. From 1972 to 1975 NASA launched the AEROS and AEROS B satellites to study the F region. Ionospheric model An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity. Geophysically, the state of the ionospheric plasma may be described by four parameters: electron density, electron and ion temperature and, since several species of ions are present, ionic composition. Radio propagation depends uniquely on electron density. Models are usually expressed as computer programs. The model may be based on basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, or it may be a statistical description based on a large number of observations or a combination of physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI), which is based on data and specifies the four parameters just mentioned. The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from the bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model is the "International Standard" for the terrestrial ionosphere (standard TS16457). 
Persistent anomalies to the idealized model Ionograms allow deducing, via computation, the true shape of the different layers. Nonhomogeneous structure of the electron/ion plasma produces rough echo traces, seen predominantly at night and at higher latitudes, and during disturbed conditions. Winter anomaly At mid-latitudes, the F2 layer daytime ion production is higher in the summer, as expected, since the Sun shines more directly on the Earth. However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity. Equatorial anomaly Within approximately ±20 degrees of the magnetic equator lies the equatorial anomaly: a trough in the ionization in the F2 layer at the equator, with crests at about 17 degrees in magnetic latitude. The Earth's magnetic field lines are horizontal at the magnetic equator. Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ±20 degrees from the magnetic equator. This phenomenon is known as the equatorial fountain. Equatorial electrojet The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (ionospheric dynamo region) ( altitude). Resulting from this current is an electrostatic field directed west–east (dawn–dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ±3 degrees of the magnetic equator, known as the equatorial electrojet. Ephemeral ionospheric perturbations X-rays: sudden ionospheric disturbances (SID) When the Sun is active, strong solar flares can occur that hit the sunlit side of Earth with hard X-rays. The X-rays penetrate to the D-region, releasing electrons that rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout that can persist for many hours after strong flares. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus dampen it. As soon as the X-rays end, the sudden ionospheric disturbance (SID) or radio blackout steadily declines as the electrons in the D-region recombine rapidly and propagation gradually returns to pre-flare conditions over minutes to hours, depending on the solar flare strength and frequency. Protons: polar cap absorption (PCA) Associated with solar flares is a release of high-energy protons. These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles, increasing the ionization of the D and E layers. PCAs typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours. 
Coronal mass ejections can also release energetic protons that enhance D-region absorption in the polar regions. Storms Geomagnetic storms and ionospheric storms are temporary and intense disturbances of the Earth's magnetosphere and ionosphere. During a geomagnetic storm the F₂ layer will become unstable, fragment, and may even disappear completely. In the Northern and Southern polar regions of the Earth aurorae will be observable in the night sky. Lightning Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (very low frequency) radio waves launched into the magnetosphere. These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events. Additional ionization can also occur from direct heating/ionization as a result of huge motions of charge in lightning strikes. These events are called early/fast events. In 1925, C. T. R. Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity. Their subsequent research has focused on the mechanism by which this process can occur. Applications Radio communication Due to the ability of ionized atmospheric gases to refract high frequency (HF, or shortwave) radio waves, the ionosphere can reflect radio waves directed into the sky back toward the Earth. Radio waves directed at an angle into the sky can return to Earth beyond the horizon. This technique, called "skip" or "skywave" propagation, has been used since the 1920s to communicate at international or intercontinental distances. The returning radio waves can reflect off the Earth's surface into the sky again, allowing greater ranges to be achieved with multiple hops. This communication method is variable and unreliable, with reception over a given path depending on time of day or night, the seasons, weather, and the 11-year sunspot cycle. During the first half of the 20th century it was widely used for transoceanic telephone and telegraph service, and business and diplomatic communication. Due to its relative unreliability, shortwave radio communication has been mostly abandoned by the telecommunications industry, though it remains important for high-latitude communication where satellite-based radio communication is not possible. Shortwave broadcasting is useful in crossing international boundaries and covering large areas at low cost. Automated services still use shortwave radio frequencies, as do radio amateur hobbyists for private recreational contacts and to assist with emergency communications during natural disasters. Armed forces use shortwave so as to be independent of vulnerable infrastructure, including satellites, and the low latency of shortwave communications makes it attractive to stock traders, where milliseconds count. 
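The latency remark above can be made concrete with a back-of-envelope comparison between a one-hop skywave path and the same ground distance through optical fiber. The sketch below uses a deliberately simplified flat-geometry path (two straight legs reflecting at an assumed F-layer height) and an assumed fiber refractive index; the numbers are illustrative, not engineering values.

```python
import math

C = 299_792_458.0          # speed of light in vacuum, m/s
FIBER_INDEX = 1.468        # assumed refractive index of optical fiber

def one_hop_delay_ms(ground_km: float, layer_height_km: float = 300.0) -> float:
    """Rough single-hop skywave delay: two straight legs reflecting at an
    assumed F-layer height (flat-earth simplification)."""
    half = ground_km / 2
    path_km = 2 * math.hypot(half, layer_height_km)
    return path_km * 1000 / C * 1000    # milliseconds

def fiber_delay_ms(ground_km: float) -> float:
    """Delay for the same ground distance through optical fiber, where
    light travels at roughly c / 1.468."""
    return ground_km * 1000 / (C / FIBER_INDEX) * 1000

d = 5000.0  # km, roughly a transatlantic path
print(f"skywave ~{one_hop_delay_ms(d):.1f} ms vs fiber ~{fiber_delay_ms(d):.1f} ms")
# ~16.8 ms by skywave vs ~24.5 ms in fiber: the radio path is shorter
# and propagates at nearly c, which is the edge the text alludes to.
```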
Mechanism of refraction When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave. Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy. Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough. A qualitative understanding of how an electromagnetic wave propagates through the ionosphere can be obtained by recalling geometric optics. Since the ionosphere is a plasma, it can be shown that the refractive index is less than unity. Hence, the electromagnetic "ray" is bent away from the normal rather than toward the normal, as it would be if the refractive index were greater than unity. It can also be shown that the refractive index of a plasma, and hence the ionosphere, is frequency-dependent; see Dispersion (optics). The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. If the transmitted frequency is higher than the plasma frequency of the ionosphere, then the electrons cannot respond fast enough, and they are not able to re-radiate the signal. It is calculated as shown below: fcritical = 9 × √N, where N = electron density per m3 and fcritical is in Hz. The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time. MUF = fcritical / sin(α), where α = angle of arrival, the angle of the wave relative to the horizon, and sin is the sine function. The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer. GPS/GNSS ionospheric correction There are a number of models used to understand the effects of the ionosphere on global navigation satellite systems. The Klobuchar model is currently used to compensate for ionospheric effects in GPS. This model was developed at the US Air Force Geophysical Research Laboratory circa 1974 by John (Jack) Klobuchar. The Galileo navigation system uses the NeQuick model. GALILEO broadcasts 3 coefficients to compute the effective ionization level, which is then used by the NeQuick model to compute a range delay along the line-of-sight. Other applications The open system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction. Measurements Overview Scientists explore the structure of the ionosphere by a wide variety of methods. They include:
passive observations of optical and radio emissions generated in the ionosphere
bouncing radio waves of different frequencies from it
incoherent scatter radars such as the EISCAT, Sondre Stromfjord, Millstone Hill, Arecibo, Advanced Modular Incoherent Scatter Radar (AMISR) and Jicamarca radars
coherent scatter radars such as the Super Dual Auroral Radar Network (SuperDARN) radars
special receivers to detect how the reflected waves have changed from the transmitted waves. 
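The critical frequency and MUF formulas given above translate directly into a few lines of Python. The electron density below is an assumed, representative F2-peak value chosen only to illustrate the arithmetic.

```python
import math

def critical_frequency_hz(electron_density_m3: float) -> float:
    """fcritical = 9 * sqrt(N), with N in electrons per cubic metre
    and the result in Hz (the formula quoted in the text)."""
    return 9.0 * math.sqrt(electron_density_m3)

def muf_hz(electron_density_m3: float, elevation_deg: float) -> float:
    """MUF = fcritical / sin(alpha), with alpha the angle of arrival
    measured from the horizon."""
    alpha = math.radians(elevation_deg)
    return critical_frequency_hz(electron_density_m3) / math.sin(alpha)

N = 1.0e12   # assumed F2 peak electron density, m^-3
print(f"fcritical ~ {critical_frequency_hz(N) / 1e6:.1f} MHz")        # ~9 MHz
print(f"MUF at 15 deg elevation ~ {muf_hz(N, 15.0) / 1e6:.1f} MHz")   # ~34.8 MHz
```

Note how shallow take-off angles (small α) raise the usable frequency well above the vertical-incidence critical frequency, which is why oblique skywave paths work at frequencies that would punch straight through the layer overhead.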
A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high power radio transmitters to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on being able to understand and use it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment, and is currently active near Gakona, Alaska. The SuperDARN radar project researches the high- and mid-latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 countries and multiple radars in both hemispheres. Scientists are also examining the ionosphere by the changes to radio waves, from satellites and stars, passing through it. The Arecibo Telescope, located in Puerto Rico, was originally intended to study Earth's ionosphere. Ionograms Ionograms show the virtual heights and critical frequencies of the ionospheric layers, which are measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in: "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available). Incoherent scatter radars Incoherent scatter radars operate above the critical frequencies. Therefore, unlike ionosondes, the technique also allows probing the ionosphere above the electron density peaks. The thermal fluctuations of the electron density scattering the transmitted signals lack coherence, which gave the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities. Incoherent scatter radars can also measure neutral atmosphere movements, such as atmospheric tides, after making assumptions about ion-neutral collision frequency across the ionospheric dynamo region. GNSS radio occultation Radio occultation is a remote sensing technique where a GNSS signal tangentially scrapes the Earth, passing through the atmosphere, and is received by a Low Earth Orbit (LEO) satellite. As the signal passes through the atmosphere, it is refracted, curved and delayed. An LEO satellite samples the total electron content and bending angle of many such signal paths as it watches the GNSS satellite rise or set behind the Earth. Using an inverse Abel transform, a radial profile of refractivity at that tangent point on Earth can be reconstructed. Major GNSS radio occultation missions include GRACE, CHAMP, and COSMIC. 
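The "virtual height" plotted on the ionograms described above follows from the round-trip echo delay under the assumption that the pulse travels at the vacuum speed of light, h' = c·t/2. A minimal sketch of that conversion (function name illustrative):

```python
C_KM_PER_S = 299_792.458   # speed of light, km/s

def virtual_height_km(echo_delay_ms: float) -> float:
    """Virtual height h' = c * t / 2. The ionosonde pulse is assumed to
    travel at the vacuum speed of light the whole way, so h' somewhat
    overestimates the true reflection height (group retardation in the
    plasma is ignored)."""
    return C_KM_PER_S * (echo_delay_ms / 1000.0) / 2.0

# A 2.0 ms round-trip echo corresponds to a virtual height of ~300 km,
# typical of an F-region reflection.
print(f"{virtual_height_km(2.0):.0f} km")
```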
Indices of the ionosphere In empirical models of the ionosphere such as NeQuick, the following indices are used as indirect indicators of the state of the ionosphere. Solar intensity F10.7 and R12 are two indices commonly used in ionospheric modelling. Both are valuable for their long historical records covering multiple solar cycles. F10.7 is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a ground-based radio telescope. R12 is a 12-month running average of daily sunspot numbers (see the sketch below). The two indices have been shown to be correlated with each other. However, both indices are only indirect indicators of solar ultraviolet and X-ray emissions, which are primarily responsible for causing ionization in the Earth's upper atmosphere. Data are now available from the GOES spacecraft, which measures the background X-ray flux from the Sun, a parameter more closely related to the ionization levels in the ionosphere. Geomagnetic disturbances The A- and K-indices are measurements of the behavior of the horizontal component of the geomagnetic field. The K-index uses a semi-logarithmic scale from 0 to 9 to measure the strength of the horizontal component of the geomagnetic field. The Boulder K-index is measured at the Boulder Geomagnetic Observatory. The geomagnetic activity levels of the Earth are measured by the fluctuation of the Earth's magnetic field in SI units called teslas (or in non-SI gauss, especially in older literature). The Earth's magnetic field is measured around the planet by many observatories. The data retrieved are processed and turned into measurement indices. Daily measurements for the entire planet are made available through an estimate of the Ap-index, called the planetary A-index (PAI). Ionospheres of other planets and natural satellites Objects in the Solar System that have appreciable atmospheres (i.e., all of the major planets and many of the larger natural satellites) generally produce ionospheres. Planets known to have ionospheres include Venus, Mars, Jupiter, Saturn, Uranus, and Neptune. The atmosphere of Titan includes an ionosphere that spans a broad range of altitudes and contains carbon compounds. Ionospheres have also been observed at Io, Europa, Ganymede, Triton, and Pluto.
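The smoothed sunspot number is conventionally computed over a 13-month window with half weight on the two end months; a minimal sketch of that convention follows (the monthly values are invented for illustration, and individual data centres may apply slightly different definitions):

```python
def smoothed_sunspot_number(monthly, n):
    """R12 at month index n: a 13-month window centred on n, with the two
    end months given half weight, divided by 12."""
    if n < 6 or n > len(monthly) - 7:
        raise ValueError("need six months of data on each side of n")
    window = monthly[n - 6 : n + 7]
    return (0.5 * window[0] + sum(window[1:12]) + 0.5 * window[12]) / 12.0

# Illustrative monthly mean sunspot numbers over two years
monthly = [45, 50, 60, 55, 70, 80, 90, 85, 95, 100, 110, 105,
           120, 115, 125, 130, 128, 135, 140, 138, 145, 150, 148, 155]
print(f"R12 centred on month 12: {smoothed_sunspot_number(monthly, 12):.1f}")
```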
Physical sciences
Geophysics
Earth science
15112
https://en.wikipedia.org/wiki/Wave%20interference
Wave interference
In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater amplitude (constructive interference) or lower amplitude (destructive interference) if the two waves are in phase or out of phase, respectively. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves, as well as in loudspeakers as electrical waves. Etymology The word interference is derived from the Latin words inter, which means "between", and fere, which means "hit or strike", and was used in the context of wave superposition by Thomas Young in 1801. Mechanisms The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference. In ideal media (water and air are almost ideal), energy is always conserved: at points of destructive interference the wave amplitudes cancel each other out, and the energy is redistributed to other areas. For example, when two pebbles are dropped in a pond, an interference pattern is observable; the waves nevertheless continue to propagate, and the energy is removed from the medium only when they reach the shore. Constructive interference occurs when the phase difference between the waves is an even multiple of π (180°), whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values. Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre. Interference of light is a unique phenomenon in that we can never observe superposition of the EM field directly as we can, for example, in water. Superposition in the EM field is an assumed phenomenon, necessary to explain how two light beams pass through each other and continue on their respective paths. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. In addition to the classical wave model for understanding optical interference, quantum matter waves also demonstrate interference. Real-valued wave functions The above can be demonstrated in one dimension by deriving the formula for the sum of two waves.
The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is $W_1(x,t) = A\cos(kx - \omega t)$ where $A$ is the peak amplitude, $k$ is the wavenumber and $\omega$ is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right, $W_2(x,t) = A\cos(kx - \omega t + \varphi)$ where $\varphi$ is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the two waves is $W_1 + W_2 = A[\cos(kx - \omega t) + \cos(kx - \omega t + \varphi)].$ Using the trigonometric identity for the sum of two cosines, $\cos a + \cos b = 2\cos\left(\frac{a-b}{2}\right)\cos\left(\frac{a+b}{2}\right),$ this can be written $W_1 + W_2 = 2A\cos\left(\frac{\varphi}{2}\right)\cos\left(kx - \omega t + \frac{\varphi}{2}\right).$ This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of $\varphi/2$. Constructive interference: If the phase difference is an even multiple of π, $\varphi = \ldots, -4\pi, -2\pi, 0, 2\pi, 4\pi, \ldots$, then $\cos(\varphi/2) = \pm 1$, so the sum of the two waves is a wave with twice the amplitude: $W_1 + W_2 = \pm 2A\cos\left(kx - \omega t + \frac{\varphi}{2}\right).$ Destructive interference: If the phase difference is an odd multiple of π, $\varphi = \ldots, -3\pi, -\pi, \pi, 3\pi, \ldots$, then $\cos(\varphi/2) = 0$, so the sum of the two waves is zero: $W_1 + W_2 = 0.$ Between two plane waves A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle. One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the x-axis. The phase difference at the point A is given by $\Delta\varphi = \frac{2\pi x \sin\theta}{\lambda}.$ It can be seen that the two waves are in phase when $x\sin\theta = 0, \pm\lambda, \pm 2\lambda, \ldots$, and are half a cycle out of phase when $x\sin\theta = \pm\frac{\lambda}{2}, \pm\frac{3\lambda}{2}, \ldots$ Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is $d_f = \frac{\lambda}{\sin\theta}$, which is known as the fringe spacing. The fringe spacing increases with increase in wavelength, and with decreasing angle θ. The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout. Between two spherical waves A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space. This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right. When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar. Multiple beams Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time. It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, interfere destructively, cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases. It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each wave can be represented as $Ae^{i(kx - \omega t + \varphi_n)}$ for waves from $n = 0$ to $n = N - 1$, where $\varphi_n - \varphi_{n-1} = \frac{2\pi}{N}.$ To show that $\sum_{n=0}^{N-1} Ae^{i(kx - \omega t + \varphi_n)} = 0$ one merely assumes the converse, then multiplies both sides by $e^{i\frac{2\pi}{N}}.$ The Fabry–Pérot interferometer uses interference between multiple reflections. A diffraction grating can be considered to be a multiple-beam interferometer, since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion.
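A short numerical check of the closed form derived above: the pointwise sum of the two waves should equal $2A\cos(\varphi/2)\cos(kx - \omega t + \varphi/2)$ everywhere, and a phase difference that is an odd multiple of π should cancel exactly (parameter values are arbitrary):

```python
import numpy as np

A, k, omega, phi = 1.0, 2.0, 3.0, np.pi / 3  # arbitrary illustrative values
x = np.linspace(0.0, 10.0, 1000)
t = 0.7

w1 = A * np.cos(k * x - omega * t)
w2 = A * np.cos(k * x - omega * t + phi)

# Closed form obtained from the sum-of-cosines identity
closed = 2 * A * np.cos(phi / 2) * np.cos(k * x - omega * t + phi / 2)
print(np.allclose(w1 + w2, closed))  # True: the identity holds at every x

# Destructive interference: a phase difference of pi cancels the waves
w3 = A * np.cos(k * x - omega * t + np.pi)
print(np.allclose(w1 + w3, 0.0))     # True: the sum vanishes everywhere
```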
Complex valued wave functions Mechanical and gravity waves can be directly observed: they are real-valued wave functions; optical and matter waves cannot be directly observed: they are complex-valued wave functions. Some of the differences between real-valued and complex-valued wave interference include: The interference involves different types of mathematical functions: a classical wave is a real function representing the displacement from an equilibrium position; an optical or quantum wavefunction is a complex function. A classical wave at any point can be positive or negative; the quantum probability function is non-negative. Any two different real waves in the same medium interfere; complex waves must be coherent to interfere—in practice this means the waves must come from the same source and have similar frequencies. Real wave interference is obtained simply by adding the displacements from equilibrium (or amplitudes) of the two waves; in complex wave interference, we measure the squared modulus of the wavefunction. Optical wave interference Because the frequency of light waves (~$10^{14}$ Hz) is too high for currently available detectors to detect the variation of the electric field of the light, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave. This can be expressed mathematically as follows. The displacement of the two waves at a point $\mathbf{r}$ is $U_1(\mathbf{r},t) = A_1(\mathbf{r})\cos[\varphi_1(\mathbf{r}) - \omega t]$ and $U_2(\mathbf{r},t) = A_2(\mathbf{r})\cos[\varphi_2(\mathbf{r}) - \omega t],$ where $A$ represents the magnitude of the displacement, $\varphi$ represents the phase and $\omega$ represents the angular frequency. The displacement of the summed waves is $U(\mathbf{r},t) = U_1(\mathbf{r},t) + U_2(\mathbf{r},t).$ The intensity of the light at $\mathbf{r}$ is given by the time average $I(\mathbf{r}) \propto \langle U^2(\mathbf{r},t)\rangle.$ This can be expressed in terms of the intensities of the individual waves as $I(\mathbf{r}) = I_1 + I_2 + 2\sqrt{I_1 I_2}\cos[\varphi_1(\mathbf{r}) - \varphi_2(\mathbf{r})].$ Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity.
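A minimal sketch of the two-beam intensity relation just quoted, confirming that equal-intensity coherent beams produce maxima of four times the single-beam intensity and completely dark minima:

```python
import numpy as np

def two_beam_intensity(i1, i2, delta_phi):
    """I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta_phi) for two coherent beams."""
    return i1 + i2 + 2.0 * np.sqrt(i1 * i2) * np.cos(delta_phi)

delta_phi = np.linspace(0.0, 4.0 * np.pi, 1000)
intensity = two_beam_intensity(1.0, 1.0, delta_phi)  # equal unit-intensity beams

print(f"maximum: {intensity.max():.3f}")  # ~4.0, four times one beam
print(f"minimum: {intensity.min():.3f}")  # ~0.0, completely dark fringes
```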
Classically the two waves must have the same polarization to give rise to interference fringes, since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state. Quantum mechanically the theories of Paul Dirac and Richard Feynman offer a more modern approach. Dirac showed that every quantum, or photon, of light acts on its own, which he famously stated as "every photon interferes with itself". Richard Feynman showed that, by evaluating a path integral in which all possible paths are considered, a number of higher-probability paths will emerge. In thin films, for example, a film whose thickness is not a multiple of the light's wavelength will not allow the quanta to traverse; only reflection is possible. Light source requirements The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequencies and are of finite duration (but shorter than their coherence time) will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap. Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps, have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes. All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications. A laser beam generally approximates much more closely to a monochromatic source, and thus it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems, in that stray reflections may give spurious interference fringes which can result in errors. Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements. This has also been observed for widefield interference between two incoherent laser sources. It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength increases, and the summed intensity will show three to four fringes of varying colour. Young describes this very elegantly in his discussion of two-slit interference. Since white light fringes are obtained only when the two waves have travelled equal distances from the light source, they can be very useful in interferometry, as they allow the zero-path-difference fringe to be identified. Optical arrangements To generate interference fringes, light from the source has to be divided into two waves which then have to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems. In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach–Zehnder interferometer are examples of amplitude-division systems. In wavefront-division systems, the wave is divided in space—examples are Young's double slit interferometer and Lloyd's mirror. Interference can also be seen in everyday phenomena such as iridescence and structural coloration. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film.
Depending on the thickness of the film, different colours interfere constructively and destructively. Quantum interference Quantum interference – the observed wave-behavior of matter – resembles optical interference. Let $\Psi(x,t)$ be a wavefunction solution of the Schrödinger equation for a quantum mechanical object. Then the probability of observing the object at position $x$ is $P(x) = |\Psi(x,t)|^2 = \Psi^*(x,t)\,\Psi(x,t),$ where * indicates complex conjugation. Quantum interference concerns the issue of this probability when the wavefunction is expressed as a sum or linear superposition of two terms: $\Psi(x,t) = \Psi_A(x,t) + \Psi_B(x,t).$ Usually, $\Psi_A$ and $\Psi_B$ correspond to distinct situations A and B. When this is the case, the equation indicates that the object can be in situation A or situation B. The above equation can then be interpreted as: the probability of finding the object at $x$ is the probability of finding the object at $x$ when it is in situation A plus the probability of finding the object at $x$ when it is in situation B plus an extra term. This extra term, which is called the quantum interference term, is $\Psi_A^*\Psi_B + \Psi_A\Psi_B^*$ in the above equation. As in the classical wave case above, the quantum interference term can add (constructive interference) or subtract (destructive interference) from $|\Psi_A|^2 + |\Psi_B|^2$ depending on whether the quantum interference term is positive or negative. If this term is absent for all $x$, then there is no quantum mechanical interference associated with situations A and B. The best known example of quantum interference is the double-slit experiment. In this experiment, matter waves from electrons, atoms or molecules approach a barrier with two slits in it. One slit becomes $\Psi_A$ and the other becomes $\Psi_B$. The interference pattern occurs on the far side, observed by detectors suitable to the particles originating the matter wave. The pattern matches the optical double slit pattern. Applications Beat In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies. With tuning instruments that can produce sustained tones, beats can be readily recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating. The volume varies, as in a tremolo, as the sounds alternately interfere constructively and destructively. As the two tones gradually approach unison, the beating slows down and may become so slow as to be imperceptible. As the two tones get further apart, their beat frequency starts to approach the range of human pitch perception, the beating starts to sound like a note, and a combination tone is produced. This combination tone can also be referred to as a missing fundamental, as the beat frequency of any two tones is equivalent to their implied fundamental frequency.
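A small numerical illustration of the beat phenomenon described above: summing two tones a few hertz apart yields a carrier near the mean frequency whose loudness envelope pulses at the difference frequency (the tone frequencies are arbitrary illustrative choices):

```python
import numpy as np

f1, f2 = 440.0, 444.0  # two tones 4 Hz apart (illustrative)
sample_rate = 44100
t = np.arange(0.0, 1.0, 1.0 / sample_rate)

# sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2*cos(pi*(f1-f2)*t)*sin(pi*(f1+f2)*t):
# a 442 Hz tone whose envelope pulses at the 4 Hz difference frequency.
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
envelope = np.abs(2 * np.cos(np.pi * (f1 - f2) * t))

beat_frequency = abs(f1 - f2)
print(f"beat frequency: {beat_frequency:.1f} Hz "
      f"(one loudness swell every {1.0 / beat_frequency:.2f} s)")
print(f"peak amplitude: {signal.max():.2f}, envelope maximum: {envelope.max():.2f}")
```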
Interferometry Interferometry has played an important role in the advancement of physics, and also has a wide range of applications in physical and engineering measurement. The impact on physics and the applications span various types of waves. Optical interferometry Thomas Young's double slit interferometer in 1803 demonstrated interference fringes when two small holes were illuminated by light from another small hole which was illuminated by sunlight. Young was able to estimate the wavelength of different colours in the spectrum from the spacing of the fringes. The experiment played a major role in the general acceptance of the wave theory of light. In quantum mechanics, this experiment is considered to demonstrate the inseparability of the wave and particle natures of light and other quantum particles (wave–particle duality). Richard Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment. The results of the Michelson–Morley experiment are generally considered to be the first strong evidence against the theory of a luminiferous aether and in favor of special relativity. Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement. Interferometry is used in the calibration of slip gauges (called gauge blocks in the US) and in coordinate-measuring machines. It is also used in the testing of optical components. Radio interferometry In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist either of arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected together using coaxial cable, waveguide, optical fiber, or another type of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas farthest apart in the array. Acoustic interferometry An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid, such as velocity, wavelength, absorption, or impedance. A vibrating crystal creates ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal, are reflected back to the source, and are measured.
Physical sciences
Waves
null
15145
https://en.wikipedia.org/wiki/ISO%209660
ISO 9660
ISO 9660 (also known as ECMA-119) is a file system for optical disc media. The file system is an international standard available from the International Organization for Standardization (ISO). Since the specification is available for anybody to purchase, implementations have been written for many operating systems. ISO 9660 traces its roots to the High Sierra Format, which arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical (eight levels of directories deep) tree file system arrangement, similar to UNIX and FAT. To facilitate cross-platform compatibility, it defined a minimal set of common file attributes (directory or ordinary file and time of recording) and name attributes (name, extension, and version), and used a separate system use area where future optional extensions for each file may be specified. High Sierra was adopted in December 1986 (with changes) as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. Subsequent amendments to the standard were published in 2013 and 2020. The first 16 sectors of the file system are empty and reserved for other uses. The rest begins with a volume descriptor set (a header block which describes the subsequent layout) and then the path tables, directories and files on the disc. An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator, which is a volume descriptor that marks the end of the descriptor set. The primary volume descriptor provides information about the volume, its characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain metadata such as the volume's name and creator, along with the size and number of logical blocks used by the file system. Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (file characteristics specific to the classic Mac OS and macOS, such as resource forks, file backup date and more). History Compact discs were originally developed for recording musical data, but soon were used for storing additional digital data types because they were equally effective for archival mass data storage. Called CD-ROMs, the lowest-level format for this type of compact disc was defined in the Yellow Book specification in 1983. However, this book did not define any format for organizing data on CD-ROMs into logical units such as files, which led to every CD-ROM maker creating its own format. In order to develop a CD-ROM file system standard (Z39.60 – Volume and File Structure of CDROM for Information Interchange), the National Information Standards Organization (NISO) set up Standards Committee SC EE (Compact Disc Data Format) in July 1985.
In September/October 1985, several companies invited experts to participate in the development of a working paper for such a standard. In November 1985, representatives of computer hardware manufacturers gathered at the High Sierra Hotel and Casino (currently called the Golden Nugget Lake Tahoe) in Stateline, Nevada. This group became known as the High Sierra Group (HSG). Present at the meeting were representatives from Apple Computer, AT&T, Digital Equipment Corporation (DEC), Hitachi, LaserData, Microware, Microsoft, 3M, Philips, Reference Technology Inc., Sony Corporation, TMS Inc., VideoTools (later Meridian), Xebec, and Yelick. The meeting report evolved from the Yellow Book CD-ROM standard, which was so open-ended that it was leading to diversification and the creation of many incompatible data storage methods. The High Sierra Group Proposal (HSGP) was released in May 1986, defining a file system for CD-ROMs commonly known as the High Sierra Format. A draft version of this proposal was submitted to the European Computer Manufacturers Association (ECMA) for standardization. With some changes, this led to the issue of the initial edition of the ECMA-119 standard in December 1986. ECMA submitted the standard to the International Organization for Standardization (ISO) for fast tracking, where it was further refined into the ISO 9660 standard. For compatibility, the second edition of ECMA-119 was revised to be equivalent to ISO 9660 in December 1987. ISO 9660:1988 was published in 1988. The main changes from the High Sierra Format in the ECMA-119 and ISO 9660 standards were international extensions to allow the format to work better on non-US markets. In order not to create incompatibilities, NISO suspended further work on Z39.60, which had been adopted by NISO members on 28 May 1987. It was withdrawn before final approval, in favour of ISO 9660. JIS X 0606:1998 was passed in Japan in 1998, with much-relaxed file name rules using a new "enhanced volume descriptor" data structure. The standard was submitted to become ISO 9660:1999 and was supposedly fast-tracked, but nothing came of it. Nevertheless, several operating systems and disc authoring tools (such as Nero Burning ROM, mkisofs and ImgBurn) now support the addition, under such names as "ISO 9660:1999", "ISO 9660 v2", or "ISO 9660 Level 4". In 2013, the proposal was finally formalized in the form of ISO 9660/Amendment 1, intended to "bring harmonization between ISO 9660 and widely used 'Joliet Specification'." In December 2017, a 3rd edition of ECMA-119 was published that is technically identical with ISO 9660, Amendment 1. In 2019, ECMA published a 4th edition of ECMA-119, integrating the Joliet text as "Annex C". In 2020, ISO published Amendment 2, which adds some minor clarifying matter but does not add or correct any technical information of the standard. Specifications The following is the rough overall structure of the ISO 9660 file system. Multi-byte values can be stored in three different formats: little-endian, big-endian, and in a concatenation of both types in what the specification calls "both-byte" order. Both-byte order is required in several fields in the volume descriptors and directory records, while path tables can be either little-endian or big-endian. Top level The system area, the first 32,768 data bytes of the disc (16 sectors of 2,048 bytes each), is unused by ISO 9660 and therefore available for other uses.
While it is suggested that the system area is reserved for use by bootable media, a CD-ROM may contain an alternative file system descriptor in this area, and it is often used by hybrid CDs to offer classic Mac OS-specific and macOS-specific content. Volume descriptor set The data area begins with the volume descriptor set, a set of one or more volume descriptors terminated with a volume descriptor set terminator. These collectively act as a header for the data area, describing its content (similar to the BIOS parameter block used by FAT, HPFS and NTFS formatted disks). Each volume descriptor is 2048 bytes in size, fitting perfectly into a single Mode 1 or Mode 2 Form 1 sector. Each descriptor begins with a 7-byte header (a type byte, the standard identifier "CD001", and a version byte), followed by 2,041 bytes of descriptor data. The data field of a volume descriptor may be subdivided into several fields, with the exact content depending on the type. Redundant copies of each volume descriptor can also be included in case the first copy of the descriptor becomes corrupt. The standard volume descriptor types are the boot record (type 0), the primary volume descriptor (type 1), the supplementary or enhanced volume descriptor (type 2), the volume partition descriptor (type 3), and the volume descriptor set terminator (type 255). An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator for indicating the end of the descriptor sequence. The volume descriptor set terminator is simply a particular type of volume descriptor with the purpose of marking the end of this set of structures. The primary volume descriptor provides information about the volume, its characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain the description or name of the volume, and information about who created it and with which application. The size of the logical blocks which the file system uses to segment the volume is also stored in a field inside the primary volume descriptor, as well as the amount of space occupied by the volume (measured in number of logical blocks). In addition to the primary volume descriptor(s), supplementary volume descriptors or enhanced volume descriptors may be present. Supplementary volume descriptors describe the same volume as the primary volume descriptor does, and are normally used for providing additional code page support when the standard code tables are insufficient. The standard specifies that ISO 2022 is used for managing code sets that are wider than 8 bits, and that ISO 2375 escape sequences are used to identify each particular code page used. Consequently, ISO 9660 supports international single-byte and multi-byte character sets, provided they fit into the framework of the referenced standards. However, ISO 9660 does not specify any code pages that are guaranteed to be supported: all use of code tables other than those defined in the standard itself is subject to agreement between the originator and the recipient of the volume. Enhanced volume descriptors were introduced in ISO 9660, Amendment 1. They relax some of the requirements of the other volume descriptors and the directory records referenced by them: for example, the directory depth can exceed eight, file identifiers need not contain a '.' or a file version number, and the length of a file or directory identifier is extended to a maximum of 207 bytes.
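As an illustration of the descriptor layout above, the following Python sketch scans the volume descriptor set of an image file for the primary volume descriptor and decodes a few of its fields. The byte offsets follow the published standard, but this is a simplified reading aid rather than a complete implementation, and "example.iso" is a placeholder path:

```python
import struct

def read_primary_volume_descriptor(path):
    """Scan the volume descriptor set of an ISO 9660 image for the
    primary volume descriptor (type 1) and return a few of its fields."""
    with open(path, "rb") as f:
        f.seek(16 * 2048)              # skip the 32,768-byte system area
        while True:
            sector = f.read(2048)      # each descriptor fills one sector
            if len(sector) < 2048 or sector[1:6] != b"CD001":
                raise ValueError("not an ISO 9660 volume descriptor set")
            vd_type = sector[0]
            if vd_type == 255:         # volume descriptor set terminator
                raise ValueError("no primary volume descriptor found")
            if vd_type == 1:           # primary volume descriptor
                volume_id = sector[40:72].decode("ascii").rstrip()
                # "Both-byte" fields hold little- and big-endian copies;
                # only the little-endian half is read here.
                space_size = struct.unpack_from("<I", sector, 80)[0]
                block_size = struct.unpack_from("<H", sector, 128)[0]
                return volume_id, space_size, block_size

vol, blocks, block_size = read_primary_volume_descriptor("example.iso")
print(f"volume '{vol}': {blocks} logical blocks of {block_size} bytes")
```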
Path tables Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. The parent directory number is a 16-bit number, limiting its range from 1 to 65,535. Directories and files Directory entries are stored following the location of the root directory entry, where evaluation of filenames is begun. Both directories and files are stored as extents, which are sequential series of sectors. Files and directories are differentiated only by a file attribute that indicates their nature (similar to Unix). The attributes of a file are stored in the directory entry that describes the file, and optionally in the extended attribute record. To locate a file, the directory names in the file's path can be checked sequentially, going to the location of each directory to obtain the location of the subsequent subdirectory. However, a file can also be located through the path table provided by the file system. This path table stores information about each directory, its parent, and its location on disc. Since the path table is stored in a contiguous region, it can be searched much faster than jumping to the particular locations of each directory in the file's path, thus reducing seek time. The standard specifies three nested levels of interchange (paraphrased from section 10): Level 1: File names are limited to eight characters with a three-character extension. Directory names are limited to eight characters. Files may contain one single file section. Level 2: File Name + '.' + File Name Extension or Directory Name may not exceed 31 characters in length (sections 7.5 and 7.6). Files may contain one single file section. Level 3: No additional restrictions beyond those stipulated in the main body of the standard. Files are also allowed to consist of multiple non-contiguous sections (with some restrictions as to order). Additional restrictions in the body of the standard: the depth of the directory hierarchy must not exceed 8 (the root directory being at level 1), and the path length of any file must not exceed 255 (section 6.8.2.1). The standard also specifies the following name restrictions (sections 7.5 and 7.6): all levels restrict file names in the mandatory file hierarchy to upper-case letters, digits, underscores ("_"), and a dot.
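As a sketch of the Level 1 naming rules just listed, a small validator follows; it assumes the version-number suffix (such as ";1") has already been stripped, and it glosses over some corner cases of the standard's exact character-set rules:

```python
import re

# Level 1 interchange: a name of at most eight characters, optionally
# followed by an extension of at most three, drawn from A-Z, 0-9 and "_".
LEVEL1_NAME = re.compile(r"^[A-Z0-9_]{1,8}(\.[A-Z0-9_]{0,3})?$")

def is_level1_filename(name: str) -> bool:
    """Check a file or directory identifier against the ISO 9660
    Level 1 naming rules (version suffix assumed already removed)."""
    return LEVEL1_NAME.fullmatch(name) is not None

for candidate in ["README.TXT", "AUTOEXEC.BAT", "longname.txt", "DATA_01"]:
    print(f"{candidate!r}: {is_level1_filename(candidate)}")
```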
Technology
Data storage and memory
null
15150
https://en.wikipedia.org/wiki/Integrated%20circuit
Integrated circuit
An integrated circuit (IC), also known as a microchip or simply chip, is a set of electronic circuits, consisting of various electronic components (such as transistors, resistors, and capacitors) and their interconnections. These components are etched onto a small, flat piece ("chip") of semiconductor material, usually silicon. Integrated circuits are used in a wide range of electronic devices, including computers, smartphones, and televisions, to perform various functions such as processing and storing information. They have greatly impacted the field of electronics by enabling device miniaturization and enhanced functionality. Integrated circuits are orders of magnitude smaller, faster, and less expensive than those constructed of discrete components, allowing a large transistor count. The IC's mass production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other home appliances are now essential parts of the structure of modern societies, made possible by the small size and low cost of ICs such as modern computer processors and microcontrollers. Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, make the computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s. ICs have three main advantages over circuits constructed out of discrete components: size, cost and performance. The size and cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated. Terminology An integrated circuit is defined as: A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce. In strict usage, integrated circuit refers to the single-piece circuit construction originally known as a monolithic integrated circuit, which comprises a single piece of silicon. In general usage, circuits not meeting this strict definition are sometimes referred to as ICs, which are constructed using many different technologies, e.g. 3D IC, 2.5D IC, MCM, thin-film transistors, thick-film technologies, or hybrid integrated circuits. The choice of terminology frequently appears in discussions related to whether Moore's Law is obsolete. 
History An early attempt at combining several components in one device (like modern ICs) was the Loewe 3NF vacuum tube, first made in 1926. Unlike ICs, it was designed with the purpose of tax avoidance: in Germany, radio receivers were taxed at a rate that depended on how many tube holders a receiver had. The 3NF allowed radio receivers to have a single tube holder. One million were manufactured, and they were "a first step in integration of radioelectronic devices". The device contained an amplifier, composed of three triodes, two capacitors and four resistors in a six-pin device. Radios with the Loewe 3NF were less expensive than other radios, showing an advantage of integration over discrete components that would be seen again decades later with ICs. Early concepts of an integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An immediate commercial use of his patent has not been reported. Another early proponent of the concept was Geoffrey Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. He spoke at many public symposia to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other. The monolithic integrated circuit chip was enabled by the inventions of the planar process by Jean Hoerni and p–n junction isolation by Kurt Lehovec. Hoerni's invention built on Carl Frosch and Lincoln Derick's work on surface protection and passivation by silicon dioxide masking and predeposition, as well as the work of Fuller, Ditzenberger and others on the diffusion of impurities into silicon. The first integrated circuits A precursor idea to the IC was to create small ceramic substrates (so-called micromodules), each containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC. Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working example of an integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.
However, Kilby's invention was not a true monolithic integrated circuit chip since it had external gold-wire connections, which would have made it difficult to mass-produce. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor invented the first true monolithic IC chip. More practical than Kilby's implementation, Noyce's chip was made of silicon, whereas Kilby's was made of germanium, and Noyce's was fabricated using the planar process, developed in early 1959 by his colleague Jean Hoerni, and included the critical on-chip aluminum interconnecting lines. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's. NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965. TTL integrated circuits Transistor–transistor logic (TTL) was developed by James L. Buie in the early 1960s at TRW Inc. TTL became the dominant integrated circuit technology during the 1970s to early 1980s. Dozens of TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers. Computers such as IBM 360 mainframes, PDP-11 minicomputers and the desktop Datapoint 2200 were built from bipolar integrated circuits, either TTL or the even faster emitter-coupled logic (ECL). MOS integrated circuits Nearly all modern IC chips are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–silicon field-effect transistors). The MOSFET, invented at Bell Labs between 1955 and 1960, made it possible to build high-density integrated circuits. In contrast to bipolar transistors, which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps and could be easily isolated from each other. Its advantage for integrated circuits was pointed out by Dawon Kahng in 1961. The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar process and Noyce's planar IC in 1959. The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. Following the development of the self-aligned gate (silicon-gate) MOSFET by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC technology with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at Fairchild Semiconductor by Federico Faggin in 1968. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. This led to the inventions of the microprocessor and the microcontroller by the early 1970s. During the early 1970s, MOS integrated circuit technology enabled the very large-scale integration (VLSI) of more than 10,000 transistors on a single chip. At first, MOS-based computers only made sense when high density was required, such as in aerospace applications and pocket calculators.
Computers built entirely from TTL, such as the 1970 Datapoint 2200, were much faster and more powerful than single-chip MOS microprocessors such as the 1972 Intel 8008 until the early 1980s. Advances in IC technology, primarily smaller features and larger chips, have allowed the number of MOS transistors in an integrated circuit to double every two years, a trend known as Moore's law. Moore originally stated it would double every year, but he went on to change the claim to every two years in 1975. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor go down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling (MOSFET scaling). Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. Over the years, transistor sizes have decreased from tens of microns in the early 1970s to 10 nanometers in 2017, with a corresponding million-fold increase in transistors per unit area (see the sketch below). As of 2016, typical chip areas range from a few square millimeters to around 600 mm2, with up to 25 million transistors per mm2. The expected shrinking of feature sizes and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems. Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors. Charge-coupled devices, and the closely related active-pixel sensors, are chips that are sensitive to light. They have largely replaced photographic film in scientific, medical, and consumer applications. Billions of these devices are now produced each year for applications such as cellphones, tablets, and digital cameras. This sub-field of ICs won the Nobel Prize in Physics in 2009. Very small mechanical devices driven by electricity can be integrated onto chips, a technology known as microelectromechanical systems (MEMS). These devices were developed in the late 1980s and are used in a variety of commercial and military applications. Examples include DLP projectors, inkjet printers, and accelerometers and MEMS gyroscopes used to deploy automobile airbags. Since the early 2000s, the integration of optical functionality (optical computing) into silicon chips has been actively pursued in both academic research and industry, resulting in the successful commercialization of silicon-based integrated optical transceivers combining optical devices (modulators, detectors, routing) with CMOS-based electronics. Photonic integrated circuits that use light, such as Lightelligence's PACE (Photonic Arithmetic Computing Engine), are also being developed, using the emerging field of physics known as photonics. Integrated circuits are also being developed for sensor applications in medical implants or other bioelectronic devices. Special sealing techniques have to be applied in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials.
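A back-of-the-envelope check of the doubling arithmetic behind Moore's law as stated above; the starting count and the two-year doubling period are the commonly quoted round numbers, not precise data:

```python
def moores_law_projection(start_count, start_year, year, doubling_period=2.0):
    """Project a transistor count forward assuming a fixed doubling period."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# An early-1970s chip held on the order of 10,000 transistors (early VLSI)
projected = moores_law_projection(10_000, 1971, 2017)
print(f"projected 2017 transistor count: {projected:.2e}")  # ~8e10, tens of billions
```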
The vast majority of all transistors are MOSFETs fabricated in a single layer on one side of a chip of silicon in a flat, two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as:
- various approaches to stacking several layers of transistors to make a three-dimensional integrated circuit (3DIC), such as through-silicon via, "monolithic 3D", stacked wire bonding, and other methodologies
- transistors built from other materials: graphene transistors, molybdenite transistors, carbon nanotube field-effect transistors, gallium nitride transistors, transistor-like nanowire electronic devices, organic field-effect transistors, etc.
- fabricating transistors over the entire surface of a small sphere of silicon
- modifications to the substrate, typically to make "flexible transistors" for a flexible display or other flexible electronics, possibly leading to a roll-away computer.
As it becomes more difficult to manufacture ever smaller transistors, companies are using multi-chip modules/chiplets, three-dimensional integrated circuits, package on package, High Bandwidth Memory and through-silicon vias with die stacking to increase performance and reduce size, without having to reduce the size of the transistors. Such techniques are collectively known as advanced packaging. Advanced packaging is mainly divided into 2.5D and 3D packaging. 2.5D describes approaches such as multi-chip modules, while 3D describes approaches where dies are stacked in one way or another, such as package on package and high bandwidth memory. All approaches involve two or more dies in a single package. Alternatively, approaches such as 3D NAND stack multiple layers on a single die. Techniques have been demonstrated that include microfluidic cooling on integrated circuits to improve cooling performance, as well as Peltier thermoelectric coolers on solder bumps, or thermal solder bumps used exclusively for heat dissipation, in flip-chip packages. Design The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars. Therefore, it only makes economic sense to produce integrated circuit products with high production volume, so the non-recurring engineering (NRE) costs are spread across typically millions of production units. Modern semiconductor chips have billions of components, and are far too complex to be designed by hand. Software tools to help the designer are essential. Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design, verify, and analyze entire semiconductor chips. Some of the latest EDA tools use artificial intelligence (AI) to help engineers save time and improve chip performance. Types Integrated circuits can be broadly classified into analog, digital and mixed-signal, the last consisting of analog and digital signaling on the same IC. Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, use Boolean algebra to process "one" and "zero" signals.
Among the most advanced integrated circuits are the microprocessors or "cores", used in personal computers, cell-phones, etc. Several cores may be integrated together in a single IC or chip. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits. In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a chip to be programmed to do various LSI-type functions such as logic gates, adders and registers. Programmability comes in various forms – devices that can be programmed only once, devices that can be erased and then re-programmed using UV light, devices that can be (re)programmed using flash memory, and field-programmable gate arrays (FPGAs) which can be programmed at any time, including during operation. Current FPGAs can (as of 2016) implement the equivalent of millions of gates and operate at frequencies up to 1 GHz. Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-amps), process continuous signals, and perform analog functions such as amplification, active filtering, demodulation, and mixing. ICs can combine analog and digital circuits on a chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller size and lower cost, but must account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors. But since 1998, radio chips have been developed using RF CMOS processes. Examples include Intel's DECT cordless phone, or 802.11 (Wi-Fi) chips created by Atheros and other companies. Modern electronic component distributors often further sub-categorize integrated circuits: Digital ICs are categorized as logic ICs (such as microprocessors and microcontrollers), memory chips (such as MOS memory and floating-gate memory), interface ICs (level shifters, serializer/deserializer, etc.), power management ICs, and programmable devices. Analog ICs are categorized as linear integrated circuits and RF circuits (radio frequency circuits). Mixed-signal integrated circuits are categorized as data acquisition ICs (including A/D converters, D/A converters, digital potentiometers), clock/timing ICs, switched capacitor (SC) circuits, and RF CMOS circuits. Three-dimensional integrated circuits (3D ICs) are categorized into through-silicon via (TSV) ICs and Cu-Cu connection ICs. Manufacturing Fabrication The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs, although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals with minimal defects in semiconducting materials' crystal structure. Semiconductor ICs are fabricated in a planar process which includes three key process steps: photolithography, deposition (such as chemical vapor deposition), and etching. The main process steps are supplemented by doping and cleaning.
More recent or high-performance ICs may use multi-gate FinFET or GAAFET transistors instead of planar ones, starting at the 22 nm node (Intel) or the 16/14 nm nodes. Mono-crystal silicon wafers are used in most applications (or for special applications, other semiconductors such as gallium arsenide are used). The wafer need not be entirely silicon. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium or copper) tracks deposited on them. Dopants are impurities intentionally introduced to a semiconductor to modulate its electronic properties. Doping is the process of adding dopants to a semiconductor material. Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (doped polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers. In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer (this is called "the self-aligned gate"). Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Capacitors of a wide range of sizes are common on ICs. Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance (see the sketch below). More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators. Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar junction transistor devices. A random-access memory is the most regular type of integrated circuit; the highest density devices are thus memories, but even a microprocessor will have memory on the chip. Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process. Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires which are thermosonically bonded to pads, usually found around the edge of the die. Thermosonic bonding, first introduced by A. Coucoulas, provided a reliable means of forming these vital electrical connections to the outside world.
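The resistance rule of thumb mentioned above, resistance = sheet resistance × (length / width), in a tiny sketch; the sheet resistance and stripe dimensions are illustrative values only:

```python
def on_chip_resistance(sheet_resistance_ohms_per_square, length_um, width_um):
    """R = R_sheet * (L / W): the sheet resistance in ohms per square
    multiplied by the number of 'squares' in the resistive stripe."""
    return sheet_resistance_ohms_per_square * (length_um / width_um)

# Illustrative: doped polysilicon near 100 ohms/square, a 50 um x 1 um stripe
r = on_chip_resistance(100.0, 50.0, 1.0)
print(f"{r:.0f} ohms")  # 5000 ohms from 50 squares of 100 ohms each
```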
Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar junction transistor devices.
A random-access memory is the most regular type of integrated circuit; the highest-density devices are thus memories, but even a microprocessor will have memory on the chip. Although the structures are intricate, with widths that have been shrinking for decades, the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.
Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires, which are thermosonically bonded to pads, usually found around the edge of the die. Thermosonic bonding was first introduced by A. Coucoulas and provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices.
A fabrication facility (commonly known as a semiconductor fab) can cost over US$12 billion to construct. The cost of a fabrication facility rises over time because of the increased complexity of new products; this is known as Rock's law. Such a facility might feature:
Wafers up to 300 mm in diameter (wider than a common dinner plate).
Transistors at nodes as small as 5 nm.
Copper interconnects, where copper wiring replaces aluminium.
Low-κ dielectric insulators.
Silicon on insulator (SOI).
Strained silicon, in a process used by IBM known as strained silicon directly on insulator (SSDOI).
Multigate devices such as tri-gate transistors.
ICs can be manufactured either in-house by integrated device manufacturers (IDMs) or using the foundry model. IDMs are vertically integrated companies (like Intel and Samsung) that design, manufacture and sell their own ICs, and may offer design and/or manufacturing (foundry) services to other companies (the latter often to fabless companies). In the foundry model, fabless companies (like Nvidia) only design and sell ICs, and outsource all manufacturing to pure-play foundries such as TSMC. These foundries may also offer IC design services.
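Rock's law, mentioned above, is commonly stated as the capital cost of a fab doubling roughly every four years. A small illustrative model in Python; the base year and base cost are hypothetical placeholders, not sourced figures:

```python
def fab_cost_usd(year: int, base_year: int = 2000,
                 base_cost: float = 2e9, doubling_years: float = 4.0) -> float:
    """Rock's law as an exponential: fab cost doubles every ~4 years.
    The base-year cost here is a placeholder, not a sourced figure."""
    return base_cost * 2 ** ((year - base_year) / doubling_years)

for y in (2000, 2008, 2016, 2024):
    print(y, f"${fab_cost_usd(y) / 1e9:.0f} billion")
```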
Packaging
The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic (commonly cresol-formaldehyde-novolac). In the 1980s, pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package, a carrier which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.
In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high-pin-count devices, though PGA packages are still used for high-end microprocessors. Ball grid array (BGA) packages have existed since the 1970s. Flip-chip ball grid array (FCBGA) packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a package substrate similar to a printed circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called area-I/O) to be distributed over the entire die rather than being confined to the die periphery. BGA devices have the advantage of not needing a dedicated socket but are much harder to replace in case of device failure.
Intel transitioned away from PGA to land grid array (LGA) and BGA beginning in 2004, with the last PGA socket released in 2014 for mobile platforms. AMD uses PGA packages on mainstream desktop processors, BGA packages on mobile processors, and LGA packages on high-end desktop and server microprocessors.
Electrical signals leaving the die must pass through the material electrically connecting the die to the package, through the conductive traces (paths) in the package, and through the leads connecting the package to the conductive traces on the printed circuit board. The materials and structures in this path have very different electrical properties from those of signal paths within the die, so off-chip signals require special design techniques to ensure they are not corrupted, and consume much more electric power than signals confined to the die itself.
When multiple dies are put in one package, the result is a system in package, abbreviated SiP. A multi-chip module (MCM) is created by combining multiple dies on a small substrate, often made of ceramic. The distinction between a large MCM and a small printed circuit board is sometimes fuzzy.
Packaged integrated circuits are usually large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date code to identify when the chip was manufactured. Extremely small surface-mount parts often bear only a number used in a manufacturer's lookup table to find the integrated circuit's characteristics. The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.
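A sketch of decoding such a two-digit-year, two-digit-week code in Python, assuming the 1900s for the century and ISO week numbering; actual manufacturers' conventions vary:

```python
import datetime

def decode_date_code(code: str) -> datetime.date:
    """Decode a YYWW date code such as '8341' to the Monday of that week.
    The century (1900s) and ISO week numbering are assumptions."""
    year = 1900 + int(code[:2])
    week = int(code[2:])
    return datetime.date.fromisocalendar(year, week, 1)

print(decode_date_code("8341"))  # 1983-10-10, i.e. week 41 of 1983
```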
Intellectual property
Because an integrated circuit can be copied by photographing each of its layers and preparing photomasks for its production from the photographs obtained, legislation was introduced to protect layout designs. The US Semiconductor Chip Protection Act of 1984 established intellectual property protection for photomasks used to produce integrated circuits. A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The treaty is currently not in force, but was partially integrated into the TRIPS agreement. Several United States patents are connected to the integrated circuit, including patents by J. S. Kilby and by R. F. Stewart.
National laws protecting IC layout designs have been adopted in a number of countries, including Japan, the EC, the UK, Australia, and Korea. The UK enacted the Copyright, Designs and Patents Act 1988 (c. 48, § 213) after initially taking the position that its copyright law fully protected chip topographies; see British Leyland Motor Corp. v. Armstrong Patents Co. Criticisms of the UK copyright approach, as perceived by the US chip industry, are summarized in subsequent chip-rights developments. Australia passed the Circuit Layouts Act of 1989 as a sui generis form of chip protection. Korea passed the Act Concerning the Layout-Design of Semiconductor Integrated Circuits in 1992.
Generations
In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As metal–oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation (EDA). Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and to build new devices that require only a few gates. The 7400 series of TTL chips, for example, has become a de facto standard and remains in production.
Small-scale integration (SSI)
The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large-scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were SSI.
SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman missile that forced it into mass production. The Minuteman missile program and various other United States Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government spending on space and defense still accounted for 37% of the $312 million total production. This government demand supported the nascent integrated circuit market until costs fell enough to allow IC firms to penetrate the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50 in 1962 to $2.33 in 1968. Integrated circuits began to appear in consumer products around the turn of the 1970s; a typical application was FM inter-carrier sound processing in television receivers.
The first MOS chips were small-scale integration chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960, the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI chips was for NASA satellites.
Medium-scale integration (MSI)
The next step in the development of integrated circuits introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). MOSFET scaling technology made it possible to build such high-density chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a then-incredible 120 MOS transistors on a single chip.
The same year, General Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of 120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to chips with hundreds of MOSFETs per chip by the late 1960s.
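Moore's law is commonly paraphrased as transistor counts doubling about every two years. A rough illustration in Python; the two-year doubling period and the 1971 baseline of the 2,300-transistor Intel 4004 are conventional reference points, not exact predictions:

```python
def transistor_count(year: float, base_year: float = 1971,
                     base_count: float = 2300,
                     doubling_years: float = 2.0) -> float:
    """Moore's law as a simple exponential model. The baseline is the
    2,300-transistor Intel 4004 (1971); real chips track this curve
    only approximately."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1989, 2005):
    print(y, f"{transistor_count(y):,.0f}")
# 1989 gives roughly 1.2 million, consistent with microprocessors
# passing the million-transistor mark that year.
```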
Large-scale integration (LSI)
Further development, driven by the same MOSFET scaling technology and by economic factors, led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors per chip. The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith tape or similar. For large or complex ICs (such as memories or processors), this was often done by specially hired professionals in charge of circuit layout, working under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask. Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had fewer than 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.
Very-large-scale integration (VLSI)
"Very-large-scale integration" (VLSI) is a development that started with hundreds of thousands of transistors in the early 1980s; as of 2023, maximum transistor counts have grown beyond 5.3 trillion transistors per chip. Multiple developments were required to achieve this increased density. Manufacturers moved to smaller MOSFET design rules and cleaner fabrication facilities. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS), which has since been succeeded by the International Roadmap for Devices and Systems (IRDS). Electronic design tools improved, making it practical to finish designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. The complexity and density of modern VLSI devices made it no longer feasible to check the masks or do the original design by hand; instead, engineers rely on EDA tools to perform most functional verification work. In 1986, one-megabit random-access memory (RAM) chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989 and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.
ULSI, WSI, SoC and 3D-IC
To reflect further growth in complexity, the term ULSI, standing for "ultra-large-scale integration", was proposed for chips of more than 1 million transistors.
Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term very-large-scale integration, the state of the art when WSI was being developed.
A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and whilst performance benefits can be had from integrating all needed components on one die, the cost of licensing and developing a one-die machine may still outweigh that of having separate devices. With appropriate licensing, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging, above). Further, signal sources and destinations are physically closer on die, reducing the length of wiring and therefore latency, transmission power costs and waste heat from communication between modules on the same chip. This has led to an exploration of so-called network-on-chip (NoC) devices, which apply system-on-chip design methodologies to digital communication networks as opposed to traditional bus architectures.
A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.
Silicon labeling and graffiti
To allow identification during production, most silicon chips will have a serial number in one corner. It is also common to add the manufacturer's logo. Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These artistic additions, often created with great attention to detail, showcase the designers' creativity and add a touch of personality to otherwise utilitarian components. They are sometimes referred to as chip art, silicon art, silicon graffiti or silicon doodling.
ICs and IC families
The 555 timer IC
The operational amplifier
7400-series integrated circuits
4000-series integrated circuits, the CMOS counterpart to the 7400 series (see also: 74HC00 series)
Intel 4004, generally regarded as the first commercially available microprocessor, which led to the 8008, the famous 8080 CPU, the 8086, the 8088 (used in the original IBM PC), and the fully backward-compatible (with the 8088/8086) 80286, 80386/i386, i486, etc.
The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home computers of the early 1980s
The Motorola 6800 series of computer-related chips, leading to the 68000 and 88000 series (the 68000 series was very successful and was used in the Apple Lisa and pre-PowerPC Macintosh, Commodore Amiga, Atari ST/TT/Falcon030, and NeXT families of computers, along with many models of workstations and servers from many manufacturers in the 1980s, and many other systems and devices)
The LM series of analog integrated circuits