Dataset columns: id, url, title, text, topic (4 classes), section, sublist (9 classes)
1640343
https://en.wikipedia.org/wiki/Kreutz%20sungrazer
Kreutz sungrazer
The Kreutz sungrazers are a family of sungrazing comets, characterized by orbits taking them extremely close to the Sun at perihelion. At the far extreme of their orbits, aphelion, Kreutz sungrazers can be a hundred times farther from the Sun than the Earth is, while their distance of closest approach can be less than twice the Sun's radius. They are believed to be fragments of one large comet that broke up several centuries ago and are named for German astronomer Heinrich Kreutz, who first demonstrated that they were related. These sungrazers make their way from the distant outer Solar System to the inner Solar System, to their perihelion point near the Sun, and then leave the inner Solar System on their return trip to aphelion. Several members of the Kreutz family have become great comets, occasionally visible near the Sun in the daytime sky. The most recent of these was Comet Ikeya–Seki in 1965, which may have been one of the brightest comets in the last millennium. It has been suggested that another cluster of bright Kreutz system comets may begin to arrive in the inner Solar System in the next few decades. More than 5,000 members of the family have been discovered since the launch of the SOHO satellite in 1995. None of these smaller comets has survived its perihelion passage. Larger sungrazers such as the Great Comet of 1843 and C/2011 W3 (Lovejoy) have survived their perihelion passages. Amateur astronomers have been successful at discovering Kreutz comets in the data available in real time via the internet. Discovery and historical observations The first comet whose orbit had been found to take it extremely close to the Sun was the Great Comet of 1680. This comet was found to have passed just (0.0013 AU) above the Sun's surface, equivalent to around a seventh of the Sun's diameter, or about half the distance between the Earth and the Moon. Astronomers at the time, including Edmond Halley, speculated that this comet was a return of a bright comet seen close to the Sun in the sky in 1106. 163 years later, the Great Comet of 1843 appeared and also passed extremely close to the Sun. Despite orbital calculations showing that it had a period of several centuries, some astronomers wondered if it was a return of the 1680 comet. A bright comet seen in 1880 was found to be travelling on an almost identical orbit to that of 1843, as was the subsequent Great Comet of 1882. Some astronomers suggested that perhaps they were all one comet, whose orbital period was somehow being drastically shortened at each perihelion passage, perhaps by retardation by some dense material surrounding the Sun. An alternative suggestion was that the comets were all fragments of an earlier Sun-grazing comet. This idea was first proposed in 1880, and its plausibility was amply demonstrated when the Great Comet of 1882 broke up into several fragments after its perihelion passage. In 1888, Heinrich Kreutz published a paper showing that the comets of 1843 (C/1843 D1, the Great March Comet), 1880 (C/1880 C1, the Great Southern Comet), and 1882 (C/1882 R1, Great September Comet) were probably fragments of a giant comet that had broken up several orbits before. The comet of 1680 proved to be unrelated to this family of comets. After another Kreutz sungrazer was seen in 1887 (C/1887 B1, the Great Southern Comet of 1887), the next one did not appear until 1945.
Two further sungrazers appeared in the 1960s, Comet Pereyra in 1963 and Comet Ikeya–Seki, which became extremely bright in 1965, and broke into three pieces after its perihelion. It is probably the most famous among the Kreutz sungrazers. The appearance of two Kreutz sungrazers in quick succession inspired further study of the dynamics of the group. Initially, the name "sungrazer" was applied exclusively to the Kreutz group. Physical traits Most sungrazing comets are part of the Kreutz family. The group generally has an eccentricity approaching 1, orbital inclination of 139–144° (precluding close encounters with planets), a perihelion distance of less than 0.01 AU (less than the diameter of the Sun), an aphelion distance of about 100 AU and an orbital period of about 500–1,000 years. Erosion of the comets by solar energy during close passages leads to progressive changes in their orbits. Most Kreutz sungrazers have radii of less than , but the brightest ones reach radii of . The bodies themselves have irregular shapes and appearances which have been described as diffuse, star-like or tailed. The material that makes up their cometary nuclei has a low tensile strength. They have only low concentrations of volatiles and thus become active only close to the Sun, since they have lost most of their volatiles during earlier transits. Their brightness may peak shortly before perihelion at 10–15 solar radii, after which they become dimmer. This may be due to the evaporation of minerals like olivine and pyroxene. Other studies find a more chaotic pattern of brightening and darkening. The water and organic materials of a comet evaporate first, exposing fluffy aggregations of olivines that form dust tails. Dust from these comets remains in the solar corona, where it interacts with the Sun's magnetic field. Notable members The brightest members of the Kreutz sungrazers have been spectacular, easily visible in the daytime sky. The three most impressive have been the Great Comet of 1843, the Great Comet of 1882 and X/1106 C1. The progenitor of all Kreutz sungrazers observed to date may be the Great Comet of 371 BC, or comets seen in 214 BC, 423 AD or 467 AD. Another notable Kreutz sungrazer was the Eclipse Comet of 1882. Other candidate Kreutz sungrazers are comets observed in 582 AD in China and Europe, X/1381 V1 which was seen from Japan, Korea, Russia and Egypt, two comets seen in 1668 and 1695, C/1880 C1, the Great Southern Comet of 1887, C/1945 X1 (du Toit), C/1970 K1 and C/2005 S1, one of the best-observed Kreutz sungrazers. Great Comet of 371 BC The Great Comet seen in the winter of 372–371 BC was an extremely bright comet thought to be the progenitor of the entire Kreutz sungrazer family. It was observed by Aristotle and Ephorus during the period in which it was visible to the naked eye. It was reported to have had an extremely long, extremely bright, prominent tail with a reddish colour, as well as a nucleus brighter than any star in the night sky. Great Comet of 1106 AD The Great Comet of 1106 AD was a gigantic comet noticed by observers from all over the world. On 2 February 1106 AD, a star was reported to have appeared next to the Sun, about a degree from it. 
It seems to have diminished in brightness after this apparition, with a rather faint, unremarkable nucleus after perihelion, but its tail grew enormously and on February 7 Japanese observers said the extremely bright white tail stretched about 100 degrees across the night sky and was also reported to have branched into multiple tails. On February 9, it dimmed slightly, but its tail was still exceedingly bright, measuring 60 degrees long and 3 degrees across. European texts record the comet's entire naked-eye duration as anywhere from 15 to 70 days. Recent evaluations, as well as observations of the comet splitting into multiple pieces after perihelion, have suggested that this comet was the progenitor of an entire subgroup of Kreutz sungrazers, including the extremely bright sungrazers of 1882, 1843 and 1965. Observations also suggest that the larger fragment of the Great Comet of 371 BC later returned as the Great Comet of 1106 AD. Great Comet of 1843 The Great Comet of 1843 was first noticed in early February of that year, just over three weeks before its perihelion passage, when it passed about from the surface of the Sun. By February 27 it was easily visible in the daytime sky, and observers described seeing a tail 2–3° long stretching away from the Sun before being lost in the glare of the sky. After its perihelion passage, it reappeared in the morning sky, and developed an extremely long tail. It extended about 45° across the sky on March 11 and was more than 2° wide; the tail was calculated to be more than 300 million kilometers (2 AU) long; for comparison, the Earth–Sun distance (1 AU) is only 150 million kilometers. This held the record for the longest measured cometary tail until 2000, when Comet Hyakutake's tail was found to stretch to some 550 million kilometers in length. The maximum apparent magnitude attained by this comet was −10. The comet was very prominent throughout early March, before fading away to almost below naked eye visibility by the beginning of April. It was last detected on April 20. This comet apparently made a substantial impression on the public, inspiring in some a fear that judgement day was imminent. Eclipse Comet of 1882 A party of observers gathered in Egypt to watch a solar eclipse in May 1882 also observed a bright streak near the Sun once totality began. The streak was the perihelion passage of a Kreutz comet, and its sighting during the eclipse was the only observation of it. Photographs of the eclipse revealed that the comet had moved noticeably during the 1m50s eclipse, as would be expected for a comet racing past the Sun at almost 500 km/s. The comet is sometimes referred to as Tewfik, after Tewfik Pasha, the Khedive of Egypt at the time. Great Comet of 1882 The Great Comet of 1882 was discovered independently by many observers, as it was already easily visible to the naked eye when it appeared in early September 1882, just a few days before perihelion, at which point it reached an apparent magnitude estimated to have been −17, by far the brightest recorded for any comet and exceeding the brightness of the full moon by a factor of 57. It grew rapidly brighter and was eventually so bright it was visible in the daytime for two days (16–17 September), even through light cloud. After its perihelion passage, the comet remained bright for several weeks. During October, its nucleus was seen to fragment into first two and then four pieces.
Some observers also reported seeing diffuse patches of light several degrees away from the nucleus. The rate of separation of the fragments of the nucleus was such that they will return about a century apart, between 670 and 960 years after the break-up. Comet Ikeya–Seki Comet Ikeya–Seki is the most recent very bright Kreutz sungrazer. It was discovered independently by two Japanese amateur astronomers on September 18, 1965, within 15 minutes of each other, and quickly recognised as a Kreutz sungrazer. It brightened rapidly over the following four weeks as it approached the Sun, and reached apparent magnitude 2 by October 15. Its perihelion passage occurred on October 21, and observers across the world easily saw it in the daytime sky. A few hours before perihelion passage on October 21 it had a visible magnitude from −10 to −11, comparable to the first quarter of the Moon and brighter than any other comet seen since 1882. A day after perihelion its magnitude decreased to just −4. Japanese astronomers used a coronagraph to observe how the comet broke into three pieces 30 minutes before perihelion. When the comet reappeared in the morning sky in early November, two of these nuclei were definitely detected with the third suspected. The comet developed a very prominent tail, about 25° in length, before fading throughout November. It was last detected in January 1966. Dynamical history and evolution A study by Brian G. Marsden in 1967 was the first attempt to trace back the orbital history of the group to identify the progenitor comet. All known members of the group up until 1965 had almost identical orbital inclinations at about 144°, as well as very similar values for the longitude of perihelion at 280–282°, with a couple of outlying points probably due to uncertain orbital calculations. A greater range of values existed for the argument of perihelion and longitude of the ascending node. Marsden found that the Kreutz sungrazers could be split into two groups, with slightly different orbital elements, implying that the family resulted from fragmentations at more than one perihelion. Tracing back the orbits of Ikeya–Seki and the Great Comet of 1882, Marsden found that at their previous perihelion passage, the difference between their orbital elements was of the same order of magnitude as the difference between the elements of the fragments of Ikeya–Seki after it broke up. This meant it was realistic to presume that they were two parts of the same comet which had broken up one orbit ago. The best candidate for the progenitor comet was the Great Comet of 1106: Ikeya–Seki's derived orbital period indicated that its previous perihelion matched that of 1106, and while the Great Comet of 1882's derived orbit implied a previous perihelion a few decades later, it would only require a small change in the orbital elements to bring it into agreement. The Sun-grazing comets of 1668, 1689, 1702 and 1945 seem to be closely related to those of 1882 and 1965, although their orbits are not well enough determined to establish whether they broke off from the parent comet in 1106, or the previous perihelion passage before that, some time in the 3–5th centuries AD. This subgroup of comets is known as Subgroup II. Comet White–Ortiz–Bolelli, which was seen in 1970, is more closely related to this group than Subgroup I, but appears to have broken off during the previous orbit to the other fragments. 
The Sun-grazing comets observed in 1843 (Great Comet of 1843) and 1963 (Comet Pereyra) seem to be closely related and belong to the subgroup I, although when their orbits are traced back to one previous perihelion, the differences between the orbital elements are still rather large, probably implying that they broke apart from each other one revolution before that. They may not be related to the comet of 1106, but rather a comet that returned about 50 years before that. Subgroup I also includes comets seen in 1695, 1880 (Great Southern Comet of 1880) and in 1887 (Great Southern Comet of 1887), as well as the vast majority of comets detected by the SOHO mission (see below). The distinction between the two sub-groups is thought to imply that they result from two separate parent comets, which themselves were once part of a 'grandparent' comet which fragmented several orbits previously. One possible candidate for the grandparent is a comet observed by Aristotle and Ephorus in 371 BC. Ephorus claimed to have seen this comet break into two. However modern astronomers are skeptical of the claims of Ephorus, because they were not confirmed by other sources. Instead comets that arrived between 3rd and 5th centuries AD (comets of 214, 426 and 467) are considered as possible progenitors of the Kreutz family. The original comet must certainly have been very large indeed, perhaps as large as 100 km across although a size of only a few tens of kilometres, akin to Comet Hale–Bopp, is also possible. One study suggests that the progenitor's orbit changed in a two-step process beginning in the Oort cloud: first, being perturbed into an ellipse whose semimajor axis was about 100 AU, and second, evolving into a sungrazing orbit via the Kozai mechanism. Although its orbit is rather different from those of the main two groups, it is possible that the comet of 1680 is also related to the Kreutz sungrazers via a fragmentation many orbits ago. The Kreutz sungrazers are probably not a unique phenomenon. Other families of sungrazing comets that formed from the breakup of a parent body are the Meyer sungrazers, the Marsden sunskirters and the Kracht sunskirters. These form the 'non-Kreutz' or 'sporadic' sungrazers. The Kreutz, Marsden and Kracht families and the comet 96P/Machholz may in turn form a larger family, the Machholz interplanetary complex, that may have formed through the breakup of a parent body before 950 CE. The ultimate origin of the Kreutz sungrazers is probably the Oort cloud, with unknown physical processes reducing the semi-major axis until a sungrazing comet resulted. This process may occur a few times every million years, which may either be an underestimate or may indicate that humanity is lucky that such a Kreutz sungrazer family exists just now. Studies have shown that for comets with high orbital inclinations and perihelion distances of less than about 2 AU, the cumulative effect of gravitational perturbations tends to result in sungrazing orbits. One study has estimated that Comet Hale–Bopp has about a 15% chance of eventually becoming a Sun-grazing comet. Comet families resembling the Kreutz group have been detected around the star Beta Pictoris. Recent observations Until recently, a very bright member of the Kreutz sungrazers could pass through the inner Solar System unnoticed if its perihelion had occurred between about May and August. 
At this time of year, as seen from Earth, the comet would approach and recede almost directly behind the Sun and could only become visible extremely close to the Sun if it became very bright. Only a remarkable coincidence between the perihelion passage of the Eclipse Comet of 1882 and a total solar eclipse allowed its discovery. During the 1980s, two Sun-observing satellites serendipitously discovered several new members of the Kreutz family. Since the launch of the SOHO Sun-observing satellite in 1995, it has been possible to observe comets very close to the Sun at any time of year. The satellite provides a constant view of the immediate solar vicinity, and SOHO has now discovered thousands of new Sun-grazing comets, some just a few metres across. About 83% of the sungrazers found by SOHO are members of the Kreutz group, with the others including the Meyer, Marsden, and Kracht 1 and 2 families. New Kreutz sungrazers are discovered roughly once every three days, while many are likely going unobserved. Their frequency increased from the period 1997–2002 to 2003–2008. They probably have radii of only a few dozen metres. Apart from Comet Lovejoy, none of the sungrazers seen by SOHO has survived its perihelion passage; some may have plunged into the Sun itself, but most are likely to have simply evaporated away completely. Centrifugal breakup is another important process that destroys smaller Kreutz sungrazers, and may explain the delayed breakup of some Kreutz comets long after they have passed through perihelion and are moving away from the Sun. Some 85%, or about 3,400, of the 4,000 comets that have been identified using SOHO data, mostly by amateur astronomers analysing SOHO's observations via the Internet, were Kreutz sungrazers. NASA's JPL Small-Body Database lists about 1,300 Kreutz sungrazers. Sungrazers frequently arrive in pairs or triplets separated by a few hours. These pairs are too frequent to occur by chance, and cannot be due to break-ups on the previous orbit, because the fragments would have separated by a much greater distance. Instead, it is thought that the pairs result from fragmentations far away from the perihelion. Many comets have been observed to fragment far from perihelion, and it seems that in the case of the Kreutz sungrazers, an initial fragmentation near perihelion can be followed by an ongoing 'cascade' of break-ups throughout the rest of the orbit. There are minor differences between Subgroup I and Subgroup II Kreutz sungrazers; the former come slightly closer to the Sun and the ascending nodes differ by about 20°. Subgroup I Kreutz comets outnumber Subgroup II members by a ratio of roughly nine to four. This suggests that the 'grandparent' comet split into parent comets of unequal size. Future Dynamically, the Kreutz sungrazers might continue to be recognised as a distinct family for many thousands of years. Eventually, their orbits will be dispersed by gravitational perturbations, although depending on the rate of fragmentation of the constituent parts, the group might be completely destroyed before it is gravitationally dispersed. During 2002–2017, the occurrence of Kreutz sungrazers remained largely constant. It is not possible to estimate the chances of another very bright Kreutz comet arriving in the near future, but given that at least 10 have reached naked-eye visibility over the last 200 years, another great comet from the Kreutz family seems almost certain to arrive at some point.
Comet White–Ortiz–Bolelli in 1970 reached an apparent magnitude of 1. In December 2011, Kreutz sungrazer C/2011 W3 (Lovejoy) survived its perihelion passage for some time and had an apparent magnitude of −3. This comet is probably not the herald of another arrival of bright Kreutz sungrazers.
Physical sciences
Notable comets
Astronomy
1641247
https://en.wikipedia.org/wiki/Anti-reflective%20coating
Anti-reflective coating
An antireflective, antiglare or anti-reflection (AR) coating is a type of optical coating applied to the surface of lenses, other optical elements, and photovoltaic cells to reduce reflection. In typical imaging systems, this improves the efficiency since less light is lost due to reflection. In complex systems such as cameras, binoculars, telescopes, and microscopes the reduction in reflections also improves the contrast of the image by elimination of stray light. This is especially important in planetary astronomy. In other applications, the primary benefit is the elimination of the reflection itself, such as a coating on eyeglass lenses that makes the eyes of the wearer more visible to others, or a coating to reduce the glint from a covert viewer's binoculars or telescopic sight. Many coatings consist of transparent thin film structures with alternating layers of contrasting refractive index. Layer thicknesses are chosen to produce destructive interference in the beams reflected from the interfaces, and constructive interference in the corresponding transmitted beams. This makes the structure's performance change with wavelength and incident angle, so that color effects often appear at oblique angles. A wavelength range must be specified when designing or ordering such coatings, but good performance can often be achieved for a relatively wide range of frequencies: usually a choice of IR, visible, or UV is offered. Applications Anti-reflective coatings are used in a wide variety of applications where light passes through an optical surface, and low loss or low reflection is desired. Examples include anti-glare coatings on corrective lenses and camera lens elements, and antireflective coatings on solar cells. Corrective lenses Opticians may recommend "anti-reflection lenses" because the decreased reflection enhances the cosmetic appearance of the lenses. Such lenses are often said to reduce glare, but the reduction is very slight. Eliminating reflections allows slightly more light to pass through, producing a slight increase in contrast and visual acuity. Antireflective ophthalmic lenses should not be confused with polarized lenses, which are found only in sunglasses and decrease (by absorption) the visible glare of sun reflected off surfaces such as sand, water, and roads. The term "antireflective" relates to the reflection from the surface of the lens itself, not the origin of the light that reaches the lens. Many anti-reflection lenses include an additional coating that repels water and grease, making them easier to keep clean. Anti-reflection coatings are particularly suited to high-index lenses, as these reflect more light without the coating than a lower-index lens (a consequence of the Fresnel equations). It is also generally easier and cheaper to coat high index lenses. Photolithography Antireflective coatings (ARC) are often used in microelectronic photolithography to help reduce image distortions associated with reflections off the surface of the substrate. Different types of antireflective coatings are applied either before (Bottom ARC, or BARC) or after the photoresist, and help reduce standing waves, thin-film interference, and specular reflections. Solar cells Solar cells are often coated with an anti-reflective coating. Materials that have been used include magnesium fluoride, silicon nitride, silicon dioxide, titanium dioxide, and aluminum oxide. Types Index-matching The simplest form of anti-reflective coating was discovered by Lord Rayleigh in 1886. 
The optical glass available at the time tended to develop a tarnish on its surface with age, due to chemical reactions with the environment. Rayleigh tested some old, slightly tarnished pieces of glass, and found to his surprise that they transmitted more light than new, clean pieces. The tarnish replaces the air-glass interface with two interfaces: an air-tarnish interface and a tarnish-glass interface. Because the tarnish has a refractive index between those of glass and air, each of these interfaces exhibits less reflection than the air-glass interface did. In fact, the total of the two reflections is less than that of the "naked" air-glass interface, as can be calculated from the Fresnel equations. One approach is to use graded-index (GRIN) anti-reflective coatings, that is, ones with nearly continuously varying indices of refraction. With these, it is possible to curtail reflection for a broad band of frequencies and incidence angles. Single-layer interference The simplest interference anti-reflective coating consists of a single thin layer of transparent material with refractive index equal to the square root of the substrate's refractive index. In air, such a coating theoretically gives zero reflectance for light with wavelength (in the coating) equal to four times the coating's thickness. Reflectance is also decreased for wavelengths in a broad band around the center. A layer of thickness equal to a quarter of some design wavelength is called a "quarter-wave layer". The most common type of optical glass is crown glass, which has an index of refraction of about 1.52. An optimal single-layer coating would have to be made of a material with an index of about 1.23. There are no solid materials with such a low refractive index. The closest materials with good physical properties for a coating are magnesium fluoride, MgF2 (with an index of 1.38), and fluoropolymers, which can have indices as low as 1.30, but are more difficult to apply. MgF2 on a crown glass surface gives a reflectance of about 1%, compared to 4% for bare glass. MgF2 coatings perform much better on higher-index glasses, especially those with index of refraction close to 1.9. MgF2 coatings are commonly used because they are cheap and durable. When the coatings are designed for a wavelength in the middle of the visible band, they give reasonably good anti-reflection over the entire band. Researchers have produced films of mesoporous silica nanoparticles with refractive indices as low as 1.12, which function as antireflection coatings. Multi-layer interference By using alternating layers of a low-index material like silica and a higher-index material, it is possible to obtain reflectivities as low as 0.1% at a single wavelength. Coatings that give very low reflectivity over a broad band of frequencies can also be made, although these are complex and relatively expensive. Optical coatings can also be made with special characteristics, such as near-zero reflectance at multiple wavelengths, or optimal performance at angles of incidence other than 0°. Absorbing An additional category of anti-reflection coatings is the so-called "absorbing ARC". These coatings are useful in situations where high transmission through a surface is unimportant or undesirable, but low reflectivity is required. They can produce very low reflectance with few layers, and can often be produced more cheaply, or at greater scale, than standard non-absorbing AR coatings. (See, for example, US Patent 5,091,244.) 
Absorbing ARCs often make use of unusual optical properties exhibited in compound thin films produced by sputter deposition. For example, titanium nitride and niobium nitride are used in absorbing ARCs. These can be useful in applications requiring contrast enhancement or as a replacement for tinted glass (for example, in a CRT display). Moth eye Moths' eyes have an unusual property: their surfaces are covered with a natural nanostructured film, which eliminates reflections. This allows the moth to see well in the dark, without reflections to give its location away to predators. The structure consists of a hexagonal pattern of bumps, each roughly 200 nm high and spaced on 300 nm centers. This kind of antireflective coating works because the bumps are smaller than the wavelength of visible light, so the light sees the surface as having a continuous refractive index gradient between the air and the medium, which decreases reflection by effectively removing the air-lens interface. Practical anti-reflective films have been made by humans using this effect; this is a form of biomimicry. Canon uses the moth-eye technique in their SWC subwavelength structure coating, which significantly reduces lens flare. Such structures are also used in photonic devices, for example, moth-eye structures grown from tungsten oxide and iron oxide can be used as photoelectrodes for splitting water to produce hydrogen. The structure consists of tungsten oxide spheroids several hundred micrometers in diameter, coated with a few nanometers of iron oxide. Circular polarizer A circular polarizer laminated to a surface can be used to eliminate reflections. The polarizer transmits light with one chirality ("handedness") of circular polarization. Light reflected from the surface after the polarizer is transformed into the opposite "handedness". This light cannot pass back through the circular polarizer because its chirality has changed (e.g. from right circular polarized to left circularly polarized). A disadvantage of this method is that if the input light is unpolarized, the transmission through the assembly will be less than 50%. Theory There are two separate causes of optical effects due to coatings, often called thick-film and thin-film effects. Thick-film effects arise because of the difference in the index of refraction between the layers above and below the coating (or film); in the simplest case, these three layers are the air, the coating, and the glass. Thick-film coatings do not depend on how thick the coating is, so long as the coating is much thicker than a wavelength of light. Thin-film effects arise when the thickness of the coating is approximately the same as a quarter or a half a wavelength of light. In this case, the reflections of a steady source of light can be made to add destructively and hence reduce reflections by a separate mechanism. In addition to depending very much on the thickness of the film and the wavelength of light, thin-film coatings depend on the angle at which the light strikes the coated surface. Reflection Whenever a ray of light moves from one medium to another (for example, when light enters a sheet of glass after travelling through air), some portion of the light is reflected from the surface (known as the interface) between the two media. This can be observed when looking through a window, for instance, where a (weak) reflection from the front and back surfaces of the window glass can be seen. 
The strength of the reflection depends on the ratio of the refractive indices of the two media, as well as the angle of the surface to the beam of light. The exact value can be calculated using the Fresnel equations. When the light meets the interface at normal incidence (perpendicularly to the surface), the intensity of light reflected is given by the reflection coefficient, or reflectance, R = ((n0 − nS) / (n0 + nS))^2, where n0 and nS are the refractive indices of the first and second media respectively. The value of R varies from 0 (no reflection) to 1 (all light reflected) and is usually quoted as a percentage. Complementary to R is the transmission coefficient, or transmittance, T. If absorption and scattering are neglected, then the value T is always 1 − R. Thus if a beam of light with intensity I is incident on the surface, a beam of intensity RI is reflected, and a beam with intensity TI is transmitted into the medium. For the simplified scenario of visible light travelling from air (n0 ≈ 1.0) into common glass (nS ≈ 1.5), the value of R is 0.04, or 4%, on a single reflection. So at most 96% of the light (T = 1 − R = 0.96) actually enters the glass, and the rest is reflected from the surface. The amount of light reflected is known as the reflection loss. In the more complicated scenario of multiple reflections, say with light travelling through a window, light is reflected both when going from air to glass and at the other side of the window when going from glass back to air. The size of the loss is the same in both cases. Light also may bounce from one surface to another multiple times, being partially reflected and partially transmitted each time it does so. In all, the combined reflection coefficient is given by 2R/(1 + R). For glass in air, this is about 7.7%. Rayleigh's film As observed by Lord Rayleigh, a thin film (such as tarnish) on the surface of glass can reduce the reflectivity. This effect can be explained by envisioning a thin layer of material with refractive index n1 between the air (index n0) and the glass (index nS). The light ray now reflects twice: once from the surface between air and the thin layer, and once from the layer-to-glass interface. From the equation above and the known refractive indices, reflectivities for both interfaces can be calculated, denoted R01 and R1S respectively. The transmission at each interface is therefore T01 = 1 − R01 and T1S = 1 − R1S. The total transmittance into the glass is thus T1ST01. Calculating this value for various values of n1, it can be found that at one particular value of n1, the optimal refractive index of the layer, the transmittance of the two interfaces is equal, and this corresponds to the maximal total transmittance into the glass. This optimal value is given by the geometric mean of the two surrounding indices: n1 = √(n0 nS). For the example of glass (nS ≈ 1.5) in air (n0 ≈ 1.0), this optimal refractive index is n1 ≈ 1.22. The reflection loss of each interface is approximately 1.0% (with a combined loss of 2.0%), and an overall transmission T1ST01 of approximately 98%. Therefore, an intermediate coating between the air and glass can halve the reflection loss. Interference coatings The use of an intermediate layer to form an anti-reflection coating can be thought of as analogous to the technique of impedance matching of electrical signals. (A similar method is used in fibre optic research, where an index-matching oil is sometimes used to temporarily defeat total internal reflection so that light may be coupled into or out of a fiber.)
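The numbers above are easy to check. The short Python sketch below is an illustrative calculation of mine, not part of the article; the function name and the rounded indices n0 ≈ 1.0 and nS ≈ 1.5 are assumptions. It reproduces the ~4% single-surface reflection, the ~7.7% two-surface window loss, and the ~98% transmission through a Rayleigh-type intermediate film with the geometric-mean index.

```python
import math

def fresnel_reflectance(n0: float, n1: float) -> float:
    """Normal-incidence reflectance at a single interface (Fresnel equations)."""
    return ((n0 - n1) / (n0 + n1)) ** 2

# Single air-glass interface: n0 ~ 1.0 (air), nS ~ 1.5 (common glass)
n_air, n_glass = 1.0, 1.5
R = fresnel_reflectance(n_air, n_glass)
print(f"single surface: R = {R:.2%}")                    # ~4%

# Two surfaces of a window, including multiple internal bounces: 2R/(1 + R)
print(f"window (two surfaces): {2 * R / (1 + R):.2%}")   # ~7.7%

# Rayleigh's film: an intermediate layer with the geometric-mean index
n_film = math.sqrt(n_air * n_glass)                      # ~1.22
R01 = fresnel_reflectance(n_air, n_film)
R1S = fresnel_reflectance(n_film, n_glass)
T_total = (1 - R01) * (1 - R1S)                          # ignoring interference
print(f"optimal film index: {n_film:.3f}")
print(f"per-interface losses: {R01:.2%} and {R1S:.2%}")
print(f"total transmittance with film: {T_total:.2%}")   # ~98%
```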
Further reduced reflection could in theory be made by extending the process to several layers of material, gradually blending the refractive index of each layer between the index of the air and the index of the substrate. Practical anti-reflection coatings, however, rely on an intermediate layer not only for its direct reduction of the reflection coefficient, but also for the interference effect of a thin layer. Assume the layer's thickness is controlled precisely, such that it is exactly one quarter of the wavelength of light in the layer (d = λ0/(4n1), where λ0 is the vacuum wavelength and n1 is the refractive index of the layer). The layer is then called a quarter-wave coating. For this type of coating a normally incident beam I, when reflected from the second interface, will travel exactly half its own wavelength further than the beam reflected from the first surface, leading to destructive interference. This is also true for thicker coating layers (3λ/4, 5λ/4, etc.); however, the anti-reflective performance is worse in this case due to the stronger dependence of the reflectance on wavelength and the angle of incidence. If the intensities of the two beams R1 and R2 are exactly equal, they will destructively interfere and cancel each other, since they are exactly out of phase. Therefore, there is no reflection from the surface, and all the energy of the beam must be in the transmitted ray, T. In the calculation of the reflection from a stack of layers, the transfer-matrix method can be used. Real coatings do not reach perfect performance, though they are capable of reducing a surface reflection coefficient to less than 0.1%. Also, the layer will have the ideal thickness for only one distinct wavelength of light. Other difficulties include finding suitable materials for use on ordinary glass, since few useful substances have the required refractive index (about 1.23 for ordinary glass) that will make both reflected rays exactly equal in intensity. Magnesium fluoride (MgF2) is often used, since this is hard-wearing and can be easily applied to substrates using physical vapor deposition, even though its index is higher than desirable (1.38). Further reduction is possible by using multiple coating layers, designed such that reflections from the surfaces undergo maximal destructive interference. One way to do this is to add a second quarter-wave thick higher-index layer between the low-index layer and the substrate. The reflection from all three interfaces produces destructive interference and anti-reflection. Other techniques use varying thicknesses of the coatings. By using two or more layers, each of a material chosen to give the best possible match of the desired refractive index and dispersion, broadband anti-reflection coatings covering the visible range (400–700 nm) with maximal reflectivity of less than 0.5% are commonly achievable. The exact nature of the coating determines the appearance of the coated optic; common AR coatings on eyeglasses and photographic lenses often look somewhat bluish (since they reflect slightly more blue light than other visible wavelengths), though green and pink-tinged coatings are also used. If the coated optic is used at non-normal incidence (that is, with light rays not perpendicular to the surface), the anti-reflection capabilities are degraded somewhat. This occurs because the phase accumulated in the layer relative to the phase of the light immediately reflected decreases as the angle increases from normal. This is counterintuitive, since the ray experiences a greater total phase shift in the layer than for normal incidence.
This paradox is resolved by noting that the ray will exit the layer spatially offset from where it entered and will interfere with reflections from incoming rays that had to travel further (thus accumulating more phase of their own) to arrive at the interface. The net effect is that the relative phase is actually reduced, so that the anti-reflection band of the coating tends to move to shorter wavelengths as the optic is tilted. Non-normal incidence angles also usually cause the reflection to be polarization-dependent. Textured coatings Reflection can be reduced by texturing the surface with 3D pyramids or 2D grooves (gratings). These kinds of textured coatings can be created using, for example, the Langmuir–Blodgett method. If the wavelength is greater than the texture size, the texture behaves like a gradient-index film with reduced reflection. To calculate reflection in this case, effective medium approximations can be used. To minimize reflection, various profiles of pyramids have been proposed, such as cubic, quintic or integral exponential profiles. If the wavelength is smaller than the texture size, the reflection reduction can be explained with the help of the geometric optics approximation: rays should be reflected many times before they are sent back toward the source. In this case the reflection can be calculated using ray tracing. Using texture reduces reflection for wavelengths comparable with the feature size as well. In this case no approximation is valid, and reflection can be calculated by solving Maxwell's equations numerically. Antireflective properties of textured surfaces are well discussed in the literature for a wide range of size-to-wavelength ratios (including long- and short-wave limits) to find the optimal texture size. History As mentioned above, natural index-matching "coatings" were discovered by Lord Rayleigh in 1886. Harold Dennis Taylor of the Cooke company developed a chemical method for producing such coatings in 1904. Interference-based coatings were invented and developed in 1935 by Olexander Smakula, who was working for the Carl Zeiss optics company. These coatings remained a German military secret for several years, until the Allies discovered the secret at the end of World War II. Katharine Burr Blodgett and Irving Langmuir developed organic anti-reflection coatings known as Langmuir–Blodgett films in the late 1930s.
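As a numerical companion to the quarter-wave interference coatings described above, the following sketch evaluates the standard single-layer thin-film reflectance formula at normal incidence. The design wavelength of 550 nm, the rounded indices, and the function name are my own illustrative assumptions; real coatings are normally designed with full transfer-matrix tools.

```python
import cmath
from math import pi, sqrt

def single_layer_reflectance(n0, n1, ns, d, wavelength):
    """Normal-incidence reflectance of one homogeneous layer (index n1, thickness d)
    on a substrate of index ns, illuminated from a medium of index n0."""
    r01 = (n0 - n1) / (n0 + n1)               # medium -> layer amplitude reflection
    r1s = (n1 - ns) / (n1 + ns)               # layer -> substrate amplitude reflection
    phase = cmath.exp(-2j * (2 * pi * n1 * d / wavelength))
    r = (r01 + r1s * phase) / (1 + r01 * r1s * phase)
    return abs(r) ** 2

design_wl = 550e-9                            # assumed design wavelength (green light)
n_air, n_mgf2, n_glass = 1.0, 1.38, 1.52      # rounded textbook indices
d_quarter = design_wl / (4 * n_mgf2)          # quarter-wave thickness in the layer

print(f"quarter-wave MgF2 thickness: {d_quarter * 1e9:.0f} nm")
print(f"MgF2-coated glass at 550 nm: R = "
      f"{single_layer_reflectance(n_air, n_mgf2, n_glass, d_quarter, design_wl):.2%}")
n_ideal = sqrt(n_air * n_glass)               # ~1.23, the index that would give R = 0
print(f"ideal-index coating at 550 nm: R = "
      f"{single_layer_reflectance(n_air, n_ideal, n_glass, design_wl / (4 * n_ideal), design_wl):.4%}")
# Bare glass for comparison: ((1 - 1.52) / (1 + 1.52)) ** 2, about 4.3%
```

With these assumed values the MgF2 result comes out near 1% and the ideal-index result near zero, in line with the figures quoted earlier in the article.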
Technology
Optics
null
1642993
https://en.wikipedia.org/wiki/Cotton%20picker
Cotton picker
A cotton picker is either a machine that harvests cotton, or a person who picks ripe cotton fibre from the plants. The machine is also referred to as a cotton harvester. History In many societies, slave labor was utilized to pick the cotton, increasing the plantation owner's profit margins (see Trans-Atlantic Slave Trade). The first practical cotton picker was invented over a period of years beginning in the late 1920s by John Daniel Rust (1892–1954) with the later help of his brother Mack Rust. Other inventors had tried designs with a barbed spindle to twist cotton fibers onto the spindle and then pull the cotton from the boll, but these early designs were impractical because the spindle became clogged with cotton. Rust determined that a smooth, moist spindle could be used to strip the fibers from the boll without trapping them in the machinery. In 1933 John Rust received his first patent, and eventually, he and his brother owned forty-seven patents on cotton picking machinery. However, during the Great Depression it was difficult to obtain financing to develop their inventions. In 1935 the Rust brothers founded the Rust Cotton Picker Company in Memphis, Tennessee, and on 31 August 1936 demonstrated the Rust picker at the Delta Experiment Station in Stoneville, Mississippi. Although the first Rust picker was not without serious deficiencies, it did pick cotton and the demonstration attracted considerable national press coverage. Nevertheless, the Rusts' company did not have the capability of manufacturing cotton pickers in significant quantities. With the success of the Rust picker, other companies redoubled their efforts to produce practical pickers not based on the Rust patents. Widespread adoption, however, was delayed by the manufacturing demands of World War II. The International Harvester Company produced a commercially successful cotton picker in 1944. After World War II, the Allis-Chalmers Manufacturing Company manufactured cotton pickers using an improved Rust design. In the following years mechanical pickers were gradually improved and were increasingly adopted by farmers. The introduction of the cotton picker has been cited as a factor in the Second Great Migration. Cotton plant improvements To make mechanical cotton pickers more practical, improvements in the cotton plant and in cotton culture were also necessary. In earlier times, cotton fields had to be picked by hand three or four times each harvest season because the bolls matured at different rates. It was not practical to delay picking until all the bolls were ready because the quality of the cotton deteriorated as soon as bolls opened. But about the time mechanical pickers were introduced, plant breeders developed hybrid cotton varieties whose bolls grew higher off the ground and ripened uniformly. With those innovations, the harvester could make just one pass through the field. Also, herbicides were developed to defoliate the plants and drop their leaves before the picker came through, producing a cleaner harvest. Conventional harvester The first harvesters were only capable of harvesting one row of cotton at a time, but were still able to replace up to forty hand laborers. The current cotton picker is a self-propelled machine that removes cotton lint and seed (seed-cotton) from the plant, harvesting up to six rows at a time. There are two types of pickers in use today. One is the "stripper" picker, primarily found in use in Texas. They are also found in Arkansas.
It removes not only the lint from the plant, but a fair amount of the plant matter as well (such as unopened bolls). The plant matter is later separated from the lint by a process that drops the heavier material before the lint reaches the basket at the rear of the picker. The other type of picker is the "spindle" picker. It uses rows of barbed spindles that rotate at high speed and remove the seed-cotton from the plant. The seed-cotton is then removed from the spindles by a counter-rotating doffer and is then blown up into the basket. Once the basket is full, the picker dumps the seed-cotton into a "module builder". The module builder creates a compact "brick" of seed-cotton, weighing approximately (16 un-ginned bales), which can be stored in the field or in the "gin yard" until it is ginned. Each ginned bale weighs roughly . An industry-exclusive on-board round module builder was offered by John Deere in 2007. In about 2008, the Case IH Module Express 625 was designed in collaboration with ginners and growers to provide a cotton picker with the ability to build modules while harvesting the crop.
Technology
Farm and garden machinery
null
12163350
https://en.wikipedia.org/wiki/Whirlwind
Whirlwind
A whirlwind is a phenomenon in which a vortex of wind (a vertically oriented rotating column of air) forms due to instabilities and turbulence created by heating and flow (current) gradients. Whirlwinds can vary in size and last from a couple minutes to a couple hours. Types Whirlwinds are subdivided into two types, the great (or major) whirlwinds, and the lesser (or minor) whirlwinds. The first category includes tornadoes, waterspouts, and landspouts. The range of atmospheric vortices constitute a continuum and are difficult to categorize definitively. Some lesser whirlwinds may sometimes form in a similar manner to greater whirlwinds with related increase in intensity. These intermediate types include the gustnado and the fire whirl. Other lesser whirlwinds include dust devils, as well as steam devils, snow devils, debris devils, leaf devils or hay devils, water devils, and shear eddies such as the mountainado and eddy whirlwinds. Formation A major whirlwind (such as a tornado) is formed from supercell thunderstorms (the most powerful type of thunderstorm) or other powerful storms. When the storms start to spin, they react with other high altitude winds, causing a funnel to spin. A cloud forms over the funnel, making it visible. A minor whirlwind is created when local winds start to spin on the ground. This causes a funnel to form. The funnel moves over the ground, pushed by the winds that first formed it. The funnel picks up materials such as dust or snow as it moves over the ground, thus becoming visible. Duration Major whirlwinds last longer because they are formed from very powerful winds, and it is hard, though not impossible, to interrupt them. Minor whirlwinds are not as long-lived; the winds that form them do not last long, and when a minor whirlwind encounters an obstruction (a building, a house, a tree, etc.), its rotation is interrupted, as is the windflow into it, causing it to dissipate. Associated weather Supercell thunderstorms, other powerful storms, and strong winds are seen with major whirlwinds. Wind storms are commonly seen with minor whirlwinds. Also, small, semi-powerful “wind blasts” may be seen before some minor whirlwinds, which can come from a wind storm. These wind blasts can start to rotate and form minor whirlwinds. Winds from other small storms (such as rain storms and local thunderstorms) can cause minor whirlwinds to form. Like major whirlwinds, these minor whirlwinds can also be dangerous at times. Similar phenomena Eddies and vortices may form in any fluid. In water, a whirlpool is a similar phenomenon.
Physical sciences
Storms
Earth science
7490861
https://en.wikipedia.org/wiki/Magnetic-tape%20data%20storage
Magnetic-tape data storage
Magnetic-tape data storage is a system for storing digital information on magnetic tape using digital recording. Tape was an important medium for primary data storage in early computers, typically using large open reels of 7-track, later 9-track tape. Modern magnetic tape is most commonly packaged in cartridges and cassettes, such as the widely supported Linear Tape-Open (LTO) and IBM 3592 series. The device that performs the writing or reading of data is called a tape drive. Autoloaders and tape libraries are often used to automate cartridge handling and exchange. Compatibility was important to enable transferring data. Tape data storage is now used more for system backup, data archive and data exchange. The low cost of tape has kept it viable for long-term storage and archive. Open reels Initially, magnetic tape for data storage was wound on reels. This standard for large computer systems persisted through the late 1980s, with steadily increasing capacity due to thinner substrates and changes in encoding. Tape cartridges and cassettes were available starting in the mid-1970s and were frequently used with small computer systems. With the introduction of the IBM 3480 cartridge in 1984, described as "about one-fourth the size ... yet it stored up to 20 percent more data", large computer systems started to move away from open-reel tapes and towards cartridges. UNIVAC Magnetic tape was first used to record computer data in 1951 on the UNIVAC I. The UNISERVO drive recording medium was a thin metal strip of wide nickel-plated phosphor bronze. Recording density was 128 characters per inch (198 micrometres per character) on eight tracks at a linear speed of , yielding a data rate of 12,800 characters per second. Of the eight tracks, six were data, one was for parity, and one was a clock, or timing track. Making allowances for the empty space between tape blocks, the actual transfer rate was around 7,200 characters per second. A small reel of mylar tape provided separation between the metal tape and the read/write head. IBM formats IBM computers from the 1950s used ferric-oxide-coated tape similar to that used in audio recording. IBM's technology soon became the de facto industry standard. Magnetic tape dimensions were wide and wound on removable reels. Different tape lengths were available with and on mil and one half thickness being somewhat standard. During the 1980s, longer tape lengths such as became available using a much thinner PET film. Most tape drives could support a maximum reel size of . A so-called mini-reel was common for smaller data sets, such as for software distribution. These were reels, often with no fixed length—the tape was sized to fit the amount of data recorded on it as a cost-saving measure. CDC used IBM-compatible magnetic tapes, but also offered a variant, with 14 tracks (12 data tracks corresponding to the 12-bit word of CDC 6000 series peripheral processors, plus 2 parity bits) in the CDC 626 drive. Early IBM tape drives, such as the IBM 727 and IBM 729, were mechanically sophisticated floor-standing drives that used vacuum columns to buffer long u-shaped loops of tape. Between servo control of powerful reel motors, a low-mass capstan drive, and the low-friction and controlled tension of the vacuum columns, fast start and stop of the tape at the tape-to-head interface could be achieved. The fast acceleration is possible because the tape mass in the vacuum columns is small; the length of tape buffered in the columns provides time to accelerate the high-inertia reels. 
When active, the two tape reels thus fed tape into or pulled tape out of the vacuum columns, intermittently spinning in rapid, unsynchronized bursts, resulting in visually striking action. Stock shots of such vacuum-column tape drives in motion were emblematically representative of computers in movies and television. Early half-inch tape had seven parallel tracks of data along the length of the tape, allowing 6-bit characters plus 1 bit of parity written across the tape. This was known as 7-track tape. With the introduction of the IBM System/360 mainframe, 9-track tapes were introduced to support the new 8-bit characters that it used. The end of a file was designated by a special recorded pattern called a tape mark, and end of the recorded data on a tape by two successive tape marks. The physical beginning and end of usable tape was indicated by reflective adhesive strips of aluminum foil placed on the backside. Recording density increased over time. Common 7-track densities started at 200 characters per inch (CPI), then 556, and finally 800; 9-track tapes had densities of 800 (using NRZI), then 1600 (using PE), and finally 6250 (using GCR). This translates into about 5 megabytes to 140 megabytes per standard length () reel of tape. Effective density also increased as the interblock gap (inter-record gap) decreased from a nominal on 7-track tape reel to a nominal on a 6250 bpi 9-track tape reel. At least partly due to the success of the System/360, and the resultant standardization on 8-bit character codes and byte addressing, 9-track tapes were very widely used throughout the computer industry during the 1970s and 1980s. IBM discontinued new reel-to-reel products replacing them with cartridge based products beginning with its 1984 introduction of the cartridge-based 3480 family. DEC format LINCtape, and its derivative, DECtape were variations on this "round tape". They were essentially a personal storage medium, used tape that was wide and featured a fixed formatting track which, unlike standard tape, made it feasible to read and rewrite blocks repeatedly in place. LINCtapes and DECtapes had similar capacity and data transfer rate to the diskettes that displaced them, but their access times were on the order of thirty seconds to a minute. Cartridges and cassettes In the context of magnetic tape, the term cassette or cartridge means a length of magnetic tape in a plastic enclosure with one or two reels for controlling the motion of the tape. The type of packaging affects the load and unload times as well as the length of tape that can be held. In a single-reel cartridge, there is a takeup reel in the drive while a dual reel cartridge has both takeup and supply reels in the cartridge. A tape drive uses one or more precisely controlled motors to wind the tape from one reel to the other, passing a read/write head as it does. A different type is the endless tape cartridge, which has a continuous loop of tape wound on a special reel that allows tape to be withdrawn from the center of the reel and then wrapped up around the edge, and therefore does not need to rewind to repeat. This type is similar to a single-reel cartridge in that there is no take-up reel inside the tape drive. The IBM 7340 Hypertape drive, introduced in 1961, used a dual reel cassette with a tape capable of holding 2 million six-bit characters per cassette. 
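A rough back-of-the-envelope check of the reel capacities quoted above: capacity is roughly linear recording density times tape length, less interblock-gap overhead. In the Python sketch below, the 2,400 ft reel length and the gap-overhead fractions are my own illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope reel capacity: linear density x tape length, less gap overhead.
# The 2,400 ft reel and the overhead fractions are illustrative assumptions only.

INCHES_PER_FOOT = 12
REEL_LENGTH_FEET = 2_400                       # assumed standard 10.5-inch reel

def reel_capacity_mb(chars_per_inch: float, gap_overhead: float) -> float:
    """Approximate capacity in millions of characters (roughly MB) per reel."""
    raw = chars_per_inch * REEL_LENGTH_FEET * INCHES_PER_FOOT
    return raw * (1.0 - gap_overhead) / 1e6

for label, cpi, overhead in [
    ("7-track, 200 cpi",   200, 0.10),
    ("9-track, 800 bpi",   800, 0.15),
    ("9-track, 1600 bpi", 1600, 0.20),
    ("9-track, 6250 bpi", 6250, 0.25),         # fixed-length gaps waste relatively more tape at high density
]:
    print(f"{label}: ~{reel_capacity_mb(cpi, overhead):.0f} MB per reel")
```

With these assumptions the estimates run from roughly 5 MB to about 135 MB per reel, the same range quoted in the text above.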
In the 1970s and 1980s, audio Compact Cassettes were frequently used as an inexpensive data storage system for home computers, or in some cases for diagnostics or boot code for larger systems such as the Burroughs B1700. Compact cassettes are logically, as well as physically, sequential; they must be rewound and read from the start to load data. Early cartridges were available before personal computers had affordable disk drives, and could be used as random access devices, automatically winding and positioning the tape, albeit with access times of many seconds. In 1984 IBM introduced the 3480 family of single reel cartridges and tape drives which were then manufactured by a number of vendors through at least 2004. Initially providing 200 megabytes per cartridge, the family's capacity increased over time to 2.4 gigabytes per cartridge. DLT (Digital Linear Tape), also a cartridge-based tape, was available beginning in 1984, but as of 2007 further development was stopped in favor of LTO. In 2003 IBM introduced the 3592 family to supersede the IBM 3590. While the name is similar, there is no compatibility between the 3590 and the 3592. Like the 3590 and 3480 before it, this tape format has tape spooled into a single reel cartridge. Initially introduced to support 300 gigabytes, the sixth generation released in 2018 supports a native capacity of 20 terabytes. The Linear Tape-Open (LTO) single-reel cartridge was announced in 1997 at 100 gigabytes and in its eighth generation supports 12 terabytes in the same-sized cartridge. LTO has completely displaced all other tape technologies in computer applications, with the exception of some IBM 3592 family drives at the high end. Technical details Linear density (BPI) is the metric for the density at which data is stored on magnetic media. The term BPI can refer to bits per inch, but more often refers to bytes per inch: BPI means bytes per inch when the tracks of a particular format are byte-organized, as in nine-track tapes. Tape width The width of the media is the primary classification criterion for tape technologies. Half-inch tape has historically been the most common width of tape for high-capacity data storage. Many other sizes exist and most were developed to either have smaller packaging or higher capacity. Recording method Recording method is also an important way to classify tape technologies, generally falling into two categories: linear and scanning. Linear The linear method arranges data in long parallel tracks that span the length of the tape. Multiple tape heads simultaneously write parallel tape tracks on a single medium. This method was used in early tape drives. It is the simplest recording method, but also has the lowest data density. A variation on linear technology is linear serpentine recording, which uses more tracks than tape heads. Each head still writes one track at a time. After making a pass over the whole length of the tape, all heads shift slightly and make another pass in the reverse direction, writing another set of tracks. This procedure is repeated until all tracks have been read or written. By using the linear serpentine method, the tape medium can have many more tracks than read/write heads. Compared to simple linear recording, using the same tape length and the same number of heads, data storage capacity is substantially higher. Scanning Scanning recording methods write short dense tracks across the width of the tape medium, not along the length.
Tape heads are placed on a drum or disk which rapidly rotates while the relatively slow-moving tape passes it. An early method used to get a higher data rate than the prevailing linear method was transverse scan. In this method, a spinning disk with the tape heads embedded in the outer edge is placed perpendicular to the path of the tape. This method is used in Ampex's DCRsi instrumentation data recorders and the old Ampex quadruplex videotape system. Another early method was arcuate scan. In this method, the heads are on the face of a spinning disk which is laid flat against the tape. The path of the tape heads forms an arc. Helical scan recording writes short dense tracks in a diagonal manner. This method is used by virtually all current videotape systems and several data tape formats. Block layout and speed matching In a typical format, data is written to tape in blocks with inter-block gaps between them, and each block is written in a single operation with the tape running continuously during the write. However, the rate at which the host supplies or demands data rarely matches the rate at which data goes on and off the tape, so a tape drive usually has to cope with a difference between the two. Various methods have been used alone and in combination to cope with this difference. If the host cannot keep up with the tape drive transfer rate, the tape drive can be stopped, backed up, and restarted (known as shoe-shining). A large memory buffer can be used to queue the data. In the past, the host block size affected the data density on tape, but on modern drives, data is typically organized into fixed-size blocks which may or may not be compressed or encrypted, and host block size no longer affects data density on tape. Modern tape drives offer a speed matching feature, where the drive can dynamically decrease the physical tape speed as needed to avoid shoe-shining. In the past, the size of the inter-block gap was constant, while the size of the data block was based on host block size, affecting tape capacity – for example, on count key data storage. On most modern drives, this is no longer the case. Linear Tape-Open type drives use a fixed-size block for tape (a fixed-block architecture), independent of the host block size, and the inter-block gap is variable to assist with speed matching during writes. On drives with compression, the compressibility of the data will affect the capacity. Sequential access to data Tape is characterized by sequential access to data. While tape can provide fast data transfer, it takes tens of seconds to load a cassette and position the tape head to selected data. By contrast, hard disk technology can perform the equivalent action in tens of milliseconds (3 orders of magnitude faster) and can be thought of as offering random access to data. File systems require data and metadata to be stored on the data storage medium. Storing metadata in one place and data in another, as is done with disk-based file systems, requires repositioning activity. As a result, most tape systems use a simplified filesystem in which files are addressed by number, not by filename. Metadata such as file name or modification time is typically not stored at all. Tape labels store such metadata, and they are used for interchanging data between systems. File archiver and backup tools have been created to pack multiple files along with the related metadata into a single tape file.
Serpentine tape drives (e.g., QIC) offer improved access time by switching to the appropriate track; tape partitions are used for directory information. The Linear Tape File System is a method of storing file metadata on a separate part of the tape. This makes it possible to copy and paste files or directories to a tape as if it were a disk, but does not change the fundamental sequential access nature of tape. Access time Tape has a long random access time since the deck must wind an average of one-third the tape length to move from one arbitrary position to another. Tape systems attempt to alleviate the intrinsic long latency, either using indexing, where a separate lookup table (tape directory) is maintained which gives the physical tape location for a given data block number (a must for serpentine drives), or by marking blocks with a tape mark that can be detected while winding the tape at high speed. Data compression Most tape drives now include some kind of lossless data compression. There are several algorithms that provide similar results: LZW (widely supported), IDRC (Exabyte), ALDC (IBM, QIC) and DLZ1 (DLT). Embedded in tape drive hardware, these compress a relatively small buffer of data at a time, so cannot achieve extremely high compression even of highly redundant data. A ratio of 2:1 is typical, with some vendors claiming 2.6:1 or 3:1. The ratio actually obtained depends on the nature of the data so the compression ratio cannot be relied upon when specifying the capacity of equipment, e.g., a drive claiming a compressed capacity of 500 GB may not be adequate to back up 500 GB of real data. Data that is already stored efficiently may not allow any significant compression and a sparse database may offer much larger factors. Software compression can achieve much better results with sparse data, but uses the host computer's processor, and can slow the backup if the host computer is unable to compress as fast as the data is written. The compression algorithms used in low-end products are not optimally effective, and better results may be obtained by turning off hardware compression and using software compression (and encryption if desired) instead. Plain text, raw images, and database files (TXT, ASCII, BMP, DBF, etc.) typically compress much better than other types of data stored on computer systems. By contrast, encrypted data and pre-compressed data (PGP, ZIP, JPEG, MPEG, MP3, etc.) normally increase in size if data compression is applied. In some cases, this data expansion can be as much as 15%. Encryption Standards exist to encrypt tapes. Encryption is used so that even if a tape is stolen, the thieves cannot use the data on the tape. Key management is crucial to maintain security. Compression is more efficient if done before encryption, as encrypted data cannot be compressed effectively due to the entropy it introduces. Some enterprise tape drives include hardware that can quickly encrypt data. Cartridge memory and self-identification Some tape cartridges, notably LTO cartridges, have small associated data storage chips built in to record metadata about the tape, such as the type of encoding, the size of the storage, dates and other information. It is also common for tape cartridges to have bar codes on their labels in order to assist an automated tape library. 
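The behaviour described above, where redundant data compresses well while encrypted or pre-compressed data does not (and may even grow slightly), can be demonstrated with a small Python sketch. It uses the standard-library zlib compressor purely as a software stand-in for drive-level algorithms such as LZW, IDRC, ALDC or DLZ1, so the exact ratios are not representative of any particular drive.

```python
import os
import zlib

def report(label: str, data: bytes) -> None:
    """Print the original and zlib-compressed sizes of a byte string."""
    compressed = zlib.compress(data)
    print(f"{label}: {len(data)} bytes -> {len(compressed)} bytes")

# Highly redundant text compresses very well with a whole-buffer software compressor;
# drive firmware, working on small buffers, typically achieves more modest ratios (~2:1).
text = b"Plain text, raw images, and database files typically compress well. " * 200
report("redundant text", text)

# Random bytes stand in for encrypted or already-compressed data; the compressor
# cannot remove any redundancy, and its framing overhead makes the output slightly larger.
report("random (encrypted-like) data", os.urandom(len(text)))
```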
Viability Tape remains viable in modern data centers because: it is the lowest cost medium for storing large amounts of data; as a removable medium it allows the creation of an air gap that can prevent data from being hacked, encrypted or deleted; its longevity allows for extended data retention which may be required by regulatory agencies. The lowest cost tiers of cloud storage can be supported by tape. High-density magnetic media In 2002, Imation received a US$11.9 million grant from the U.S. National Institute of Standards and Technology for research into increasing the data capacity of magnetic tape. In 2014, Sony and IBM announced that they had recorded 148 Gbit/in² (23 Gbit/cm²), the highest magnetic tape data density reported at the time, using a new vacuum thin-film forming technology able to form extremely fine crystal particles, potentially allowing a native tape capacity of 185 TB. Sony developed the technology further and in 2017 announced a data density of 201 Gbit/in² (31 Gbit/cm²), corresponding to a compressed tape capacity of 330 TB. In May 2014, Fujifilm followed Sony and announced that it would develop a 154 TB tape cartridge in conjunction with IBM, with an areal data storage density of 85.9 Gbit/in² (13.3 billion bits per cm²) on linear magnetic particulate tape. The technology developed by Fujifilm, called NANOCUBIC, reduces the particulate volume of BaFe magnetic tape while simultaneously increasing the smoothness of the tape, improving the signal-to-noise ratio during read and write and enabling high-frequency response. In December 2020, Fujifilm and IBM announced technology that could lead to a tape cassette with a capacity of 580 terabytes, using strontium ferrite as the recording medium. Chronological list of tape formats 1951: UNISERVO 1952: IBM 7-track 1958: TX-2 Tape System 1961: IBM 7340 Hypertape 1962: LINCtape 1963: DECtape 1964: 9-track 1964: Magnetic tape selectric typewriter 1966: 8-track tape 1972: Quarter-inch cartridge (QIC) 1975: KC standard, Compact Cassette 1976: DC100 1977: Tarbell Cassette Interface 1977: Commodore Datasette 1979: DECtape II cartridge 1979: Exatron Stringy Floppy 1981: IBM PC Cassette Interface 1983: Sinclair ZX Microdrive 1984: Sinclair QL Microdrive 1984: Rotronics Wafadrive 1984: IBM 3480 cartridge 1984: Digital Linear Tape (DLT) 1986: SLR 1987: Data8 1989: Digital Data Storage (DDS) on Digital Audio Tape (DAT) 1992: Ampex DST 1994: Mammoth 1995: IBM 3590 1995: StorageTek Redwood SD-3 1995: Travan 1996: AIT 1997: IBM 3570 MP 1998: StorageTek T9840 1999: VXA 2000: StorageTek T9940 2000: LTO-1 2003: SAIT 2003: LTO-2 2003: 3592 2005: LTO-3 2005: TS1120 2006: T10000 2007: LTO-4 2008: TS1130 2008: T10000B 2010: LTO-5 2011: TS1140 2011: T10000C 2012: LTO-6 2013: T10000D 2014: TS1150 2015: LTO-7 2017: TS1155 2017: LTO-8 2018: TS1160 2021: LTO-9 2023: TS1170
Technology
Non-volatile memory
null
7498981
https://en.wikipedia.org/wiki/Cross-flow%20filtration
Cross-flow filtration
In chemical engineering, biochemical engineering and protein purification, cross-flow filtration (also known as tangential flow filtration) is a type of filtration (a particular unit operation). Cross-flow filtration is different from dead-end filtration in which the feed is passed through a membrane or bed, the solids being trapped in the filter and the filtrate being released at the other end. Cross-flow filtration gets its name because the majority of the feed flow travels tangentially across the surface of the filter, rather than into the filter. The principal advantage of this is that the filter cake (which can blind the filter) is substantially washed away during the filtration process, increasing the length of time that a filter unit can be operational. It can be a continuous process, unlike batch-wise dead-end filtration. This type of filtration is typically selected for feeds containing a high proportion of small particle size solids (where the permeate is of most value) because solid material can quickly block (blind) the filter surface with dead-end filtration. Industrial examples of this include the extraction of soluble antibiotics from fermentation liquors. The main driving force of cross-flow filtration process is transmembrane pressure. Transmembrane pressure is a measure of pressure difference between two sides of the membrane. During the process, the transmembrane pressure might decrease due to an increase of permeate viscosity, therefore filtration efficiency decreases and can be time-consuming for large-scale processes. This can be prevented by diluting permeate or increasing flow rate of the system. Operation In cross-flow filtration, the feed is passed across the filter membrane (tangentially) at positive pressure relative to the permeate side. A proportion of the material which is smaller than the membrane pore size passes through the membrane as permeate or filtrate; everything else is retained on the feed side of the membrane as retentate. With cross-flow filtration the tangential motion of the bulk of the fluid across the membrane causes trapped particles on the filter surface to be rubbed off. This means that a cross-flow filter can operate continuously at relatively high solids loads without blinding. Benefits over conventional filtration A higher overall liquid removal rate is achieved by the prevention of filter cake formation Process feed remains in the form of a mobile slurry, suitable for further processing Solids content of the product slurry may be varied over a wide range It is possible to fractionate particles by size Tubular pinch effect Industrial applications Cross-flow membrane filtration technology has been used widely in industry around the globe. Filtration membranes can be polymeric or ceramic, depending upon the application. The principles of cross-flow filtration are used in reverse osmosis, nanofiltration, ultrafiltration and microfiltration. When purifying water, it can be very cost-effective in comparison to the traditional evaporation methods. In protein purification, the term tangential flow filtration (TFF) is used to describe cross-flow filtration with membranes. The process can be used at different stages during purification, depending on the type of membrane selected. 
In the photograph of an industrial filtration unit (right), it is possible to see that the recycle pipework is considerably larger than either the feed pipework (vertical pipe on the right hand side) or the permeate pipework (small manifolds near to the rows of white clamps). These pipe sizes are directly related to the proportion of liquid that flows through the unit. A dedicated pump is used to recycle the feed several times around the unit before the solids-rich retentate is transferred to the next part of the process. Techniques to improve performance Backwashing In backwashing, the transmembrane pressure is periodically inverted by the use of a secondary pump, so that permeate flows back into the feed, lifting the fouling layer from the surface of the membrane. Backwashing is not applicable to spirally wound membranes and is not a general practice in most applications. (See Clean-in-place) Alternating tangential flow (ATF) A diaphragm pump is used to produce an alternating tangential flow, helping to dislodge retained particles and prevent membrane fouling. Repligen is the largest producer of ATF systems. Clean-in-place (CIP) Clean-in-place systems are typically used to remove fouling from membranes after extensive use. The CIP process may use detergents, reactive agents such as sodium hypochlorite and acids and alkalis such as citric acid and sodium hydroxide (NaOH). Sodium hypochlorite (bleach) must be removed from the feed in some membrane plants. Bleach oxidizes thin-film membranes. Oxidation will degrade the membranes to a point where they will no longer perform at rated rejection levels and have to be replaced. Bleach can be added to a sodium hydroxide CIP during an initial system start-up before spirally-wound membranes are loaded into the plant to help disinfect the system. Bleach is also used to CIP perforated stainless steel (Graver) membranes, as their tolerance for sodium hypochlorite is much higher than a spirally-wound membrane. Caustics and acids are most often used as primary CIP chemicals. Caustic removes organic fouling and acid removes minerals. Enzyme solutions are also used in some systems for helping remove organic fouling material from the membrane plant. The pH and temperature are important to a CIP program. If pH and temperature are too high the membrane will degrade and flux performance will suffer. If pH and temperature are too low, the system simply will not be cleaned properly. Every application has different CIP requirements. e.g. a dairy reverse osmosis (RO) plant most likely will require a more rigorous CIP program than a water purification RO plant. Each membrane manufacturer has their own guidelines for CIP procedures for their product. Concentration The volume of the fluid is reduced by allowing permeate flow to occur. Solvent, solutes, and particles smaller than the membrane pore size pass through the membrane, while particles larger than the pore size are retained, and thereby concentrated. In bioprocessing applications, concentration may be followed by diafiltration. Diafiltration In order to effectively remove permeate components from the slurry, fresh solvent may be added to the feed to replace the permeate volume, at the same rate as the permeate flow rate, such that the volume in the system remains constant. This is analogous to the washing of filter cake to remove soluble components. Dilution and re-concentration is sometimes also referred to as "diafiltration". 
Process flow disruption (PFD) A technically simpler approach than backwashing is to set the transmembrane pressure to zero by temporarily closing off the permeate outlet, which increases the attrition of the fouling layer without the need for a second pump. PFD is not as effective as backwashing in removing fouling, but can be advantageous. Flow rate calculation The flux or flow rate in cross-flow filtration systems is given by the equation $J = \dfrac{\Delta P}{\mu \, (R_m + R_c)}$ in which: $J$ is the liquid flux; $\Delta P$ is the transmembrane pressure (should also include effects of osmotic pressure for reverse osmosis membranes); $R_m$ is the resistance of the membrane (related to overall porosity); $R_c$ is the resistance of the cake (variable; related to membrane fouling); $\mu$ is the liquid viscosity. Note: $R_m$ and $R_c$ include the inverse of the membrane surface area in their derivation; thus, flux increases with increasing membrane area.
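A minimal Python sketch of the resistance-in-series flux expression above, combined with a commonly used estimate of transmembrane pressure (average of feed and retentate pressures minus permeate pressure); the TMP formula, units, and numerical values are illustrative assumptions rather than material from this article.

```python
def transmembrane_pressure(p_feed, p_retentate, p_permeate):
    """Common estimate of TMP: mean feed-side pressure minus permeate pressure (Pa)."""
    return (p_feed + p_retentate) / 2.0 - p_permeate

def flux(delta_p, mu, r_membrane, r_cake):
    """Permeate flux J = dP / (mu * (Rm + Rc)), per the equation above.

    delta_p    : transmembrane pressure [Pa]
    mu         : permeate viscosity [Pa.s]
    r_membrane : membrane resistance [1/m]
    r_cake     : cake (fouling) resistance [1/m]
    """
    return delta_p / (mu * (r_membrane + r_cake))

# Illustrative numbers only: water-like viscosity, ~1 bar TMP, clean vs fouled membrane.
tmp = transmembrane_pressure(1.5e5, 1.3e5, 0.4e5)     # 1.0e5 Pa
print(flux(tmp, 1e-3, 1e12, 0.0))      # clean membrane
print(flux(tmp, 1e-3, 1e12, 3e12))     # fouled: the cake resistance cuts the flux ~4x
```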
Physical sciences
Other separations
Chemistry
14840572
https://en.wikipedia.org/wiki/Numerical%20relay
Numerical relay
In utility and industrial electric power transmission and distribution systems, a numerical relay is a computer-based system with software-based protection algorithms for the detection of electrical faults. Such relays are also termed microprocessor-type protective relays. They are functional replacements for electro-mechanical protective relays and may include many protection functions in one unit, as well as providing metering, communication, and self-test functions. Description and definition The digital protective relay is a protective relay that uses a microprocessor to analyze power system voltages, currents or other process quantities for the purpose of detection of faults in an electric power system or industrial process system. A digital protective relay may also be called a "numeric protective relay". Input processing Low voltage and low current signals (i.e., at the secondary of voltage transformers and current transformers) are brought into a low pass filter that removes frequency content above about 1/3 of the sampling frequency (a relay A/D converter needs to sample faster than twice per cycle of the highest frequency that it is to monitor). The AC signal is then sampled by the relay's analog-to-digital converter from 4 to 64 (varies by relay) samples per power system cycle. As a minimum, the magnitude of the incoming quantity, commonly obtained using Fourier transform concepts (RMS and some form of averaging), would be used in a simple relay function. More advanced analysis can be used to determine phase angles, power, reactive power, impedance, waveform distortion, and other complex quantities. Only the fundamental component is needed for most protection algorithms, unless a high speed algorithm is used that uses subcycle data to monitor for fast changing issues. The sampled data is then passed through a low pass filter that numerically removes the frequency content that is above the fundamental frequency of interest (i.e., nominal system frequency), and uses Fourier transform algorithms to extract the fundamental frequency magnitude and angle. Logic processing The relay analyzes the resultant A/D converter outputs to determine if action is required under its protection algorithm(s). Protection algorithms are a set of logic equations in part designed by the protection engineer, and in part designed by the relay manufacturer. The relay is capable of applying advanced logic. It is capable of analyzing whether the relay should trip or restrain from tripping based on parameters set by the user, compared against many functions of its analogue inputs, relay contact inputs, timing and order of event sequences. If a fault condition is detected, output contacts operate to trip the associated circuit breaker(s). Parameter setting The logic is user-configurable and can vary from simply changing front panel switches or moving circuit board jumpers to accessing the relay's internal parameter setting webpage via a communications link on another computer hundreds of kilometers away. The relay may have an extensive collection of settings, beyond what can be entered via front panel knobs and dials, and these settings are transferred to the relay via an interface with a PC (personal computer), and this same PC interface may be used to collect event reports from the relay. Event recording In some relays, a short history of the entire sampled data is kept for oscillographic records.
The event recording would include some means for the user to see the timing of key logic decisions, relay I/O (input/output) changes, and see, in an oscillographic fashion, at least the fundamental component of the incoming analogue parameters. Data display Digital/numerical relays provide a front panel display, or display on a terminal through a communication interface. This is used to display relay settings and real-time current/voltage values, etc. More complex digital relays will have metering and communication protocol ports, allowing the relay to become an element in a SCADA system. Communication ports may include RS-232/RS-485 or Ethernet (copper or fibre-optic). Communication languages may include Modbus, DNP3 or IEC61850 protocols. Comparison with other types By contrast, an electromechanical protective relay converts the voltages and currents to magnetic and electric forces and torques that press against spring tensions in the relay. The tension of the spring and taps on the electromagnetic coils in the relay are the main processes by which a user sets such a relay. In a solid-state relay, the incoming voltage and current wave-forms are monitored by analog circuits, not recorded or digitized. The analog values are compared to settings made by the user via potentiometers in the relay, and in some cases, taps on transformers. In some solid-state relays, a simple microprocessor does some of the relay logic, but the logic is fixed and simple. For instance, in some time overcurrent solid state relays, the incoming AC current is first converted into a small signal AC value, then the AC is fed into a rectifier and filter that converts the AC to a DC value proportionate to the AC waveform. An op-amp and comparator are used to create a DC signal that rises when a trip point is reached. Then a relatively simple microprocessor does a slow speed A/D conversion of the DC signal, integrates the results to create the time-overcurrent curve response, and trips when the integration rises above a set-point. Though this relay has a microprocessor, it lacks the attributes of a digital/numeric relay, and hence the term "microprocessor relay" is not a clear term. History The digital/numeric relay was invented by George Rockefeller. George conceived of it in his Master's Thesis in 1967–68 at Newark College of Engineering. He published his seminal paper Fault Protection with a Digital Computer in 1969. Westinghouse developed the first digital relay, the Prodar 70, between 1969 and 1971. It was commissioned in service on a 230 kV transmission line at PG&E's Tesla substation in February 1971 and was in service for six years. In 2017, George received the IEEE Halperin Electric Transmission and Distribution Award. The award was for "pioneering development and practical demonstration of protective relaying of electric power systems with real-time digital computer techniques." George was chairman of the IEEE Power System Relaying and Control (PSRC) committee (1981-1982) as well as a member of the "Computer Relaying Subcommittee" which was created by the PSRC in 1971 and disbanded in 1978. He wrote the foreword for the PSRC tutorial on Computer Relaying produced in 1979. In 1971 M. Ramamoorty was the first to describe calculation of impedance for distance protection using discrete Fourier analysis. The first practical commercially available microprocessor based digital/numeric relay was made by Edmund O. Schweitzer, III in the early 1980s.
SEL, AREVA, and ABB Group were early forerunners, making some of the first market advances in the arena, but the field has become crowded today with many manufacturers. In transmission line and generator protection, by the mid-1990s the digital relay had nearly replaced the solid state and electro-mechanical relay in new construction. In distribution applications, the replacement by the digital relay proceeded a bit more slowly. While the great majority of feeder relays in new applications today are digital, the solid state relay still sees some use where the simplicity of the application allows for simpler relays, avoiding the complexity of digital relays. Protective element types Protective elements refer to the overall logic surrounding the electrical condition that is being monitored. For instance, a differential element refers to the logic required to monitor two (or more) currents, find their difference, and trip if the difference is beyond certain parameters. The terms element and function are largely interchangeable in many instances. For simplicity on one-line diagrams, the protection function is usually identified by an ANSI device number. In the era of electromechanical and solid state relays, any one relay could implement only one or two protective functions, so a complete protection system may have many relays on its panel. In a digital/numeric relay, many functions are implemented by the microprocessor programming. Any one numeric relay may implement one or all of these functions. A listing of device numbers is found at ANSI Device Numbers. A summary of some common device numbers seen in digital relays is: 11 – Multi-function Device 21 – Distance 24 – Volts/Hz 25 – Synchronizing 27 – Under Voltage 32 – Directional Power Element 46 – Negative Sequence Current 40 – Loss of Excitation 47 – Negative Sequence Voltage 50 – Instantaneous Overcurrent (N for neutral, G for ground current) 51 – Inverse Time Overcurrent (N for neutral, G for ground current) 59 – Over Voltage 62 – Timer 64 – Ground Fault (64F = Field Ground, 64G = Generator Ground) 67 – Directional Over Current (typically controls a 50/51 element) 79 – Reclosing Relay 81 – Under/Over Frequency 86 – Lockout Relay / Trip Circuit Supervision 87 – Current Differential (87L=transmission line diff; 87T=transformer diff; 87G=generator diff)
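As a rough sketch of the input and logic processing described above, the following Python fragment extracts the fundamental phasor (magnitude and angle) from one cycle of sampled current with a full-cycle DFT and applies a simple instantaneous-overcurrent (ANSI device 50) check; the sample rate, pickup setting, and waveform are illustrative assumptions, not values from any particular relay.

```python
import cmath
import math

SAMPLES_PER_CYCLE = 16          # assumed A/D rate: 16 samples per power-system cycle

def fundamental_phasor(samples):
    """Full-cycle DFT estimate of the fundamental component of one cycle of samples.

    Returns (rms_magnitude, angle_radians); harmonics cancel over a full cycle.
    """
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k / n) for k, x in enumerate(samples))
    peak = 2.0 * abs(acc) / n            # peak value of the fundamental
    return peak / math.sqrt(2.0), cmath.phase(acc)

def overcurrent_50(rms_amps, pickup_amps=400.0):
    """Instantaneous overcurrent element: trip when RMS current exceeds the pickup."""
    return rms_amps > pickup_amps

# One cycle of a 600 A RMS fault current with a small 5th-harmonic component added.
wave = [600 * math.sqrt(2) * math.sin(2 * math.pi * k / SAMPLES_PER_CYCLE)
        + 50 * math.sin(5 * 2 * math.pi * k / SAMPLES_PER_CYCLE)
        for k in range(SAMPLES_PER_CYCLE)]

rms, angle = fundamental_phasor(wave)
print(f"fundamental: {rms:.1f} A RMS at {math.degrees(angle):.1f} deg")
print("trip" if overcurrent_50(rms) else "restrain")
```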
Technology
Electrical protective devices
null
5744114
https://en.wikipedia.org/wiki/Method%20of%20lines
Method of lines
The method of lines (MOL, NMOL, NUMOL) is a technique for solving partial differential equations (PDEs) in which all but one dimension is discretized. By reducing a PDE to a single continuous dimension, the method of lines allows solutions to be computed via methods and software developed for the numerical integration of ordinary differential equations (ODEs) and differential-algebraic systems of equations (DAEs). Many integration routines have been developed over the years in many different programming languages, and some have been published as open source resources. The method of lines most often refers to the construction or analysis of numerical methods for partial differential equations that proceeds by first discretizing the spatial derivatives only and leaving the time variable continuous. This leads to a system of ordinary differential equations to which a numerical method for initial-value ordinary differential equations can be applied. The method of lines in this context dates back to at least the early 1960s. Many papers discussing the accuracy and stability of the method of lines for various types of partial differential equations have appeared since. Application to elliptical equations MOL requires that the PDE problem is well-posed as an initial value (Cauchy) problem in at least one dimension, because ODE and DAE integrators are initial value problem (IVP) solvers. Thus it cannot be used directly on purely elliptic partial differential equations, such as Laplace's equation. However, MOL has been used to solve Laplace's equation by using the method of false transients. In this method, a time derivative of the dependent variable is added to Laplace's equation. Finite differences are then used to approximate the spatial derivatives, and the resulting system of equations is solved by MOL. It is also possible to solve elliptic problems by a semi-analytical method of lines. In this method, the discretization process results in a set of ODEs that are solved by exploiting properties of the associated exponential matrix. Recently, to overcome the stability issues associated with the method of false transients, a perturbation approach was proposed which was found to be more robust than the standard method of false transients for a wide range of elliptic PDEs.
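As a minimal illustration of the semi-discretization described above (assuming NumPy and SciPy are available; the model problem, grid, and boundary conditions are chosen purely for demonstration), the 1-D heat equation u_t = u_xx can be discretized in space with central differences and the resulting ODE system handed to a standard initial-value solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Heat equation u_t = u_xx on 0 < x < 1 with u(0,t) = u(1,t) = 0.
N = 50                                  # number of interior grid points (spatial discretization)
x = np.linspace(0.0, 1.0, N + 2)        # grid including the two boundary points
dx = x[1] - x[0]

def rhs(t, u):
    """Semi-discrete right-hand side: second-order central difference approximation of u_xx."""
    full = np.concatenate(([0.0], u, [0.0]))          # apply the Dirichlet boundary values
    return (full[2:] - 2.0 * full[1:-1] + full[:-2]) / dx**2

u0 = np.sin(np.pi * x[1:-1])            # initial condition on the interior points
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF")    # stiff ODE system -> implicit solver

# The exact solution of this model problem decays as exp(-pi^2 t); compare at t = 0.1.
print(sol.y[:, -1].max(), np.exp(-np.pi**2 * 0.1))
```

The spatial grid and difference scheme can be refined independently of the time integrator, which is the practical appeal of the method: all of the time-stepping, error control, and stiffness handling is delegated to an existing ODE/DAE code.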
Mathematics
Differential equations
null
5747651
https://en.wikipedia.org/wiki/Powder%20keg
Powder keg
A powder keg is a barrel of gunpowder. The powder keg was the primary method for storing and transporting large quantities of black powder until the 1870s and the adoption of the modern cased cartridge. The barrels had to be handled with care, since a spark or other source of heat could cause the contents to deflagrate. In practical use, powder kegs were small casks to limit damage from accidental explosions. Today they are valued as collectibles. Specimens of early American kegs for gunpowder are found in sizes like tall by diameter and tall by diameter, often with strappings of reed or sapling wood rather than metal bands to avoid sparks. Kegs for blasting powder used for mining or quarrying were often larger than kegs for shipping and storing powder for firearms. As metaphor A powder keg is also a metaphor for a region that political, socioeconomic, historical or other circumstances have made prone to outbursts. The analogy is drawn from a perception that certain territories may seem peaceful and dormant until another event triggers a large outburst of violence. The term is most often used to simplify and help the understanding of what is often a complex set of circumstances that lead to conflicts, such as the powder keg of Europe. While the term can be used to designate the entire region of Europe, it is often used specifically to refer to the Balkans, due to its role in the Balkan Wars and World War I. The most cited event attributed to the use of the term was the assassination of Archduke Franz Ferdinand in Sarajevo, Condominium of Bosnia and Herzegovina in 1914, the immediate trigger of World War I.
Technology
Ammunition
null
638291
https://en.wikipedia.org/wiki/Red%20squirrel
Red squirrel
The red squirrel (Sciurus vulgaris), also called Eurasian red squirrel, is a species of tree squirrel in the genus Sciurus. It is an arboreal and primarily herbivorous rodent and common throughout Eurasia. Taxonomy There have been over 40 described subspecies of the red squirrel, but the taxonomic status of some of these is uncertain. A study published in 1971 recognises 16 subspecies and has served as a basis for subsequent taxonomic work. Although the validity of some subspecies is labelled with uncertainty because of the large variation in red squirrels even within a single region, others are relatively distinctive and one of these, S. v. meridionalis of South Italy, was elevated to species status as the Calabrian black squirrel in 2017. At present, there are 23 recognized subspecies of the red squirrel. Genetic studies indicate that another, S. v. hoffmanni of Sierra Espuña in southeast Spain (below included in S. v. alpinus), deserves recognition as distinct. S. v. alpinus. Desmarest, 1822. (synonyms: S. v. baeticus, hoffmanni, infuscatus, italicus, numantius and segurae) S. v. altaicus. Serebrennikov, 1928. S. v. anadyrensis. Ognev, 1929. S. v. arcticus. Trouessart, 1906. (synonym: S. v. jacutensis) S. v. balcanicus. Heinrich, 1936. (synonyms: S. v. istrandjae and rhodopensis) S. v. chiliensis. Sowerby, 1921. S. v. cinerea. Hermann, 1804. S. v. dulkeiti. Ognev, 1929. S. v. exalbidus. Pallas, 1778. (synonyms: S. v. argenteus and kalbinensis) S. v. fedjushini. Ognev, 1935. S. v. formosovi. Ognev, 1935. S. v. fuscoater. Altum, 1876. (synonyms: S. v. brunnea, gotthardi, graeca, nigrescens, russus and rutilans) S. v. fusconigricans. Dvigubsky, 1804 S. v. leucourus. Kerr, 1792. S. v. lilaeus. Miller, 1907. (synonyms: S. v. ameliae and croaticus) S. v. mantchuricus. Thomas, 1909. (synonyms: S. v. coreae and coreanus) S. v. martensi. Matschie, 1901. (synonym: S. v. jenissejensis) S. v. ognevi. Migulin, 1928. (synonyms: S. v. bashkiricus, golzmajeri and uralensis) S. v. orientis. Thomas, 1906. Ezo Red Squirrel (Hokkaidō). S. v. rupestris. Thomas, 1907 S. v. ukrainicus. Migulin, 1928. (synonym: S. v. kessleri) S. v. varius. Gmelin, 1789. S. v. vulgaris. Linnaeus, 1758. (synonyms: S. v. albonotatus, albus, carpathicus, europaeus, niger, rufus and typicus) Description The red squirrel has a typical head-and-body length of , a tail length of , and a mass of . Males and females are the same size. The long tail helps the squirrel to balance and steer when jumping from tree to tree and running along branches and may keep the animal warm during sleep. The coat of the red squirrel varies in colour with time of year and location. There are several coat colour morphs ranging from black to red. Red coats are most common in Great Britain; in other parts of Europe and Asia different coat colours coexist within populations, much like hair colour in some human populations. The underside of the squirrel is always white-cream in colour. The red squirrel sheds its coat twice a year, switching from a thinner summer coat to a thicker, darker winter coat with noticeably larger ear-tufts (a prominent distinguishing feature of this species) between August and November. A lighter, redder overall coat colour, along with the ear-tufts in adults and smaller size, distinguish the red squirrel from the eastern grey squirrel. Distribution and habitat Red squirrels occupy boreal, coniferous woods in northern Europe and Siberia, preferring Scots pine, Norway spruce and Siberian pine. 
In western and southern Europe they are found in broad-leaved woods where the mixture of tree and shrub species provides a better year-round source of food. In most of the British Isles and in Italy, broad-leaved woodlands are now less suitable due to the better competitive feeding strategy of introduced grey squirrels. In Great Britain, Ireland and in Italy, red squirrel populations have decreased in recent years. This decline is associated with the introduction by humans of the eastern grey squirrel (Sciurus carolinensis) from North America. However, the population in Scotland is stabilising due to conservation efforts. Ecology and behaviour The red squirrel is found in both coniferous forest and temperate broadleaf woodlands. The squirrel makes a drey (nest) out of twigs in a branch-fork, forming a domed structure about in diameter. This is lined with moss, leaves, grass and bark. Tree hollows and woodpecker holes are also used. The red squirrel is a solitary animal and is shy and reluctant to share food with others. However, outside the breeding season and particularly in winter, several red squirrels may share a drey to keep warm. Social organization is based on dominance hierarchies within and between sexes; although males are not necessarily dominant to females, the dominant animals tend to be larger and older than subordinate animals, and dominant males tend to have larger home ranges than subordinate males or females. The red squirrel eats mostly the seeds of trees, neatly stripping conifer cones to get at the seeds within, fungi, nuts (especially hazelnuts but also beech, chestnuts and acorns), berries, vegetables, garden flowers, tree sap and young shoots. More rarely, red squirrels may also eat bird eggs or nestlings. A Swedish study shows that out of 600 stomach contents of red squirrels examined, only 4 contained remnants of birds or eggs. Excess food is put into caches called "middens", either buried or in nooks or holes in trees, and eaten when food is scarce. Although the red squirrel remembers where it created caches at a better-than-chance level, its spatial memory is substantially less accurate and durable than that of grey squirrels. Between 60% and 80% of its active period may be spent foraging and feeding. The red squirrel exhibits a crepuscular activity pattern. It often rests in its nest in the middle of the day, avoiding the heat and the high visibility to birds of prey that are dangers during these hours. During the winter, this mid-day rest is often much briefer, or absent entirely, although harsh weather may cause the animal to stay in its nest for days at a time. No territories are claimed between the red squirrels and the feeding areas of individuals overlap considerably. Reproduction Mating occurs in late winter during February and March and in summer between June and July. Up to two litters a year per female are possible. Each litter averages three young, called kits. Gestation is about 38 to 39 days. The young are looked after by the mother alone and are born helpless, blind, and deaf. They weigh between 10 and 15g. Their body is covered by hair at 21 days, their eyes and ears open after three to four weeks, and they develop all their teeth by 42 days. Juvenile red squirrels can eat solids around 40 days following birth and from that point can leave the nest on their own to find food; however, they still suckle from their mother until weaning occurs at 8 to 10 weeks. 
During mating, males detect females that are in oestrus by an odour that they produce, and although there is no courtship, the male will chase the female for up to an hour prior to mating. Usually, several males will chase a single female until the dominant male, usually the largest in the group, mates with the female. Males and females will mate several times with many partners. Females must reach a minimum body mass before they enter oestrus, and heavy females on average produce more young. If food is scarce breeding may be delayed. Typically a female will produce her first litter in her second year. Life expectancy Red squirrels that survive their first winter have a life expectancy of 3 years. Individuals may reach 7 years of age, and 10 in captivity. Survival is positively related to the availability of autumn-winter tree seeds; on average, 75–85% of juveniles die during their first winter, and mortality is approximately 50% for winters following the first. Competitors Arboreal predators include small mammals such as the pine marten, wildcats and the stoat, which preys on nestlings; birds, including owls and raptors such as the goshawk and buzzards, may also take the red squirrel. The red fox, cats and dogs can prey upon the red squirrel when it is on the ground. Humans influence the population size and mortality of the red squirrel by destroying or altering habitats, by causing road casualties, and by introducing non-native populations of the North American eastern grey squirrels. The eastern grey squirrel and the red squirrel are not directly antagonistic, and violent conflict between these species is not a factor in the decline in red squirrel populations. However, the eastern grey squirrel appears to be able to decrease the red squirrel population due to several reasons: The eastern grey squirrel carries a disease, the squirrel parapoxvirus, that does not appear to affect their own health but will often kill the red squirrel. It was revealed in 2008 that the numbers of red squirrels at Formby (England) had declined by 80% as a result of this disease, though the population is now recovering. The eastern grey squirrel can better digest acorns, while the red squirrel cannot access the proteins and fats in acorns as easily. When the red squirrel is put under pressure, it will not breed as often. In the UK, due to the above circumstances, the population has today fallen to 160,000 red squirrels or fewer; 120,000 of these are in Scotland. Outside the UK and Ireland, the impact of competition from the eastern grey squirrel has been observed in Piedmont, Italy, where two pairs escaped from captivity in 1948. A significant drop in red squirrel populations in the area has been observed since 1970, and it is feared that the eastern grey squirrel may expand into the rest of Europe. Conservation The red squirrel is protected in most of Europe, as it is listed in Appendix III of the Bern Convention; it is listed as being of least concern on the IUCN Red List. However, in some areas it is abundant and is hunted for its fur. Although not thought to be under any threat worldwide, the red squirrel has nevertheless drastically reduced in number in the United Kingdom; especially after the eastern grey squirrel was introduced from North America in the 1870s. Fewer than 140,000 individuals are thought to be left in 2013; approximately 85% of which are in Scotland, with the Isle of Wight being the largest haven in England. 
A local charity, the Wight Squirrel Project, supports red squirrel conservation on the island, and islanders are actively recommended to report any invasive greys. The population decrease in Britain is often ascribed to the introduction of the eastern grey squirrel from North America, but the loss and fragmentation of its native woodland habitat has also played a significant role. In contrast, the red squirrel may present a threat if introduced to regions outside its native range. It is classed as a "prohibited new organism" under New Zealand's Hazardous Substances and New Organisms Act 1996 preventing it from being imported into the country. Projects In January 1998, eradication of the non-native North American grey squirrel began on the North Wales island of Anglesey. This facilitated the natural recovery of the small remnant red squirrel population. It was followed by the successful reintroduction of the red squirrel into the pine stands of Newborough Forest. Subsequent reintroductions into broadleaved woodland followed and today the island has the single largest red squirrel population in Wales. Brownsea Island in Poole Harbour is also populated exclusively by red rather than grey squirrels (approximately 200 individuals). Mainland initiatives in southern Scotland and the north of England also rely upon grey squirrel control as the cornerstone of red squirrel conservation strategy. A local programme known as the "North East Scotland Biodiversity Partnership", an element of the national Biodiversity Action Plan was established in 1996. This programme is administered by the Grampian Squirrel Society, with an aim of protecting the red squirrel; the programme centres on the Banchory and Cults areas. In 2008, the Scottish Wildlife Trust announced a four-year project which commenced in the spring of 2009 called "Saving Scotland's Red Squirrels". Other notable projects include red squirrel projects in the Greenfield Forest, including the buffer zones of Mallerstang, Garsdale and Widdale; the Northumberland Kielder Forest Project; and within the National Trust reserve in Formby. These projects were originally part of the Save Our Squirrels campaign that aimed to protect red squirrels in the north of England, but now form part of a five-year Government-led partnership conservation project called "Red Squirrels Northern England" to undertake grey squirrel control in areas important for red squirrels. However, grey squirrels were found to outnumber red squirrels in both Cumbria and Northumberland for the first time. In Northumberland grey sightings were 25% higher than reds, and in Cumbria they were 17.3% higher. On the Isle of Wight, local volunteers are encouraged to record data on the existing red squirrel population, and to monitor it for the presence of invasive greys; as the red squirrel is still dominant on the island, these volunteers are also requested to cull any greys they find. In order to protect existing populations, increasing amounts of legislation have been issued to prevent the further release and expansion of grey squirrel populations. Under the Wildlife and Countryside Act 1981, it is an offense to release captured grey squirrels, indicating that any captured individuals must be culled. Additional rules covered under the WCA's Schedules 5 and 6 include limitations on the keeping of red squirrels in captivity, and also prohibits the culling of red squirrels. 
Research undertaken in 2007 in the UK credits the pine marten with reducing the population of the invasive eastern grey squirrel. Where the range of the expanding pine marten population meets that of the eastern grey squirrel, the population of these squirrels retreats. It is theorised that, because the grey squirrel spends more time on the ground than the red, they are far more likely to come in contact with this predator. During October 2012, four male and one female red squirrel, on permanent loan from the British Wildlife Centre, were transported to Tresco in the Isles of Scilly by helicopter, and released into Abbey Wood, near the Tresco Abbey Gardens. Only two survived and a further 20 were transported and released in October 2013. Although the red squirrel is not indigenous to the Isles of Scilly, those who supported this work intend to use Tresco as a "safe haven" for the endangered mammal, as the islands are free of predators such as red foxes, and of the Squirrel parapoxvirus-carrying grey squirrel. The UK Animal and Plant Health Agency (APHA) has proposed a method of non-lethal control of grey squirrels as part of a 5-year Red Squirrel Recovery Network (RSRN) project. The planned method for control would be by administering oral contraceptives via a grey squirrel-specific feeder, which would selectively allow feeding based on body weight in order to avoid inadvertently distributing the contraceptive to red squirrels as well. This project has received National Lottery Heritage funding. Historical, cultural and financial significance Squirrel Nutkin is a character, always illustrated as a red squirrel, in English author Beatrix Potter's books for children. "Ekorr'n satt i granen" (The Squirrel sat in the fir tree) is a well-known and appreciated children's song in Sweden. Text and lyrics by Alice Tegnér in 1892. Charles Dennim, protagonist of Geoffrey Household's novel Watcher in the Shadows, is a zoologist who studies and writes about red squirrels. In Norse mythology, Ratatoskr is a red squirrel who runs up and down with messages in the world tree, Yggdrasil, and spreads gossip. In particular, he carried messages between the unnamed eagle at the top of Yggdrasill and the wyrm Níðhöggr beneath its roots. The red squirrel used to be widely hunted for its pelt. In Finland, squirrel pelts were used as currency in ancient times, before the introduction of coinage. The expression "squirrel pelt" is still widely understood there to be a reference to money. It has been suggested that the trade in red squirrel fur, highly prized in the medieval period and intensively traded, may have been responsible for the leprosy epidemic in medieval Europe. Within Great Britain, widespread leprosy is found early in East Anglia, to which many of the squirrel furs were traded, and the strain is the same as that found in modern red squirrels on Brownsea Island. The red squirrel is the national mammal of Denmark. Red squirrels are a common feature in English heraldry, where they are always depicted sitting up and often in the act of cracking a nut.
Biology and health sciences
Rodents
Animals
638364
https://en.wikipedia.org/wiki/Salix%20alba
Salix alba
Salix alba, the white willow, is a species of willow native to Europe and western and central Asia. The name derives from the white tone to the undersides of the leaves. It is a medium to large deciduous tree growing up to 10–30 m tall, with a trunk up to 1 m diameter and an irregular, often-leaning crown. The bark is grey-brown and is deeply fissured in older trees. The shoots in the typical species are grey-brown to green-brown. The leaves are paler than most other willows because they are covered with very fine, silky white hairs, in particular on the underside; they are 5–10 cm long and 0.5–1.5 cm wide. The flowers are produced in catkins in early spring and are pollinated by insects. It is dioecious, with male and female catkins on separate trees; the male catkins are 4–5 cm long, the female catkins 3–4 cm long at pollination, lengthening as the fruit matures. When mature in midsummer, the female catkins comprise numerous small (4 mm) capsules, each containing numerous minute seeds embedded in silky white hairs, which aids wind dispersal. Ecology Like all willows, Salix alba is usually to be found in wet or poorly-drained soil at the edge of pools, lakes or rivers. Its wide-spreading roots take up moisture from a large surrounding area. White willows are fast-growing but relatively short-lived, being susceptible to several diseases, including watermark disease caused by the bacterium Brenneria salicis (named because of the characteristic 'watermark' staining in the wood; syn. Erwinia salicis) and willow anthracnose, caused by the fungus Marssonina salicicola. These diseases can be a serious problem on trees grown for timber or ornament. It readily forms natural hybrids with crack willow Salix fragilis, the hybrid being named Salix × rubens Schrank. Varieties, cultivars and hybrids Several cultivars and hybrids have been selected for forestry and horticultural use: Salix alba 'Caerulea' (cricket-bat willow; syn. Salix alba var. caerulea (Sm.) Sm.; Salix caerulea Sm.) is grown as a specialist timber crop in Britain, mainly for the production of cricket bats, and for other uses where a tough, lightweight wood that does not splinter easily is required. It is distinguished mainly by its growth form, very fast-growing with a single straight stem, and also by its slightly larger leaves (10–11 cm long, 1.5–2 cm wide) with a more blue-green colour. Its origin is unknown; it may be a hybrid between white willow and crack willow, but this is not confirmed. Salix alba 'Vitellina' (golden willow; syn. Salix alba var. vitellina (L.) Stokes) is a cultivar grown in gardens for its shoots, which are golden-yellow for one to two years before turning brown. It is particularly decorative in winter; the best effect is achieved by coppicing it every two to three years to stimulate the production of longer young shoots with better colour. Other similar cultivars include 'Britzensis', 'Cardinal', and 'Chermesina', selected for even brighter orange-red shoots. Salix alba 'Vitellina-Tristis' (golden weeping willow, synonym 'Tristis') is a weeping cultivar with yellow branches that become reddish-orange in winter. It is now rare in cultivation and has been largely replaced by Salix x sepulcralis 'Chrysocoma'. It is, however, still the best choice in very cold parts of the world, such as Canada, the northern US, and Russia. The golden hybrid weeping willow (Salix × sepulcralis 'Chrysocoma') is a hybrid between white willow and Peking willow Salix babylonica. 
Award of Garden Merit The following have received the Royal Horticultural Society's Award of Garden Merit Salix alba 'Golden Ness' Salix alba var. serica (silver willow) Salix alba var. vitellina 'Yelverton' Salix × sepulcralis 'Erythroflexuosa' Salix × sepulcralis var. chrysocoma Uses The wood is tough, strong, and light in weight, but has minimal resistance to decay. The stems (withies) from coppiced and pollarded plants are used for basket-making. Charcoal made from the wood was important for gunpowder manufacture. The bark tannin was used in the past for tanning leather. The wood is used to make cricket bats. S. alba wood has a low density and a lower transverse compressive strength. This allows the wood to bend, which is why it can be used to make baskets. Willow bark contains indole-3-butyric acid, which is a plant hormone stimulating root growth; willow trimmings are sometimes used to clone rootstock in place of commercially synthesized root stimulator. It is also used for ritual purposes by Jews on the holiday of Sukkot. Medicinal uses Willow (of unspecified species) has long been used by herbalists for various ailments, although it is a myth that they attribute to it any analgesic effect. One of the first references to White Willow specifically was by Edward Stone, of Chipping Norton, Oxfordshire, England, in 1763. He 'accidentally' tasted the bark and found it had a bitter taste, which reminded him of Peruvian Bark (Cinchona), which was used to treat malaria. After researching all the 'dispensaries and books on botany,' he found no suggestion of willow ever being used to treat fevers and decided to experiment with it himself. Over the next seven years he successfully used the dried powder of willow bark to cure 'agues and intermittent fevers' of around fifty people, although it worked better when combined with quinine. Stone appears to have been largely ignored by the medical profession and herbalists alike. There are reports of two pharmacists using the remedy in trials, but there is no evidence that it worked. By the early 20th century, Maud Grieve, an herbalist, did not consider White Willow to be a febrifuge. Instead, she describes using the bark and the powdered root for its tonic, antiperiodic and astringent qualities and recommended its use in treating dyspepsia, worms, chronic diarrhoea and dysentery. She considered tannin to be the active constituent. An active extract of the bark, called salicin, after the Latin name Salix, was isolated to its crystalline form in 1828 by Henri Leroux, a French pharmacist, and Raffaele Piria, an Italian chemist, who then succeeded in separating out the acid in its pure state. Salicylic acid is a chemical derivative of salicin and is widely used in medicine. Acetylsalicylic acid (aspirin) is, however, a chemical that does not occur in nature and was originally synthesised from salicylic acid extracted from Meadowsweet, and is not connected to willow.
Biology and health sciences
Malpighiales
Plants
638469
https://en.wikipedia.org/wiki/American%20Shorthair
American Shorthair
The American Shorthair (ASH) is a breed of domestic cat believed to be descended from European cats brought to North America by early settlers to protect valuable cargo from mice and rats. According to the Cat Fanciers' Association, it was the eighth most popular pedigreed cat in the world for 2020. History When settlers sailed from Europe to North America, they carried cats on board (ships' cats) to protect the stores from mice—for instance, the cats that came over on the Mayflower with the Pilgrims to hunt rats on the ship and in the colony. Many of these cats landed in the New World, interbred, and developed special characteristics to help them cope with their new life and climate. Early in the 20th century, a selective breeding program was established to develop the best qualities of these cats. The American Shorthair is a pedigree cat breed, with a strict conformation standard, as set by cat fanciers of the breed and the North American cat fancier associations such as The International Cat Association (TICA) and the Cat Fanciers' Association (CFA). The breed is accepted by all North American cat registries. Originally known as the Domestic Shorthair, the breed was renamed in 1966 to the "American Shorthair" to better represent its "all-American" origins and to differentiate it from other shorthaired breeds. The name "American Shorthair" also reinforces the fact that the breed is a pedigreed breed distinct from the random-bred non-pedigreed domestic short-haired cats in North America, which may nevertheless resemble the ASH. Both the American Shorthair breed and the random-bred cats from which the breed is derived are sometimes called working cats because they were used for controlling rodent populations, on ships and farms. The American shorthair (then referred as Domestic shorthair) was among the first five breeds that were considered as registered cat breeds by CFA during 1906. Description Appearance The American Shorthair is a medium to large sized cat breed with males weighing between 11-15 lbs (5–7 kg) and females weighing between 6-12 lbs (2.75-5.5 kg). The head is large, resembling an oblong with more length than width. The ears are medium sized and slightly rounded at the tips. The eyes are large and wide. The neck is medium in length and well-muscled. The legs are medium in length and muscular. Tail is of medium length. Coat color The American Shorthair is recognized in more than eighty different colors and patterns ranging from the brown-patched tabby to the blue-eyed white, the silvers (tabbies, shaded, smokes and cameos) to the calico van, and many colors in between. Some even come in deep tones of black, brown, or other blends and combinations. Generally, only cats showing evidence of crossbreeding resulting in the colors chocolate, sable, lilac (lavender), or the point-restricted pattern of the Siamese family are disqualified from being shown. Health A study conducted in Japan of cats suspected to have kidney problems found that 47% of tested American Shorthair cats had the PKD1 mutation, which is responsible for feline polycystic kidney disease (PKD). A review of over 5,000 cases of urate urolithiasis in the United States found that the American Shorthair had significantly lower odds of developing urate uroliths than mixed-breed cats. Gallery
Biology and health sciences
Cats
Animals
638719
https://en.wikipedia.org/wiki/Tortricidae
Tortricidae
The Tortricidae are a family of moths, commonly known as tortrix moths or leafroller moths, in the order Lepidoptera. This large family has over 11,000 species described, and is the sole member of the superfamily Tortricoidea, although the genus Heliocosma is sometimes placed within this superfamily. Many of these are economically important pests. Olethreutidae is a junior synonym. The typical resting posture is with the wings folded back, producing a rather rounded profile. Notable tortricids include the codling moth and the spruce budworm, which are among the most well-studied of all insects because of their economic impact. Description Tortricid moths are generally small, with a wingspan of 3 cm or less. Many species are drab and have mottled and marbled brown colors, but some diurnal species are brightly colored and mimic other moths of the families Geometridae and Pyralidae. Life cycle and behavior Tortricid eggs are often flattened and scale-like. Larvae in the subfamilies Chlidanotinae and Olethreutinae usually feed by boring into stems, roots, buds, or seeds. Larvae in the subfamily Tortricinae, however, feed externally and construct leaf rolls. Larvae in the subfamily Tortricinae tend to be more polyphagous than those in Chlidanotinae and Olethreutinae. Tortricinae also possess an anal fork for flicking excrement away from their shelters. Some common tortricids The tortricids include many economically important pests, including: Summer fruit tortrix moth (Adoxophyes orana) Fruit tree tortrix moth (Archips podana) Rose leaf roller (Archips rosana) Argyrotaenia ljungiana, a pest on vines, maize, and fruit trees Peach moth (Cydia molesta) Codling moth (Cydia pomonella) Plum fruit moth (Cydia funebrana) Pea moth (Cydia nigricana) Chestnut and acorn moth (Cydia splendana) Light brown apple moth (Epiphyas postvittana) Hemp borer (Grapholita delineana) Oriental fruit moth (Grapholita molesta) Cherry fruitworm (Grapholita packardi) European grapevine moth (Lobesia botrana) Barred fruit tree tortrix moth (Pandemis cerasana) Grape berry moth (Paralobesia viteana) Long-palped tortrix or vine leaf roller (Sparganothis pilleriana) Bud moth (Spilonota ocellana) False codling moth (Thaumatotibia (Cryptophlebia) leucotreta) Spruce budworm (Genus Choristoneura)
Biology and health sciences
Lepidoptera
Animals
638889
https://en.wikipedia.org/wiki/Path%20%28graph%20theory%29
Path (graph theory)
In graph theory, a path in a graph is a finite or infinite sequence of edges which joins a sequence of vertices which, by most definitions, are all distinct (and since the vertices are distinct, so are the edges). A directed path (sometimes called dipath) in a directed graph is a finite or infinite sequence of edges which joins a sequence of distinct vertices, but with the added restriction that the edges be all directed in the same direction. Paths are fundamental concepts of graph theory, described in the introductory sections of most graph theory texts; more advanced algorithmic topics concerning paths in graphs are covered in specialized texts. Definitions Walk, trail, and path A walk is a finite or infinite sequence of edges which joins a sequence of vertices. Let G = (V, E, ϕ) be a graph. A finite walk is a sequence of edges (e1, e2, …, en−1) for which there is a sequence of vertices (v1, v2, …, vn) such that ϕ(ei) = {vi, vi+1} for i = 1, 2, …, n−1. The sequence (v1, v2, …, vn) is the vertex sequence of the walk. The walk is closed if v1 = vn, and it is open otherwise. An infinite walk is a sequence of edges of the same type described here, but with no first or last vertex, and a semi-infinite walk (or ray) has a first vertex but no last vertex. A trail is a walk in which all edges are distinct. A path is a trail in which all vertices (and therefore also all edges) are distinct. If w = (e1, e2, …, en−1) is a finite walk with vertex sequence (v1, v2, …, vn), then w is said to be a walk from v1 to vn. Similarly for a trail or a path. If there is a finite walk between two distinct vertices then there is also a finite trail and a finite path between them. Some authors do not require that all vertices of a path be distinct and instead use the term simple path to refer to such a path where all vertices are distinct. A weighted graph associates a value (weight) with every edge in the graph. The weight of a walk (or trail or path) in a weighted graph is the sum of the weights of the traversed edges. Sometimes the words cost or length are used instead of weight. Directed walk, directed trail, and directed path A directed walk is a finite or infinite sequence of edges directed in the same direction which joins a sequence of vertices. Let G = (V, E, ϕ) be a directed graph. A finite directed walk is a sequence of edges (e1, e2, …, en−1) for which there is a sequence of vertices (v1, v2, …, vn) such that ϕ(ei) = (vi, vi+1) for i = 1, 2, …, n−1. The sequence (v1, v2, …, vn) is the vertex sequence of the directed walk. The directed walk is closed if v1 = vn, and it is open otherwise. An infinite directed walk is a sequence of edges of the same type described here, but with no first or last vertex, and a semi-infinite directed walk (or ray) has a first vertex but no last vertex. A directed trail is a directed walk in which all edges are distinct. A directed path is a directed trail in which all vertices are distinct. If w = (e1, e2, …, en−1) is a finite directed walk with vertex sequence (v1, v2, …, vn), then w is said to be a walk from v1 to vn. Similarly for a directed trail or a path. If there is a finite directed walk between two distinct vertices then there is also a finite directed trail and a finite directed path between them. A "simple directed path" is a path where all vertices are distinct. A weighted directed graph associates a value (weight) with every edge in the directed graph. The weight of a directed walk (or trail or path) in a weighted directed graph is the sum of the weights of the traversed edges. Sometimes the words cost or length are used instead of weight. Examples A graph is connected if there are paths containing each pair of vertices. A directed graph is strongly connected if there are oppositely oriented directed paths containing each pair of vertices.
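To make the distinction between walks, trails, and paths concrete, the following minimal sketch (not part of the article) classifies a vertex sequence in a simple undirected graph given as a list of edges; the graph representation and the function name are assumptions chosen for the example.

```python
# Hypothetical helper: classify a vertex sequence in a simple undirected graph
# as a path, a trail, a walk, or none of these, per the definitions above.
def classify(edges, vertex_sequence):
    edge_set = {frozenset(e) for e in edges}
    traversed = [frozenset((u, v)) for u, v in zip(vertex_sequence, vertex_sequence[1:])]

    # A walk requires every consecutive vertex pair to be joined by an edge.
    if not all(step in edge_set for step in traversed):
        return "not a walk"
    # A trail is a walk whose edges are all distinct.
    if len(set(traversed)) != len(traversed):
        return "walk"
    # A path is a trail whose vertices are all distinct.
    if len(set(vertex_sequence)) != len(vertex_sequence):
        return "trail"
    return "path"

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
print(classify(edges, [1, 2, 3, 4]))  # path
print(classify(edges, [1, 2, 4, 1]))  # trail (closed walk with distinct edges, repeated vertex)
print(classify(edges, [1, 2, 3, 2]))  # walk (edge {2, 3} traversed twice)
print(classify(edges, [1, 3]))        # not a walk (no edge {1, 3})
```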
A path such that no graph edges connect two nonconsecutive path vertices is called an induced path. A path that includes every vertex of the graph without repeats is known as a Hamiltonian path. Two paths are vertex-independent (alternatively, internally disjoint or internally vertex-disjoint) if they do not have any internal vertex or edge in common. Similarly, two paths are edge-independent (or edge-disjoint) if they do not have any edge in common. Two internally disjoint paths are edge-disjoint, but the converse is not necessarily true. The distance between two vertices in a graph is the length of a shortest path between them, if one exists, and otherwise the distance is infinity. The diameter of a connected graph is the largest distance (defined above) between pairs of vertices of the graph. Finding paths Several algorithms exist to find shortest and longest paths in graphs, with the important distinction that the former problem is computationally much easier than the latter. Dijkstra's algorithm produces a list of shortest paths from a source vertex to every other vertex in directed and undirected graphs with non-negative edge weights (or no edge weights), whilst the Bellman–Ford algorithm can be applied to directed graphs with negative edge weights. The Floyd–Warshall algorithm can be used to find the shortest paths between all pairs of vertices in weighted directed graphs. The path partition problem The k-path partition problem is the problem of partitioning a given graph into a smallest collection of vertex-disjoint paths of length at most k.
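As a rough illustration of the single-source shortest-path computation mentioned above, here is a minimal Dijkstra sketch for a weighted graph with non-negative edge weights, represented as an adjacency dictionary; the representation and names are assumptions made for this example, not part of the article.

```python
import heapq

# Minimal Dijkstra sketch: shortest-path distances from a source vertex,
# for a graph given as {vertex: [(neighbour, weight), ...]} with non-negative weights.
def dijkstra(graph, source):
    distances = {source: 0}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > distances.get(u, float("inf")):
            continue  # stale queue entry, a shorter route to u was already found
        for v, w in graph.get(u, []):
            candidate = d + w
            if candidate < distances.get(v, float("inf")):
                distances[v] = candidate
                heapq.heappush(queue, (candidate, v))
    return distances  # vertices absent from the result are unreachable (distance infinity)

graph = {
    "a": [("b", 7), ("c", 3)],
    "b": [("d", 2)],
    "c": [("b", 1), ("d", 8)],
    "d": [],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 4, 'c': 3, 'd': 6}
```

Bellman–Ford and Floyd–Warshall rely on the same edge-relaxation idea but organize the relaxations differently, which is what lets them handle negative edge weights as long as there are no negative cycles.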
Mathematics
Graph theory
null
638969
https://en.wikipedia.org/wiki/Forceps
Forceps
Forceps (plural: forceps, or considered a plural noun without a singular, often a pair of forceps; the Latin plural forcipes is no longer recorded in most dictionaries) are a handheld, hinged instrument used for grasping and holding objects. Forceps are used when fingers are too large to grasp small objects or when many objects need to be held at one time while the hands are used to perform a task. The term "forceps" is used almost exclusively in the fields of biology and medicine. Outside biology and medicine, people usually refer to forceps as tweezers, tongs, pliers, clips or clamps. Mechanically, forceps employ the principle of the lever to grasp and apply pressure. Depending on their function, basic surgical forceps can be categorized into the following groups: Non-disposable forceps. They should withstand various kinds of physical and chemical effects of body fluids, secretions, cleaning agents, and sterilization methods. Disposable forceps. They are usually made of lower-quality materials or plastics which are disposed of after use. Surgical forceps are commonly made of high-grade carbon steel, which ensures they can withstand repeated sterilization in high-temperature autoclaves. Some are made of other high-quality stainless steel, chromium and vanadium alloys to ensure durability of edges and freedom from rust. Lower-quality steel is used in forceps made for other uses. Some disposable forceps are made of plastic. The invention of surgical forceps is attributed to Stephen Hales. There are two basic types of forceps: non-locking (often called "thumb forceps" or "pick-ups") and locking, though these two types come in dozens of specialized forms for various uses. Non-locking forceps also come in two basic forms: hinged at one end, away from the grasping end (colloquially such forceps are called tweezers), and hinged in the middle, rather like scissors. Locking forceps are almost always hinged in the middle, though some forms place the hinge very close to the grasping end. Locking forceps use various means to lock the grasping surfaces in a closed position to facilitate manipulation or to independently clamp, grasp or hold an object. Thumb forceps Thumb forceps, known simply as forceps in surgical specialties, are commonly held in a pen grip between the thumb and index finger (sometimes also the middle finger), with the top end resting on the first dorsal interosseous muscle at the webspace between the thumb and index finger. Spring tension at the top end holds the grasping ends apart until pressure is applied. This provides an extended pinch and allows the user to easily grasp, manipulate and quickly release small objects or delicate tissue with readily variable pressure. Thumb forceps are used to hold tissue still when applying sutures, to gently move tissues out of the way during exploratory surgery and to access confined cavities that are hard to reach with hands and fingers. Thumb forceps can have smooth tips, cross-hatched tips or serrated tips (often called "mouse's teeth"). Common arrangements of teeth are 1×2 (two teeth on one side meshing with a single tooth on the other), 7×7 and 9×9. Serrated forceps are used on tissue; counter-intuitively, teeth will damage tissue less than a smooth surface because one can grasp with less overall pressure. Smooth or cross-hatched forceps are used to move dressings, remove sutures and perform similar tasks.
Locking forceps Locking forceps, sometimes called clamps, are used to grasp and firmly hold objects or body tissues, or to apply external compression onto tubular structures such as blood vessels or intestines. When they are specifically used to occlude an artery to forestall bleeding, they are called hemostats. Another form of locking forceps is the needle holder, used to guide a suturing needle through tissue. Many locking forceps use finger rings/loops to facilitate handling (see illustration, below, of Kelly forceps). The finger loops are usually grasped by the thumb and middle or (more often) ring finger, while the index finger is placed on the pivot to help stabilize and guide the instrument. The most common locking mechanism is a handle ratchet, which consists of an asymmetrically serrated short protrusion near the finger loop of one of the handles, and a corresponding hook on the other. As the forceps are closed, the opposing teeth engage and interlock, keeping the handles adducted and the jaw surfaces clamped constantly. To unlock, a simple shearing push by the fingers is all that is needed to disengage the teeth and allow the grasping ends to move apart. Kelly forceps Kelly forceps are a type of hemostat usually made of stainless steel. They resemble a pair of scissors with the blade replaced by a blunted grip. They also feature a locking mechanism to allow them to act as clamps. Kelly forceps may be floor-grade (regular use) and as such not used for surgery. They may also be sterilized and used in operations, in both human and veterinary medicine. They may be either curved or straight. In surgery, they may be used for occluding blood vessels, manipulating tissues, or for assorted other purposes. They are named for Howard Atwood Kelly, M.D., first professor of obstetrics and gynecology at the Johns Hopkins School of Medicine. The "mosquito" variant of the tool is more delicate and has smaller, finer tips. Other varieties with similar, if more specialized, uses are Allis clamps, Babcocks, Kochers, Carmalts, and tonsils; all but the last bear the names of the surgeons who designed them. Other medical forceps Other types of forceps include: Alligator forceps Anesthesia forceps, often with smooth jaw surface for clamping tubes such as a double-lumen tube Artery forceps, also known as a hemostat Atraumatic forceps, Debaker forceps Biopsy forceps Bone-cutting forceps Bone-reduction forceps Bone-holding forceps Bulldog forceps Catheter forceps Cilia forceps Curettes forceps Debaker forceps Dermal forceps & nippers Dressing forceps Ear forceps Eye forceps Foerster clamp Gallbladder forceps Gerald forceps Harvey forceps Hemostatic forceps Hysterectomy forceps Intestinal forceps Magill forceps Microsurgery forceps Nasal forceps Needle holder Obstetrical forceps Postmortem forceps Splinter forceps Sponge forceps Spreading forceps Sterilizer forceps Suture sundries forceps Tenaculum forceps Thoracic forceps Thoracic surgical forceps Thumb forceps Tissue forceps Tongue forceps Tooth extracting forceps Tubing forceps Uterine forceps Vulsellum forceps - used to grasp cervical lips to visualize the cervix.
Technology
Surgical instruments
null
639599
https://en.wikipedia.org/wiki/Kyiv%20Metro
Kyiv Metro
The Kyiv Metro is a rapid transit system in Kyiv owned by the Kyiv City Council and operated by the city-owned company Kyivskyi Metropoliten. It was initially opened on November 6, 1960, as a single line with five stations. It was the first rapid transit system in Ukraine. Today, the system consists of three lines and 52 stations, located throughout Kyiv's ten raions (districts), and operates of routes, with used for revenue service and for non-revenue service. At 105.5 m below ground level, Arsenalna station on the Sviatoshynsko-Brovarska Line is the second deepest metro station in the world after Hongyancun station in Chongqing. In 2016, annual ridership for the metro was 484.56 million passengers, or about 1.32 million passengers daily. The metro accounted for 46.7% of Kyiv's public transport load in 2014. Beginnings (1884-1920) The first idea for an underground railway appeared in 1884. The project, which was submitted for analysis to the city council by the director of the Southwestern railways, Dmytro Andrievskiy, planned to create tunnels from Kyiv-Pasazhyrskyi railway station. The tunnel was expected to start near Poshtova square and finish near Bessarabka. A new railway station was to be built there, while the old railway station was to be converted into a freight railway station. The project was long discussed but eventually turned down by the city council. Kyiv was a pioneering city for rapid transit in the Russian Empire, opening the first tram system in the empire. In September 1916, businessmen of the Russo-American trading corporation attempted to collect funds to sponsor the construction of a metro in Kyiv. The trading corporation set out its reasons for the construction in writing, but despite these arguments, the project was once again not accepted by the city council. After the downfall of the Tsarist government, Hetman Skoropadsky was also interested in the building of a metro system, somewhere near the district of Zvirynets, where the government center was planned to be built. One of the members of his cabinet argued in its favour; however, the project lost its support after the downfall of the Hetmanate in the autumn of 1918 and the change of the Ukrainian government towards the Directorate. Then, in 1919–1920, during the Russian Civil War (in which Ukraine was involved), the project was shelved for good. Following the Bolsheviks' victory in the Russian Civil War, Kyiv became only a provincial city, and no large-scale proposals to improve the city were made. Initial promotion In 1934, the capital of the Ukrainian SSR was moved from Kharkiv to Kyiv. On July 9, 1936, the Presidium of the Kyiv City Council assessed the diploma project by Papazov (Papazian), an Armenian graduate of the Moscow University of Transport Engineering, called "The Project of the Kyiv Metro." The meeting minutes stated that "the author successfully resolved one of the problems of reconstruction of the city of Kyiv and establishment of intra-city transportation and also answered various practical questions about the Metro plan (the routes of the underground, the position of stations)." The engineer Papazov (Papazian) received a bonus of 1,000 Soviet rubles for this project from the City of Kyiv. However, it is unknown if his proposals were taken into account in the plan. A few days before, on July 5, the Kyiv newspaper Bil'shovyk had published an article that featured a project for an underground, prepared by engineers from the Transport Devices Institute of the Academy of Sciences of the Ukrainian SSR.
The project promised to drill three lines of a subway approximately long. Rumors started spreading that the construction of the Metro would begin soon. At first, the city council denied these rumors, amid letters from specialists in the drilling and mining sectors offering their services. But in 1938, officials started preparatory work. However, this stopped abruptly in 1941 with the start of the Great Patriotic War (World War II). By the end of the war, Kyiv had been devastated, and, as it was the third largest city in the USSR, a massive reconstruction process was ordered. This time, the Metro was taken into account. Work resumed in 1944, after Kyiv's liberation. On 5 August 1944, a resolution from the Soviet Union's Government was issued. The resolution planned for underground construction, so the government ordered the appropriate organizations to continue preparatory works, create a technical project, and estimate total costs. To finance this initial work, the USSR's People's Commissariat of Finance allocated 1 million Soviet rubles from the Reserve Fund of the USSR's government. On 22 February 1945, another resolution was issued, which definitively ordered the underground to be constructed. To determine where the underground construction was most suitable, experts from the Kyiv Office of Metrogiprotrans analyzed the flow of passengers in the streets of Kyiv, both in the city center and in the outskirts. The analysis revealed three suitable directions to construct the underground: Sviatoshyn–Brovary, Kurenivka–Demiyivka, and Syrets–Pechersk. The former two were chosen to be built. It was decided that the first section of underground openings along these two directions— in length—would be constructed by 1950. This plan, however, did not come to fruition. The final preparations were not conducted until 1949. By the decision of the Ministry of Communication, the Kyivmetrobud enterprise was established on 14 April. Only then did the underground construction finally begin. History First phase Construction planning of the first line (M1) of the Kyiv Metro began in August 1949. The initial plan had seven stations, and a project design competition for the stations was announced in 1952. The competition commission wanted all seven stations to have a Stalinist style: richly decorated and adorned with Communist symbols and national (Ukrainian) motifs. However, the competition was cancelled, partly due to the cancellation of the two westernmost stations and partly due to the Khrushchev Thaw, which made the Stalinist style inappropriate. Tunnel drilling was frequently met with unanticipated difficulties—such as unexpected drilling terrain and underground water sources—which caused the construction to fall severely behind schedule. In December 1951, the first connection between separate tunnels was made between Dnipro and Arsenalna stations, while the last was created between the Vokzalna and Universytet stations, in May 1959. Various difficulties arose during the construction of the underground. For example, Arsenalna station was constantly flooded by underground waters despite its exceptional depth, which was originally intended to prevent flooding. Moreover, the project came to a standstill in 1954 when funding was instead allocated to the development of unused land fit for agriculture. Nevertheless, work progressed. At the beginning of 1958, a competition for the best design of stations was announced.
A commission analyzing the works was created, consisting of activists, engineering and architecture experts from both the Ukrainian SSR and the USSR, sculptors, artists, writers, and the heads of the organizations Glavtunelstroy, Metrogiprotrans, and Kyivmetrobud. In July, an exhibition of 80 works was organized. The best five designs were used for the first five stations of the Kyiv Metro: Vokzalna, Universytet, Khreshchatyk, Arsenalna, and Dnipro. During this construction, of concrete was poured, and of granite and marble were used to decorate the stations. On 22 October 1960, a test run was made by Alexey Semagin, a motorman of the Moscow Metro, and Ivan Vynogradov, a former train operator from Kyiv's central railway station, Kyiv-Pasazhyrskyi. Semagin drove, with Vynogradov acting as an assistant. On 6 November 1960, the anniversary of the October Revolution, the five-station, Vokzalna–Dnipro portion of the east–west line (today known as the Sviatoshynsko-Brovarska Line) was opened. That day, the motormen changed their places, and thus Ivan Vynogradov is now deemed the first motorman of the Kyiv Metro. Opening and aftermath The underground was not available to the public the same day the line was declared open. During the first week, special passes had to be shown to ride the newly opened section. Regular public service only started on 13 November. At the time, the stations had no turnstiles; tickets were shown to the inspector. Immediately after the Kyiv Metro's opening, the need for a train depot became a problem. It was not feasible to construct a permanent on-ground depot as the stations were deep underground, yet the creation of an underground depot was costly. At first, the problem was solved by creating a temporary depot next to Dnipro station, where Kyivmetrobud had its headquarters at the time. Some warehouses were constructed as well so that necessary parts could be replaced if needed. Unfortunately, this temporary depot was not connected to the main underground line, so an overhead crane was used to move trains to the depot. Simultaneously, another logistics problem appeared: there was no connection between the underground and the railway. At the time, the metro line was served by type Д underground trains (produced by Metrowagonmash). To deliver them to the underground, the trains had to be placed on a special carriage at Darnytsia railway station. The carriage was then transported by trams (via the now-defunct tram line along the Dnieper river) to the temporary depot, where the trains were then lifted onto the railway turntable. Since the procedure was cumbersome and tedious, most trains rested overnight in the tunnels and came to the depot only for inspection and repairs. At the time, the Kyiv Metro was under the jurisdiction of the USSR's Ministry of Communication, and not of Kyiv's city council. Until 1962, the motormen were mostly from Moscow, as no institution in Ukraine provided the appropriate training. Some Kyiv railway engineers were employed (such as Vynogradov), but they had to qualify as motormen in Moscow. Extension of the first line The second stage of construction of the first line started in 1960, and finished on 5 November 1963, with the opening of a section with two stations: Politekhnichnyi Instytut and Zavod Bilshovyk (now Shuliavska station). A year later, new type E underground trains were introduced.
In 1965, the line crossed the Dnieper river on the newly constructed Kyiv Metro Bridge and Rusanivskyi Metropolitan Bridge and was extended to the large residential areas being built along the east bank of the river. Like Dnipro station, the Hidropark, Livoberezhna, and Darnytsia stations were all built above ground. Additionally, to resolve the problem of the temporary depot, a permanent depot (Darnytsia Metro Depot) was built between Livoberezhna and Darnytsia stations; importantly, it had access to Kyiv-Dniprovskyi railway station. New trains could now be transported directly into the depot, which, being connected to the metro line, could also easily accommodate trains. A few developments were made to the old stations. Since Khreshchatyk station was opened with only one exit, a second one was built and opened on 4 September 1965. A third exit was finished in May 1970. While being modernized, the station was lengthened by 40 meters. A further extension of the first line to the east was made in 1968, when Komsomolska station (now Chernihivska station) was opened, along with another facility where the trains could be repaired. When it was discovered that Leningrad's Metro E-type underground trains were not suitable for the platform screen doors of new stations under construction there, they were delivered to Kyiv in 1969; meanwhile, Kyiv's older D-type trains, which did not have any problems with these new stations, were transported to Leningrad. In 1970, an additional carriage was added to every train, for a total of four. A fifth was added two years later. Since 1972, the number of carriages has remained constant (as of 4 July 2017). On 5 November 1971, Kyiv's then-westernmost neighborhoods were connected to the underground. Three new stations were opened: Zhovtneva (now Beresteiska), Nyvky, and Sviatoshyno (now Sviatoshyn, with the final "o" removed). Thus, the underground was extended to 14 stations and a length of . On 23 August 1972, the billionth passenger of the Kyiv underground entered the Arsenalna station. That passenger, a worker at the "Arsenal" factory, was given a yearly underground pass as a present to mark the occasion. Finally, in 1973–1974, another modernization of the underground was carried out, the third change to the rolling stock. New type Eм underground trains from Leningrad's train building facility were delivered to Kyiv. Further extensions on this line occurred in 1978 (with Pionerska station, now Lisova) and 2003 (with Zhytomyrska and Akademmistechko stations). Second line Construction of the second line (M2) began in 1971. The line became known as "Kurenivsko-Chervonoarmiyska"; however, the name did not completely correspond to the actual route, as the line does not pass through Kurenivka. In the mid-1960s, when plans for the line were made, the construction was expected to go towards Kurenivka and Priorka, connecting Zavodska station (instead of today's Tarasa Shevchenka), Petropavlivska station near Kurenivskyi park, and Shevchenka Square station under the square. However, as the decision to create the Obolon residential district was made, these plans changed. The new line was constructed in the open and its stations were not built deep underground. Because of this, historical buildings were demolished in the Podil neighborhood. During construction, archaeologists discovered a house dating from Kyivs'ka Rus' (879–1240) under the Red Square (now Kontraktova Square). The discovery helped historians understand the life of Podil inhabitants in the Middle Ages in much greater depth.
This archaeological research was one of the reasons the underground construction was suspended, which is why the small stretch was opened only on 17 December 1976. It contained three stations: Kalinina Square (renamed Ploshcha Zhovtnevoi Revolutsii on 17 October 1977 for the upcoming 60th anniversary of the October Revolution; now Maidan Nezalezhnosti), Poshtova Ploshcha, and Chervona Ploshcha. Additionally, there was a repair facility near Chervona Ploshcha and a transfer corridor to the older (M1) line, separate for trains and passengers. This corridor allowed the exchange of rolling stock, and more importantly, allowed trains on the new line to access the Darnytsia depot until a new one appeared in 1988. Simultaneously, an extension on the first line was made eastward. In 1978, Pionerska station was opened, which might have been the next step towards the realization of Stalin-era projects (the line was planned to be extended to Brovary, the satellite town of Kyiv). Nevertheless, construction on the first line came to a halt, and, as of 4 July 2017, there were no plans yet to extend the line eastwards beyond Lisova station, so construction efforts were shifted to the second line. The second line—which became known as the Kurenivsko-Chervonoarmiyska line (today the Obolonsko–Teremkivska line)—continued expanding. On 19 December 1980, three new stations—Tarasa Shevchenka, Petrivka (now Pochaina station), and Prospekt Korniychuka (now Obolon station)—were opened on the northern part of the line. After another two years, the Minska and Heroiv Dnipra stations were added to the second line, on the 65th anniversary of the October Revolution. This connected the then-largest residential district of Kyiv to the rapid transit network. Construction did not stop at the southern end of the line. Ploshcha Lva Tolstoho and Respublikanskyi Stadion (now Olimpiiska station) were opened on 19 December 1981, followed by Chervonoarmiiska (now Palats "Ukrayina" station) and Dzerzhynska (now Lybidska station) on 30 December 1984. Construction then started to the southwest of the newly opened terminus but was soon interrupted by an accident while workers were drilling through the difficult terrain under the Lybid river. Further work only continued 21 years later, in the summer of 2005. Infrastructure development In 1980, while the construction of the M2 line was at its height, new rolling stock from Metrowagonmash (81-717/714) started to be used. In 1985, a new train repair plant was built, first called ОМ-2. Additionally, once it appeared that the corridor between October Revolution Square and Khreshchatyk was not able to cope with the stream of passengers, a second corridor was built (informally called the "long" corridor), opening on 3 December 1986. The same year, a new junction to the Darnytsia depot was built (three tracks were laid, of which two are for passenger traffic, while the third allows trains to exit the depot). On 30 December 1987, the second (eastern) exit from Hidropark station was completed, although it opened only the following summer. Lastly, on 19 March 1988, a new depot (called the Obolon Depot) was created to serve the M2 line. Third line Soviet period The first event connected with the construction of the third line (M3) was the creation of a new tunnel on the M1 line between the Vokzalna and Khreshchatyk stations. In the middle of it, a new station, Leninska, was to be built, specially designed as a transfer hub to the future M3 line.
When the new tunnel was ready to be connected to the rest of the line, service in the old M1 tunnel between Vokzalna and Khreshchatyk was interrupted from 31 March to 1 October 1987. During this time, two shuttle trains carried passengers from Vokzalna to Universytet stations, and the tunnel between Universytet and Khreshchatyk was closed. To manage passengers, additional temporary lines of buses and trolleybuses were created. The Leninska station itself was inaugurated on the 70th anniversary of the October Revolution, on 6 November 1987, and is now known as Teatralna station. The older tunnels, over each, partially cut by the ceiling of the Zoloti Vorota station, still exist and are now accessible only for maintenance by staff. Construction of the third line (called the Syretsko-Pecherska Line, the northwest–southeast axis) started in 1981. The initial segment was finished on 31 December 1989 and featured three stations: Zoloti Vorota, Palats Sportu, and Mechnikova (now Klovska). The first two were transfer hubs to other lines; the third was the start of a technical tunnel between lines M2 and M3, which allowed trains from the M3 line to use the Obolon depot on the M2 line. Until 30 April 1990, the exit from Zoloti Vorota station (the then-northern terminus of M3) was only possible via Leninska station. An exit from Zoloti Vorota onto Volodymyrska Street was only opened on 1 May 1990. A total of 31 stations and of passenger tunnels were constructed during the first 31 years of the subway system in the Ukrainian SSR. Post-independence Despite severe economic problems at the dawn of Ukrainian independence, momentum on the Kyiv Metro construction was not lost. On 30 December 1991, Druzhby Narodiv and Vydubychi stations were opened (Pecherska station was opened six years later, on 27 December 1997, since its construction had to be frozen due to hydrotechnical problems). In 1992, the line crossed the Dnieper river to Slavutych and Osokorky stations via the Pivdennyi Bridge. Initially, this bridge (which was open from the sides) was intended to be covered by an aluminium construction, but this was found to be ineffective protection against snow and rain. While the construction of the bridge segment was in progress, Telychka station was also under construction. However, due to factory closures in the surrounding heavily industrialized area, the station construction site was abandoned. What remains of it now is a platform and a ventilation shaft, so the station can be used in emergency cases (e.g. a fire or train breakdown). Two years later (on 28 December 1994), the M3 line was further extended to the east, when Pozniaky and Kharkivska stations were revealed to the public. Pozniaky station was the first distinctly three-floor underground station in Kyiv; the lower floor was used by the underground, while the middle and the top floors were used by small market stalls. This structure later enabled the station to easily become a transfer hub for the upcoming Livoberezhna line (M5) by replacing the stalls with passenger transfer areas. The opening of these stations was crucial for the rapidly developing Poznyaky and Kharkivskyi residential districts. In the mid-1990s, construction of a northwestern expansion to the older Syrets district began, with the first extension made on 30 December 1996. Then, Lukianivska station became the new terminus of the line. 
Lvivska Brama station (between Zoloti Vorota and Lukianivska) was also under construction, but work on it came to a halt in 1997 due to a lack of money and disagreement on how Lvivska Square should be reconstructed. On 30 March 2000, the next station on the line, Dorohozhychi, was opened. Another station, Hertsena station (situated between Lukianivska and Dorohozhychi stations), was also planned—even potentially under initial construction—but ultimately abandoned. Neither the current official scheme (see below) nor the earlier one indicates this station. Four years later, on 14 October 2004, the M3 line was further extended to the northwest, ending at Syrets station (still the current terminus of the line). At the same time, works were done on the southeastern stretch of the line, with Boryspilska station opening on 23 August 2005 and Vyrlytsia opening on 7 March 2006. At first, Vyrlytsia was planned only as an emergency exit, not as a station. However, the City Council decided in November 2005 to convert the exit into a full station, which is why this station has side platforms. On 23 August 2007, the third and newest depot in the Kyiv Metro—Kharkivske Depot—was opened. In September 2005, construction began on Chervonyi Khutir station, the last station on the M3 line. In April 2007, the Mayor of Kyiv, Leonid Chernovetskyi, fearing that the station would have low ridership, claimed the station would be mothballed, as "animals do not ride in the underground" (he meant that the station was situated near the forest, with not many buildings nearby, so there were no people to use the station). Nevertheless, the works continued, and, after a few months' delay, the station opened on 23 May 2008, for the Kyiv Day celebration. The timing coincided with the upcoming mayoral elections on 25 May 2008. This station had the second-lowest ridership in the Kyiv Metro as of 2017. The latest phase (1991–2013) Until the 2000s, the M1 line terminated at Sviatoshyn station (renamed in 1993 from Sviatoshyno station) at the western end. However, new apartments had emerged since 1971, mostly in Bilychi and western Sviatoshyn, which created a need for an extension of the line to these housing areas. Construction of the final section of today's M1 line started in the fall of 2000. Zhytomyrska station and Akademmistechko station were built, with delays due to irregular financing. Peremohy Avenue was partially closed from 14 January 2001 to 25 December 2002, to construct the tunnels beneath the street. This final extension of the M1 line was opened on 24 May 2003. Construction of the southwestern segment of the M2 line restarted in the summer of 2005, 21 years after the Lybid river accident. Difficult terrain made the work fall behind schedule, partly due to accidents (such as one in January 2006 during the construction of the Demiivska station). On 15 December 2010, Demiivska, Holosiivska, and Vasylkivska stations were opened. The 50th station, Vystavkovyi Tsentr, was unveiled a year later, on 27 December 2011. Ipodrom followed suit on 25 October 2012. Initially, Ipodrom was planned to be opened together with Teremky station in November 2012, but, owing to a lack of financing and delays, only Ipodrom was opened by then (ahead of schedule, partly thanks to funds reallocation, and partly because of the 2012 parliamentary elections due on 28 October). The Ipodrom–Teremky section would wait for underground construction funds until 2013.
As there was no turnaround option for trains there, a shuttle train ran between Vystavkovyi Tsentr and Ipodrom stations until Teremky station was opened on 6 November 2013, to commemorate the 70th anniversary of Kyiv's liberation. As of , this was the last extension or opening of any underground-connected facility (not counting the opening of the second exit from Osokorky station, which was built together with the station but opened only in 2014). The 2022 Russian invasion When the 2022 Russian invasion began on 24 February, regular service on the metro was suspended. A reduced schedule was adopted, with limited services running between 8:00 and 19:00. All underground stations (47 of the 52 total) have remained open 24 hours a day to function as bomb shelters. In the wake of missile attacks and air raids, metro stations have often been used as bomb shelters. According to Kyiv's mayor Vitali Klitschko, on 2 March 2022, as many as 150,000 residents of Kyiv sought shelter in the city's metro. Metro tunnels flooding in 2023 The expansion of the second line to the Teremky neighborhood had faced problems since the beginning of its construction. Demiivska station was originally planned to open in 2009, but the opening had to be postponed until 2010 after a retaining wall collapsed in 2006, one year after construction started, bringing down a gantry crane and damaging city utility lines. Later, in 2008, the tunneling shield got stuck several times during tunnel construction, and by 2009 there had been five fatalities in various accidents on the line's construction. On 8 December 2023, years after the controversial construction was finished, the Kyiv City Council reported that the tunnel of the second line between Lybidska and Demiivska stations had been depressurized and that the tracks were flooded. Because of this accident, the rest of the line towards Teremky had to be closed, which caused serious transportation problems, since that metro line was the main mode of transport in that part of the city. The day after the announcement, similar problems were discovered at the other end of the line, at Pochaina station, which was built in 1980. On 9 December, Prime Minister of Ukraine Denys Shmyhal convened an emergency meeting of the State Commission on Emergency Situations regarding the accident. Police searched the engineering organisation responsible for the tunnels, the administration of the Kyiv Metro, and the Kyiv City Council, obtaining the construction documents for an investigation into possible negligence. On 13 December, a so-called 'shuttle traffic' arrangement began to operate: the second line was split into two separate parts with the tunnel reconstruction site between them, and replacement buses were launched along the full route. On 17 December, the buildings of an abandoned Demiivskyi market located above the damaged tunnel began to subside. On 18 December, the administration of the Kyiv Metro announced that repair works between Tarasa Shevchenka and Pochaina stations would start in the summer of 2024 and would not require train movement to be stopped. Kyivpastrans announced compensation for e-tickets spent on transfers between the 'shuttle traffic' sections. On 24 January 2024, the Kyiv Department of Transport Infrastructure announced that, according to the results of an expert assessment, the cause of the accident was the poor quality of the design and construction work.
On 11 September 2024, the reconstruction of the tunnel was announced to be complete, and the Obolonsko–Teremkivska line resumed normal operation the next day. Modernization Rolling stock In the 1990s, the Kyiv Metro authority was mostly concerned with new underground construction, rather than the modernization of old infrastructure. This changed in March 2001, when an experimental modification of the 81-717/714 trains, the Slavutych 81-553.1/554.1/555.1 wagon, was launched from the Obolon depot. It included an increased number of electronic devices and induction motors (instead of the synchronous motors in earlier series). The train model, however, was not released into mass production, so the test train remained the only one from its series. This experimental 81-553.1 train is still operated on the Obolonsko–Teremkivska line between 7 and 10 a.m. on weekdays. The 81-717.5М/714.5М modification of the 81-717/714 trains arrived three years later at the Darnytsia depot. Another modification of the 81-717/714 series, the 81-7021–7022, made by the Kryukiv wagon-manufacturing plant (KVBZ), became the first Ukrainian-made underground train in the Kyiv Metro. This new model was first unveiled to the then-President of Ukraine, Viktor Yushchenko, at the opening of the Boryspilska station. Five months later, a sample was sent to the Darnytsia depot for trials, where an error was detected. Further tests were conducted from 17 June 2006 in the Obolon depot. Finally, in July 2008, the trains were accepted by the governmental commission and were given a special license allowing them to be mass-produced (following Ukrainian technical standards). They started carrying passengers in 2009. The 81-7021–7022 trains were supposed to replace the older 81-717/714 trains, but, as of 5 July 2017, there was only one train of this model. The next modified trains, the 81-540.2К/81-714.5М series made by Wagonmash (St. Petersburg) and Metrowagonmash, another modification of the 81-717/714, arrived in 2010, with additional trains put on the rails in 2013. Finally, in 2014–2016, new 81-7080 trains were delivered from KVBZ to the Kyiv Metro; these are now actively used on the M1 line. In 2023, the Warsaw Metro donated 60 cars of its decommissioned 81-717/714 series trains to the Kyiv Metro. They will be used as parts donors to overhaul the existing rolling stock or enter passenger service to strengthen the current fleet. The first set entered service on the M3 line on 1 November 2023. Stations In October 2005, new escalators were installed in Lisova and Syrets stations, as a modernization measure in the former case and as part of the pylon station's standard equipment in the latter. The following year, Darnytsia station was modernized, and a second exit (towards Popudrenka Street) was built. In March–May 2017, a ₴24.84 million (US$915,900) refurbishment of Livoberezhna station was carried out ahead of Kyiv hosting the Eurovision Song Contest for the second time. Infrastructure Network map Lines As of 2019, 3 lines are operational, with a total length of . The additional that appear in the table account for technical tunnels, used by trains to switch from one line to another. The lines' names are derived from the terminal stations proposed in the original 1945 construction plan, which is why the M1 line is called "Sviatoshynsko–Brovarska", despite finishing 11 kilometers short of Brovary's city center (the official Metro site says "the extension to Brovary is possible in far future"). Similarly, the M2 line's route does not pass through Kurenivka.
In February 2018, the Kyiv City Council renamed the M2 line from "Kurenivsko–Chervonoarmiyska" to "Obolonsko–Teremkivska". Colloquially, lines are rarely referred to by their full names, but rather by their numbers or colors (e.g. the "Syretsko–Pecherska line" is usually referred to as "The Third line" or "The Green line"). Line 1 (Sviatoshynsko-Brovarska) The Sviatoshynsko–Brovarska line is the first line of the Kyiv Metro, with a length of . All of its stations on the eastern bank of the Dnieper river are either at or above ground level; this is attributed to an experiment similar to Moscow's Filyovskaya Line. The milder Ukrainian climate, however, prevented these stations from severely deteriorating, which was why the extensions of 1968 and 1978 were also kept from going underground. The five original stations on this line managed to survive Nikita Khrushchev's campaign against decorative "extras", although more pompous projects had been proposed in Stalin's time. These five stations are recognized as architectural monuments and thus are protected by the state. The M1 line is colored red on maps and carries about 550,000 passengers daily, making it the busiest line. Line 2 (Obolonsko–Teremkivska) The Obolonsko–Teremkivska line is the second line of the Kyiv Metro, with a length of . It first runs southwards along the right bank of the Dnieper river, then deviates from the river towards the southwest beyond Lybidska station. Most of this line's stations were built in the 1970s and 1980s. Architecturally, the line shows some of the best examples of late-Soviet architectural features (with Lybidska station being an architectural monument protected by the state). The line's newest stations, built in 2010–2013, are a good example of modern stations with national decorative motifs (such as at Teremky station) and have access for disabled persons. The M2 line is colored blue on maps and carries more than 460,000 passengers daily. Line 3 (Syretsko-Pecherska) The Syretsko-Pecherska Line is the third and longest line in the Kyiv Metro, with a length of . It is a major northwest–southeast axis of the Kyiv rapid transit system. It starts on the western side of the Dnieper river before crossing it on a partially covered bridge and then going on to the southeastern residential districts of Kyiv. The line is the newest and shows some post-independence decorative motifs. Technically, it also represents a significant advance, with most of the platforms longer and wider than those of older sections and with some stations having access for disabled persons. The Vydubychi–Slavutych tunnel is the longest in the Kyiv Metro, as separates the two stations. This is partly because several stations were not completed—two stations had construction suspended (and never finished) while another station was only planned. The M3 line is colored green on maps and carries more than 300,000 passengers daily. Stations The Kyiv Metro follows a standard Soviet design: a triangular layout of three lines that intersect in the city center, making six radii, and stations that are built very deep underground and could potentially double as bomb shelters. Technical data Currently, there are 52 stations. There are 20 deep-lying stations, of which 17 are of pylon type (including Arsenalna station, the only "London style" station in the former USSR still in existence) and 3 are of column type.
Of the 26 sub-surface stations, 13 are of pillar-trispan type, 3 are side-platform pillar bi-spans, 8 are single-vaults, and 3 are asymmetrical double-deck bi-spans. In addition, 6 stations are located above ground; of these, four are at surface level and two are on flyovers. Most of the stations have large vestibules, some at surface level whilst others are underground and interlinked via subways. Access for disabled persons, previously overlooked, has become an important issue, and all new stations have been constructed with the necessary provisions. Arsenalna station is distinguished as one of the deepest stations in the world, at 105.5 m below ground (considering the distance between the surface above the station and the station itself). Dnipro, the next station proceeding towards the Dnieper, is overground, which gives the Dnipro–Arsenalna tunnel the largest elevation difference between metro stations in the world, considering the surface's relative height. There are three unfinished stations: Lvivska Brama, Telychka, and Hertsena, all of which are on the green M3 line. Hertsena had only the initial stages of construction completed, while the former two are more advanced in construction (about 30–70% completed) but lack conventional exits. The Kyiv Metro is generally poorly adapted for disabled persons; only stations constructed after 2005 are better adapted, featuring, for example, elevators and turnstiles wide enough for a wheelchair to pass. Architecture Like all the Metro systems of the former Soviet Union, the Kyiv Metro is known for its vivid and colorful decorations. The original stations from the first stage of construction are elaborately decorated, showing postwar Stalinist architecture blended with traditional Ukrainian motifs. Many of the first station designs proposed at the beginning of the 1950s were full of rich decorative elements such as mosaics, ornaments, bas-reliefs, sculptures, and marble. Each station was to have its own original design. These stations were to be constructed in a monumental style like those built throughout the 1950s in Moscow and Leningrad. For example, Arsenalna station, instead of a small central hall, was to have had a wide hall with sculptures of warriors of the Russian Civil War and World War II; Vokzalna station was to be decorated with ornaments and bas-reliefs on columns and a big decorative map of the Ukrainian SSR; and, based on its first project drawings, Politekhnichnyi Instytut station was to have large mosaic panels depicting aspects of the natural sciences. By the end of the 1950s, a period of functionality and struggle against architectural extravagances had begun in Soviet architecture. This campaign, propagated by Khrushchev, resulted in the loss of many unique projects, with the resulting stations being finished with few decorations compared to the 1952 projects. Universytet station, however, was less simplified than many others and retained its many pylons adorned with the busts of famous scientists and writers. The next stations, which opened in 1963, had an ascetic and strict appearance. Open-air stations of the 1960s and the underground stations of 1971 were built to a standard primitive design called sorokonozhka, named for the many thin supports on both sides of the platform. Functionality became the most important factor in newer designs, and stations built at that time were almost identical in appearance, save for the tile patterns and pillar covering material.
Only in the 1970s did decorative architecture start to make a recovery. The stations built from the 1980s onwards show more innovative designs. Some older stations have undergone upgrades to lighting and renovation of some decorative materials. After the declaration of Ukrainian independence following the dissolution of the Soviet Union in 1991, some of the Soviet symbols originally incorporated into the decor were adapted to modern times or removed altogether by altering the stations' architecture. Most remaining (if not all) Soviet symbols were later removed due to the 2015 decommunization laws. Passenger flow As of 2015, Lisova was the busiest station in the Kyiv Metro, passing an average of 58,500 passengers per day, with Vokzalna second at 55,900. The congestion at Vokzalna station, especially during rush hours, is particularly problematic due to the station having only 3 escalators and only one access point for entering and exiting the station. Akademmistechko is third at 51,100. These are the only stations that handle more than 50,000 passengers per day. The next busiest, Livoberezhna, passes 48,100 passengers per day. All four of these busiest stations are on the red M1 line. The fifth busiest station is Pochaina with 46,200 passengers per day (on the blue M2 line), with Minska not far behind at 45,500. On the Syretsko-Pecherska line, the busiest station is Lukianivska, with just under 40,000. In contrast, the least busy station in the system is the Dnipro station (M1), with only 2,800 passengers per day. Chervonyi Khutir (M3) is another station with a daily passenger flow of less than 5,000. The emptiest station on the M2 line is Poshtova Ploshcha, with 8,900 passengers per day. Transfer hubs to other lines and other means of rapid transit Vokzalna station is one of the most important transport hubs in the Kyiv Metro. It gives direct access to the Kyiv-Pasazhyrskyi railway station, the largest train station in Ukraine, as well as to the Kyiv Light Rail line to Borshchahivka and the Kyiv Urban Electric Train ring. Additional important transport hubs include Demiivska station (to the Kyiv Central bus station); Vystavkovyi Tsentr and Lisova stations (mostly to suburban buses, but to the Pivdenna and Darnytsia bus stations as well, respectively); and Livoberezhna, Darnytsia, and Pochaina stations, used by commuting passengers from Troieshchyna (the largest residential district in Kyiv). Distribution of stations across the city Metro stations cover all 10 subdivisions of Kyiv; however, their spread is uneven. Shevchenkivskyi district has the most, at 13 stations; Pecherskyi and Holosiivskyi raions both boast 11 stops; Darnytskyi district has 7; Obolonskyi, Sviatoshynskyi, and Dniprovskyi districts each feature 4; Podilskyi district offers 3; and Desnyanskyi raion, despite being the most populous district in Kyiv, is relatively deprived of Metro connections, with only 2 stations. This distribution reveals that the western side of Kyiv contains the majority of stations (including the M2 line, which does not cross to the eastern side of the Dnieper). Only 12 stations reside in the eastern part of the city, compared to 40 on the western side. Rolling stock As of 2016, there were 824 wagons in operation, with another 5 spares. Most of the wagons are from the Soviet era, largely from the 81-717/714 series, with some trains from the older D-type and E-type series. The stock, however, is gradually being refreshed.
The "Slavutych" trains began to replace the old trains of predominantly 81/7021-7022, 81-540.2К/81-714.5М, and 81–7080 series. The metro's first line was served by carriages of D-type since its opening. In 1969, all these carriages were given to Saint Petersburg Metro. In exchange, the line received carriages of series E-type, and later carriages of series Em-501, Ema-502, and Ezh (Ezh1). In 2014, most carriages of types E and Ezh were modernized with new engines, interiors, and exteriors, converting them into a type similar to type 81-7021–7022; these are known as type 81-7080–7081 or E-KM. All of the trains contain an audio system, which announces the stations and transfer opportunities to other Metro lines. At Arsenalna station, there is an announcement for museums in the area. In addition, most of the older trains are fitted with an overhead video information system, which provides visual information to passengers regarding the current and the next stations and transfer opportunities between lines. While the train is in transit between stations, the system displays advertising (both in video and ticker forms), recreational video content, and local time. Kyiv Metro also has some special service trains, namely: 2 trains checking the electrical system of the Kyiv Metro, which were conventional D-type and E-type trains before conversion A D-type car that measures tracks' length Laboratory car based on an Ezh-type train An E-type freight car A track repair vehicle In December 2016, the Kyiv City Council announced that intended to buy 709 new trains by 2025 for ₴14.96 billion (US$572.6 million at the time). Of these, 276 would replace older trains whose operation terms have expired, 62 would be added to the Syretsko-Pecherska line to ensure regular scheduling, and 371 would be introduced across newer sections of the metro system. After a visit of the Mayor of Kyiv Vitali Klitschko to Warsaw in January 2023, he informed that Warsaw will donate 60 81-717/713 wagons to the Kyiv Metro. In operation Out of operation Travelling Management The Kyiv Metro is managed by the city-owned municipal company Kyivskyi Metropoliten (formally formerly known as Kyivskyi ordena Trudovoho chernovoho prapora Metropoliten imeni V.I. Lenina) which was transferred in the early 1990s from the Ministry of Transportation. The Metro employs several thousand workers in tunnel, track, station, and rolling stock management. In addition to being state-sponsored for operation, income comes from ticket sales and advertisements in stations (controlled by a daughter company Metroreklama). Metro lines are being constructed by Kyivmetrobud public company which allocates segments of construction to individual brigades that are responsible for tunnel and station construction. Kyivmetrobud is directly funded by the profits of the Kyiv Metro and from the city and state budgets. Most of the state funding comes from Kyiv's municipality, while additional subventions are directly received from the state budget since the fares do not correspond to the optimal price which will give the possibility for the Kyiv underground to develop itself. In 2016, the Kyiv Metro received ₴76.1 million in net income (about US$2.85 million), compared with its ₴119 million loss a year earlier. The current director is Viktor Brahinsky. Ticketing A single ride costs ₴8.00 ($0.29) regardless of destination and number of transits within the metro. The ride is paid for by paper QR tickets or contactless cards. 
Contactless cards Contactless (RFID-based, MIFARE Classic and Ultralight) cards are used to enter the metro. A card can be purchased for a small fee (₴15 ($0.54), refundable should one want to return the card) from cashiers and loaded with up to 50 trips at once (so as not to exceed the card's 100-ride limit). Prices follow a graduated scale: the per-ride price of a single top-up drops by ₴0.30 for every full ten rides purchased at once. Thus the first 9 ride accesses cost the standard fare; a top-up of 10-19 rides costs ₴7.70 per ride, one of 20-29 rides ₴7.40 per ride, and so on, so that loading the card with 50 rides costs only ₴6.50 ($0.23) per ride (a short worked sketch of this arithmetic follows at the end of this passage). The passenger can also pay for the fare using a bank card. MasterCard PayPass and Visa PayWave cards are accepted. Google Pay and Apple Pay can also be used. The cards can be recharged either by a cashier or by using a terminal. The terminals accept hryvnia paper bills in denominations of ₴1 to ₴50, but do not return change; instead, they refill the card with the maximum possible number of rides for the sum of money deposited into the machine, and store the remainder in the system, to be used at the next refill. Monthly or two-week passes with unlimited rides (₴380 ($13.46) per month) or a limited number of rides (62 or 46 rides per month, or half as many for a 15-day pass, at ₴3.87 and ₴4.13 per ride, respectively) can also be purchased. A seven-minute second-use lockout is imposed on all unlimited cards to prevent abuse. Quarterly and yearly tickets were used until December 2009, when they were withdrawn because they generated an estimated ₴115.6 million (about $14.5 million) of losses annually. From the late 1990s until the early 2000s, monthly unlimited metro passes with a magnetic strip were used. Due to reliability and counterfeiting issues, these were phased out in favor of RFID cards. Historical fares Originally a Metro ride cost 50 kopecks; however, in 1961, following a revaluation of the Soviet ruble, the fare was fixed at 5 kopecks for the next 30 years. After 1991, as Ukraine suffered from hyperinflation, the price gradually rose to 20,000 karbovanets in 1996. Following the 1996 denomination, 20,000 karbovanets became ₴0.20 (20 kopiyok), although the karbovanets sum was still payable up to 16 September that year. Since then, the price has risen to ₴0.50 in 2000 and ₴2.00 in 2008. In November 2008, when the price was increased to ₴2.00, public protests took place and the Antitrust Committee of Ukraine ordered the price reduced to ₴1.70. In September 2010, the price was increased back to ₴2.00. In February 2015, the price was further increased. The last increase occurred on 15 July 2017. The fares are set by the Kyiv City State Administration subject to the approval of the Ministry of Infrastructure. A proposal to set prices according to the distance traveled, or the time spent in the metro, is being discussed. For this purpose, specially built "long" exit turnstiles with card readers at the far end have already been installed at some station exits. Tokens For many decades, plastic tokens were used for the turnstiles; one token per person could be bought from station cashiers or automatic exchange machines. Green tokens were used in 2000-2010 and 2015-2018; blue tokens were used in 2010-2015 and from 2018. Initially, the green (old) tokens were to be replaced by QR-code tickets (introduced in August 2017) and RFID cards. The use of these tokens was gradually phased out.
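The graduated top-up pricing quoted above under "Contactless cards" can be expressed compactly in code. The following Python sketch is illustrative only: the function name, the 50-ride cap per top-up, and the flat ₴0.30 step are assumptions based on the figures quoted in this section, which may since have changed.

def per_ride_price_uah(rides):
    """Per-ride price (in hryvnia) for a single top-up of `rides` trips,
    assuming the graduated scale quoted above: a base fare of 8.00 UAH,
    reduced by 0.30 UAH for every full ten rides bought at once."""
    if not 1 <= rides <= 50:
        raise ValueError("a card is loaded with 1 to 50 rides at a time")
    kopiykas = 800 - 30 * (rides // 10)  # work in kopiykas to avoid float error
    return kopiykas / 100

# Reproduces the figures quoted in the text:
assert per_ride_price_uah(9) == 8.00   # 1-9 rides: standard fare
assert per_ride_price_uah(15) == 7.70  # 10-19 rides
assert per_ride_price_uah(25) == 7.40  # 20-29 rides
assert per_ride_price_uah(50) == 6.50  # a full 50-ride top-up
print(f"50 rides cost {per_ride_price_uah(50) * 50:.2f} UAH in total")  # 325.00 UAH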
On 15 July 2019, Nyvky and Ipodrom stations became the first to stop selling and accepting tokens. Tokens were scheduled to be sold until 30 October 2019 and accepted until 3 November 2019; however, tokens were still being sold and used on 15 November 2019. In April 2020, tokens finally went out of circulation. Planned improvements and expansion In 2012, plans for the Kyiv Metro aimed to nearly double its length by 2025. Whilst completion of this plan is not considered to be feasible, several new stations have opened since the turn of the millennium. Construction of additional lines Line 4 (Podilsko-Vyhurivska) The fourth line of the Kyiv Metro (also known as the Podilsko-Voskresenska line) will connect the northern Troieshchyna districts in the east with the future business center of Rybalskyi Peninsula on the Dnieper River, and from there arc around the Podil neighborhood. The line will continue westwards along the northern slope of the Starokyivska Hora and into the northwestern part of central Kyiv, where it will turn south and reach the Kyiv-Pasazhyrskyi Railway Station. In doing so, it will offer transfers to all three other Metro lines and thus relieve the over-congested transfer points in the center. The last planned stage will connect the line to Kyiv's Zhuliany airport and the residential districts along the way. This line first appeared in plans in 1980, when a new plan for Kyiv Metro development appeared. The section between Voskresenka (2 km south of Troieshchyna) and Vokzalna (through Chervona Ploshcha (now Kontraktova Ploshcha station), instead of today's plan to direct it via Tarasa Shevchenka), was planned to be open by 2000. Due to protests over the passage of the line through the historical neighborhood of Podil, the line was rerouted. Construction started in 1993, from the Podilsko-Voskresenskyi bridge. It was later stopped due to financial difficulties. This remains the only built section of the line. A second attempt was made in 2004, and it was expected that the bridge would be opened three years later, but work was again halted because the bridge had to pass through Rusanivski Sady, where the houses were to be demolished without compensation, which resulted in protest from residents. Other factors that contributed to the suspension of the project included problems with construction and the financial crisis of 2007–2008. Work on the bridge was only restarted in mid-2017, and it was forecasted to be opened by the end of 2021. The feasibility of the line has been subject to discussion, and alternative projects have been proposed, such as the construction of a light rail or light metro instead. Nevertheless, as of 6 July 2017, the subway option is still preferred, despite the route being altered from plan to plan. In May 2017, the Kyiv City Council signed a memorandum with a Chinese consortium (which includes the China Railway International Group and the China Pacific Construction Group) to assess the current advancement of the project and, eventually, to finally construct the line. The first segments of the line (colloquially called "metro to Troieshchyna") are planned to be finished by 2025, while further construction will most probably end after 2030. Line 5 (Livoberezhna) This line first appeared as a planned line on maps in 1974. The northern end of the fifth line of the system exists already as part of the Kyiv tram. 
However, it will require conversion and, according to some projects, will be temporarily operated as a branch of the Podilsko-Vyhurivska line (4th line). Eventually, a southward extension will commence that will follow the eastern bank of the Dnieper to the Southern Osokorky district. This is the last Metro project envisioned in the present expansion plan and is not expected to be completed until the end of the 2030s. In 2009, it was planned that the first stage of this line would be launched in 2019. As of 6 July 2021, however, no section of the line has been opened. Line 6 (Vyshhorodsko-Darnytska) This is a proposed line of the Kyiv underground, currently at the planning stage. The line is expected to be built in the distant future: as of 2012, any construction was predicted to start only after 2025. However, shortly after the proposal was announced, the project was shelved. Extensions Line 1 extensions One potential long-term extension of the M1 line is to the town of Brovary, to the east of Kyiv. A much more feasible short-term extension is to Novobilychi station, with a depot. In 2011, Volodymyr Fedorenko, then-head of the Kyiv Metro communal enterprise, said this extension might be constructed by 2020. However, this construction has not happened, as the Line 4 construction is currently the absolute priority; moreover, Novobilychi station's location would not bring the line to any significant residential area (Bilychi, the nearest area, is already west of Akademmistechko station), although it would connect with a train station. As of 5 May 2021, no tenders have taken place. The westward extension might continue further into Berkovets. Line 2 (southern extension) A planned extension of the M2 line will feature a side branch: the side branch will go to the Teremky bus station, which opened in December 2016, and to Dmytro Lutsenko street, and will serve the Teremky-2 and Teremky-3 residential areas, while the main line is expected to go further southwest to Odeska station. For the moment, this extension is not a priority. Line 3 (eastward extension) Several extension proposals for the M3 line have been made. In the 2005 general plan, the line was supposed to turn sharply from Chervonyi Khutir to Darnytsia railway station, with a subsequent extension to Livoberezhna. Presumably, the proposal was declined, as later plans do not include any extension of the line beyond Chervonyi Khutir. In April 2017, Kyiv's Mayor Vitali Klitschko and Ukraine's Minister of Infrastructure, Volodymyr Omelyan, announced they "were talking about the project of subway extension from Boryspilska metro station to Boryspil airport with the help of private financing", although the extension is unlikely to happen soon. Line 3 (northward extension) The northward extension to Vynohradar has long been planned, with the first maps appearing in the newspaper Evening Kyiv in August 1970. In February 2017, an article suggested that the plans had already been submitted for review and that construction was due to start by late 2017. The article mentions that the project will feature an engineering solution not previously used in Kyiv: instead of parallel tunnels, one tunnel will be built on top of the other. On 6 July 2017, the Chinese Machinery Engineering Corporation said it was ready to invest in and build the line to Vynohradar. In February 2017, the line's extension and two stations, Mostytska and Varshavska (earlier known as Prospekt Pravdy), were scheduled for opening in late 2019.
In November 2018, Kyiv Metro signed a contract for the construction of the Mostytska and Varshavska subway stations and a branch line toward the Vynohradar station; the deadline for completion was set for 2021. In the contract, it was agreed that the Chamber of Commerce would judge on possible justifiable force majeure that would slow down the work. In early September 2021, the Chamber of Commerce judged there was such, and the expiration of the contract with the Kyiv Metro was to be postponed from November 2021 to May 2023. The opening of the stations was further delayed to 2024 in July 2023. Further plans feature another extension to Marshala Hrechka station with Vynohradar depot built as well. Language Ukrainian, Russian, and English languages When the Metro was opened in 1960, although many workers and all technical-level documents used Russian, all the signs and announcements used Ukrainian exclusively. The lexical similarity of the languages allowed every station to have a Russian translation, and these were often given in Russian language literature and media. However, some Ukrainian names for stations were different from Russian ones, and to signify this, those stations were sometimes partially translated into Russian, effectively blending Ukrainian words into Russian grammar. Examples of this include the station names Ploshcha Zhovtnevoi Revolutsii, Zhovtneva, and Chervonoarmiyska (later renamed to Maidan Nezalezhnosti, Beresteiska, and Palats Ukraina, respectively) which when translated into Russian would become Oktyabrskaya and Krasnoarmeiskaya. The names were instead given as Zhovtnevaya and Chervonoarmeiskaya. In the early to mid-1980s, due in part to both Volodymyr Shcherbytsky's gradual Russification campaign and Kyiv becoming increasingly Russophonic, the metro started to change as well. Although the stations retained their original Ukrainian titles on the vestibules, Russian appeared together with Ukrainian on the walls and replaced Ukrainian in signs and voice announcements. Stations that were opened during this period still had Ukrainian names appearing along with Russian ones on the walls, but now all the decorations where slogans were included, became bilingual. Also during this time, the practice of blending Ukrainian into Russian was dropped, and those selected stations were named in standard Russian translation. During Perestroika in the late 1980s, bilingualism was gradually introduced in signs and voice announcements on trains. Before 1991, this was done with Ukrainian following Russian, but after the republic proclaimed independence in August 1991, the order was changed to Ukrainian preceding Russian. After the fall of the Soviet Union in late 1991, both signs and voice announcements were changed from bilingual to just Ukrainian as the official language of the state. However, the Russian names are still used in Russian-language literature and some documentation, and some of the decorations still feature bilingual inscriptions. The usage of English in stations started just before the Euro-2012 football tournament, and all of Kyiv's underground stations now feature English with Latin transliteration on signs and official maps. English is used for the station arrival announcements on par with Ukrainian. History of station names Some of Kyiv's underground stations have been renamed before and (to a greater extent) after they were completed. 
The only renaming during the Soviet period occurred in 1977, when "Ploshcha Kalinina" was changed to "Ploshcha Zhovtnevoi Revolutsii" for the upcoming 60th anniversary of the October Revolution. During the 1990s, more changes occurred, mostly relating to stations with names connected to Communism. On 15 October 1990, "Chervona Ploshcha" became "Kontraktova Ploshcha", and "Prospekt Korniychuka" became "Obolon". A half-built station, due to be called "Artema", was renamed "Lukianivska". On 26 August 1991, following Ukraine's declaration of independence, "Ploshcha Zhovtnevoi Revolutsii" became known as "Maidan Nezalezhnosti". In 1993, nine stations, most of which bore communist symbolism in their names, were given politically neutral names. In 2011, "Respublikanskyi Stadion" became "Olimpiiska", in honour of the forthcoming Euro-2012 football tournament. In 2018, "Petrivka" (named after the Soviet politician Grigory Petrovsky) was re-designated as "Pochaina", after the river that once flowed nearby, to comply with the 2015 decommunization laws. In 2022, as part of derussification efforts following Russia's invasion of Ukraine, a further group of stations was renamed.
Technology
Europe_2
null
639613
https://en.wikipedia.org/wiki/Oriental%20Shorthair
Oriental Shorthair
The Oriental Shorthair is a breed of domestic cat that is developed from and closely related to the Siamese cat. It maintains the modern Siamese head and body type but appears in a wide range of coat colors and patterns. Like the Siamese, Orientals have almond-shaped eyes, a triangular head shape, large ears, and an elongated, slender, and muscular body. Their personalities are also very similar. Orientals are social, intelligent, and many are rather vocal. They often remain playful into adulthood, with many enjoying playing fetch. Despite their slender appearance, they are athletic and can leap into high places. They prefer to live in pairs or groups and also seek human interaction. Unlike the breed's blue-eyed forebear, Orientals are usually green-eyed. The Oriental Longhair differs only with respect to coat length. While the breed's genetic roots are ultimately in Thailand, it was formally developed in the US by a number of New York area cat breeders, led by Vicky and Peter Markstein (PetMark cattery), who in 1971–72 were intrigued by lynx patterned and solid colored cats of a Siamese body type at Angela Sayers' Solitaire Cattery and at Patricia White's. These were based on solid-colored cats with the body of a Siamese, bred by Baroness von Ullmann over the 1950s. An "Oriental Shorthairs International" was formed in 1973, and Peter Markstein presented the breed to the 1976 Annual Cat Fanciers Association, at the same time as the Havana Brown was presented by Joe Bittaker. In 1977 the Oriental Shorthair was accepted by the Cat Fanciers' Association for championship competition. Since 1997, it has also received recognition from the GCCF and various other cat breeding organizations. The breed is among the most popular among CFA members. Description Breed The Oriental Shorthair is a member of the Siamese family of breeds, and can be found in various solid colors, and patterns such as smoke, shaded, parti-color/tortoiseshell, tabby and bicolor (any of the above, with white). Not all variants are acceptable to all organizations that recognize the breed. Characteristics Conforming Oriental Shorthairs, like any of the Siamese type, have almond-shaped eyes and a wedge-shaped head with large ears. Their bodies are typically "sleek" but muscular. The long-haired version of the breed, the Oriental Longhair (recognized since 1995 by CFA), simply carries a pair of the recessive long hair genes. Personality Oriental Shorthair cats have high locomotion levels and are natural conversationalists. The adult Oriental Shorthair cats are considered to be active, curious and interested about surroundings by breeders and veterinarians. Size The Oriental Shorthair is a medium size cat. On average, males weigh from , with females weighing less than . Health History and recognition According to the CFA, "Orientals represent a diverse group of cats that have their foundation in the Siamese breed." The Siamese, in both pointed and solid colors, was imported to the UK from Siam (today, Thailand) in the later half of the 1800s, and from there spread widely, becoming one of the most popular breeds. The gene that causes the color to be restricted to the points is a recessive gene; therefore, the general population of the cats of Siam were largely self-colored (solid). When the cats from Siam were bred, the pointed cats were eventually registered as Siamese, while the others were referred to as "non-blue eyed Siamese" or "foreign shorthair". 
Other breeds that were developed from the landrace cats of Thailand include the Havana Brown (which some breed registries classify as simply an Oriental Shorthair variant) and the Korat. The Oriental Shorthair was accepted as an actual breed for championship competition in the US-headquartered CFA in 1977. In 1985, the CFA recognized the bicolor variant. Two decades later, the breed was finally recognized by the UK-based Governing Council of the Cat Fancy (GCCF) in 1997, but with some differences from CFA on coat conformation. GCCF publishes separate breed registration policies for a number of specific-coat Oriental Shorthair variants today. The Germany-based World Cat Federation (WCF) recognizes the breed, but with color requirements that are comparatively unrestrictive in some way, but notably opposed to white ("All colours and patterns without white and without points are recognized.") In the Cat Fanciers' Association (CFA), some of the point-colored offspring from Oriental Shorthair parents are considered "any other variety" (AOV), but depending on the pedigree, some may compete as Colorpoints. In The International Cat Association (TICA) and many other cat fancier and breeder associations, these cats are considered to be, and compete as, Siamese, when recognized at all. Patterns In total, over 300 coat color and pattern combinations are possible under CFA conformation rules. The basic types include: Solid: The coat color is uniform across the entire cat. Each hair shaft should be the same color from root to tip, and be free of banding and tipping. CFA-acceptable colors for this breed are red, cream, ebony, blue, lavender, cinnamon, fawn and white. The corresponding GCCF colors are (respectively) red, cream, brown, blue, lilac, chocolate and apricot (white is not permitted as the base color in GCCF, and WCF does not permit white at all). Shaded pattern: Will have a white undercoat with only the tips being colored CFA and GCCF recognize this. Other breed registries call this the chinchilla pattern. Smoke pattern: The hair shaft will have a narrow band of white at the base which can only be seen when the hair is parted. This white undercoat to any of the above solid colors (except white, of course) is provided by an interaction of two different genes. CFA and GCCF recognize this. Parti-color: Has patches of red and/or cream, which may be well-defined blotches of color, or marbled. This color pattern is referred to as tortoiseshell (or "tortie" for short) in non-pedigreed cats by CFA, and this alternative term is used by GCCF and organizations for pedigreed cats as well. Tabby coat pattern: Recognized by GCCF and CFA. Each hair shaft should have a band of color around the middle of the hair shaft. GCCF recognizes four variants of tabby: classic, mackerel, spotted and ticked. Bicolor pattern: Recognized by GCCF and CFA. The bicolor pattern is created by the addition of a white spotting gene to any of the other accepted colors/patterns. The cat will have white on its belly, on the legs/paws, and in an inverted "V" on the face. WCF does not permit this variant, as it is opposed to white in this breed. In popular culture In scientific illustrator Jey Parks' 2017 book Star Trek Cats, Star Treks Spock is depicted as an Oriental Shorthair. In Joann Sfar's comic The Rabbi's Cat, the eponymous cat has the physical features of an Oriental Shorthair.
Biology and health sciences
Cats
Animals
639754
https://en.wikipedia.org/wiki/Turkish%20Angora
Turkish Angora
The Turkish Angora (Turkish: Ankara kedisi, 'Ankara cat') is a breed of domestic cat. Turkish Angoras are one of the ancient, natural breeds of cat, having originated in central Anatolia (Ankara Province in modern-day Turkey). The breed has been documented as early as the 17th century. Outside of the United States, the breed is usually referred to simply as the Angora or Ankara cat. These cats have slender and elegant bodies. History Like all domestic cats, Turkish Angoras descended from the African wildcat (Felis lybica). Their ancestors were among the cats that were first domesticated in the Fertile Crescent. Longhaired cats were imported to Britain and France from Asia Minor, Persia and Russia as early as the late 16th century. The Turkish Angora was recognised as a distinct breed in Europe by the 17th century. However, there is a strong connection between Angoras and Persians. Charles Catton, in the 1788 book Animals Drawn from Nature and Engraved in Aqua-tinta, gave "Persian cat" and "Angora cat" as alternative names for the same breed, and there is a great deal of similarity between the Angora and the Persian cat. In 1903, Frances Simpson discussed the distinction between Angoras and Persians in The Book of the Cat. The Angora of the 20th century was used for improvement of the Persian coat, but the type has always been divergent from the Persian, particularly as the increasingly flat-faced show Persian has been developed in the last few decades. In the early 20th century, Atatürk Forest Farm and Zoo began a breeding program to protect and preserve pure white Angoras. The zoo particularly prized odd-eyed specimens; however, the cats were chosen only for their colour; no other criterion was applied. The Turkish Angora, which was brought to Canada in 1963, was accepted as a championship pedigreed breed in 1973 by the Cat Fanciers' Association. However, until 1978 only white Angoras were recognised. Today, all North American registries accept the Turkish Angora in many colours and patterns. While their numbers are still relatively small, the gene pool is continually growing. Characteristics Appearance The Angora has a silky coat that covers a long muscular body. Though it is known for a shimmery white coat and plumed tail, Turkish Angoras can display a variety of coat colours, with the only disallowed coats being chocolate, lavender, or colourpoint. The Angora's head is small to medium in size, with a smooth wedge. The body is long and slender, with fine boning. The eyes are large and almond shaped; their colour may be blue, green, amber or yellow, and odd-eyed (heterochromatic) cats also occur. The ears are pointed, large and wide-set. The profile forms two straight planes. The plumed tail is often carried upright, perpendicular to the back. Behaviour Turkish Angoras are playful, intelligent, athletic and involved. They bond with humans, but often select a particular member of the family to be their constant companion, of whom they are very protective. Health The gene responsible for the blue eyes and white coat of the Angora can cause deafness. There have been reports of kittens suffering from ataxia, as well as adult cats with hypertrophic cardiomyopathy. Genetic variations Breeders in Turkey feel that the fine-boned version of this natural breed is unrepresentative of the true Turkish cats, which are much sturdier. American "Turkish" Angoras have only a minimal remnant of the original Atatürk Forest Farm and Zoo DNA, and are only "purebred on paper".
A genetic study of pedigree cat breeds (using DNA taken from pedigreed cats in the U.S. and Europe) and worldwide random-bred populations showed the Turkish Van as a distinct population from the Turkish Angora despite their geographical association. The Turkish Angora was grouped with the pedigreed Egyptian Mau and random-bred Tunisian cats. Turkish random-bred cats were grouped with Israeli random-bred cats, while the Turkish Van was grouped with Egyptian random-bred cats. However, the UC Davis studied only American cat fancy registered Angoras rather than the "true" Turkish Angora or Ankara Kedisi directly from Turkey, and especially from the Ankara Zoo. A genetic study published in 2012 included a few cats imported from Turkey. The study found that "Turkish-versus USA-originating Turkish Angoras ... are resolved as separate breed populations." The American Turkish Angoras are categorised as descendants of European random-bred cats, and cats imported from Turkey "were assigned to the Eastern Mediterranean" group.
Biology and health sciences
Cats
Animals
639790
https://en.wikipedia.org/wiki/Discovery%20of%20cosmic%20microwave%20background%20radiation
Discovery of cosmic microwave background radiation
The discovery of cosmic microwave background radiation constitutes a major development in modern physical cosmology. In 1964, US physicist Arno Allan Penzias and radio-astronomer Robert Woodrow Wilson discovered the cosmic microwave background (CMB), estimating its temperature as 3.5 K, as they experimented with the Holmdel Horn Antenna. The new measurements were accepted as important evidence for a hot early Universe (big bang theory) and as evidence against the rival steady state theory as theoretical work around 1950 showed the need for a CMB for consistency with the simplest relativistic universe models. In 1978, Penzias and Wilson were awarded the Nobel Prize for Physics for their joint measurement. There had been a prior measurement of the cosmic background radiation (CMB) by Andrew McKellar in 1941 at an effective temperature of 2.3 K using CN stellar absorption lines observed by W. S. Adams. Although no reference to the CMB is made by McKellar, it was not until much later after the Penzias and Wilson measurements that the significance of this measurement was understood. History By the middle of the 20th century, cosmologists had developed two different theories to explain the creation of the universe. Some supported the steady-state theory, which states that the universe has always existed and will continue to survive without noticeable change. Others believed in the Big Bang theory, which states that the universe was created in a massive explosion-like event billions of years ago (later determined to be approximately 13.8 billion years). In 1941, Andrew McKellar used W. S. Adams' spectroscopic observations of CN absorption lines in the spectrum of a B type star to measure a blackbody background temperature of 2.3 K. McKellar referred to his detection as a "'rotational' temperature of interstellar molecules", without reference to a cosmological interpretation, stating that the temperature "will have its own, perhaps limited, significance". Over two decades later, working at a Bell Telephone Laboratories facility atop Crawford Hill in Holmdel, New Jersey, in 1964, Arno Penzias and Robert Wilson were experimenting with a supersensitive, 6 meter (20 ft) horn antenna originally built to detect radio waves bounced off Echo balloon satellites. To measure these faint radio waves, they had to eliminate all recognizable interference from their receiver. They removed the effects of radar and radio broadcasting, and suppressed interference from the heat in the receiver itself by cooling it with liquid helium to −269 °C, only 4 K above absolute zero. When Penzias and Wilson reduced their data, they found a low, steady, mysterious noise that persisted in their receiver. This residual noise was 100 times more intense than they had expected, was evenly spread over the sky, and was present day and night. They were certain that the radiation they detected on a wavelength of 7.35 centimeters did not come from the Earth, the Sun, or our galaxy. After thoroughly checking their equipment, removing some pigeons nesting in the antenna and cleaning out the accumulated droppings, the noise remained. Both concluded that this noise was coming from outside our own galaxy—although they were not aware of any radio source that would account for it. At that same time, Robert H. Dicke, Jim Peebles, and David Wilkinson, astrophysicists at Princeton University just away, were preparing to search for microwave radiation in this region of the spectrum. 
Dicke and his colleagues reasoned that the Big Bang must have scattered not only the matter that condensed into galaxies, but also must have released a tremendous blast of radiation. With the proper instrumentation, this radiation should be detectable, albeit as microwaves, due to a massive redshift. When his friend Bernard F. Burke, a professor of physics at MIT, told Penzias about a preprint paper he had seen by Jim Peebles on the possibility of finding radiation left over from an explosion that filled the universe at the beginning of its existence, Penzias and Wilson began to realize the significance of what they believed was a new discovery. The characteristics of the radiation detected by Penzias and Wilson fit exactly the radiation predicted by Robert H. Dicke and his colleagues at Princeton University. Penzias called Dicke at Princeton, who immediately sent him a copy of the still-unpublished Peebles paper. Penzias read the paper and called Dicke again and invited him to Bell Labs to look at the horn antenna and listen to the background noise. Dicke, Peebles, Wilkinson and P. G. Roll interpreted this radiation as a signature of the Big Bang. To avoid potential conflict, they decided to publish their results jointly. Two notes were rushed to the Astrophysical Journal Letters. In the first, Dicke and his associates outlined the importance of cosmic background radiation as substantiation of the Big Bang Theory. In a second note, jointly signed by Penzias and Wilson titled, "A Measurement of Excess Antenna Temperature at 4080 Megacycles per Second," they reported the existence of a 3.5 K residual background noise, remaining after accounting for a sky absorption component of 2.3 K and a 0.9 K instrumental component, and attributed a "possible explanation" as that given by Dicke in his companion letter. In 1978, Penzias and Wilson were awarded the Nobel Prize for Physics for their joint detection. They shared the prize with Pyotr Kapitsa, who won it for unrelated work. In 2019, Jim Peebles was also awarded the Nobel Prize for Physics, “for theoretical discoveries in physical cosmology”. Bibliography
Physical sciences
Physical cosmology
Astronomy
639924
https://en.wikipedia.org/wiki/Diet%20%28nutrition%29
Diet (nutrition)
In nutrition, diet is the sum of food consumed by a person or other organism. The word diet often implies the use of specific intake of nutrition for health or weight-management reasons (with the two often being related). Although humans are omnivores, each culture and each person holds some food preferences or some food taboos. This may be due to personal tastes or ethical reasons. Individual dietary choices may be more or less healthy. Complete nutrition requires ingestion and absorption of vitamins, minerals, essential amino acids from protein and essential fatty acids from fat-containing food, also food energy in the form of carbohydrate, protein, and fat. Dietary habits and choices play a significant role in the quality of life, health and longevity. Health A healthy diet can improve and maintain health, which can include aspects of mental and physical health. Specific diets, such as the DASH diet, can be used in treatment and management of chronic conditions. A 2024 review highlighted that bioactive compounds found in Mediterranean diet components (such as olive, grape, garlic, rosemary, and saffron) exhibit properties that may contribute to cardiovascular health. Dietary recommendations exist for many different countries, and they usually emphasise a balanced diet which is culturally appropriate. These recommendation are different from dietary reference values which provide information about the prevention of nutrient deficiencies. Dietary choices Exclusionary diets are diets with certain groups or specific types of food avoided, either due to health considerations or by choice. Many do not eat food from animal sources to varying degrees (e.g. flexitarianism, pescetarianism, vegetarianism, and veganism) for health reasons, issues surrounding morality, or to reduce their personal impact on the environment (e.g. environmental vegetarianism). People on a balanced vegetarian or vegan diet can obtain adequate nutrition, but may need to specifically focus on consuming specific nutrients, such as protein, iron, calcium, zinc, and vitamin B12. Raw foodism and intuitive eating are other approaches to dietary choices. Education, income, local availability, and mental health are all major factors for dietary choices. Weight management A particular diet may be chosen to promote weight loss or weight gain. Changing a person's dietary intake, or "going on a diet", can change the energy balance, and increase or decrease the amount of fat stored by the body. The terms "healthy diet" and "diet for weight management" (dieting) are often related, as the two promote healthy weight management. If a person is overweight or obese, changing to a diet and lifestyle that allows them to burn more calories than they consume may improve their overall health, possibly preventing diseases that are attributed in part to weight, including heart disease and diabetes. Within the past 10 years, obesity rates have increased by almost 10%. Conversely, if a person is underweight due to illness or malnutrition, they may change their diet to promote weight gain. Intentional changes in weight, though often beneficial, can be potentially harmful to the body if they occur too rapidly. Unintentional rapid weight change can be caused by the body's reaction to some medications, or may be a sign of major medical problems including thyroid issues and cancer among other diseases. Eating disorders An eating disorder is a mental disorder that interferes with normal food consumption. 
It is defined by abnormal eating habits, and thoughts about food that may involve eating much more or much less than needed. Common eating disorders include anorexia nervosa, bulimia nervosa, and binge-eating disorder. Eating disorders affect people of every gender, age, socioeconomic status, and body size. Environmental dietary choices Agriculture is a driver of environmental degradation, such as biodiversity loss, climate change, desertification, soil degradation and pollution. The food system as a whole – including refrigeration, food processing, packaging, and transport – accounts for around one-quarter of greenhouse gas emissions. More sustainable dietary choices can be made to reduce the impact of the food system on the environment. These choices may involve reducing consumption of meat and dairy products and instead eating more plant-based foods, and eating foods grown through sustainable farming practices. Religious and cultural dietary choices Some cultures and religions have restrictions concerning what foods are acceptable in their diet. For example, only Kosher foods are permitted in Judaism, and Halal foods in Islam. Although Buddhists are generally vegetarians, the practice varies and meat-eating may be permitted depending on the sects. In Hinduism, vegetarianism is the ideal. Jains are strictly vegetarian and in addition to that the consumption of any roots (ex: potatoes, carrots) is not permitted. In Christianity there is no restriction on the kinds of animals that can be eaten, though various groups within Christianity have practiced specific dietary restrictions for various reasons. The most common diets used by Christians are Mediterranean and vegetarianism. Diet classification table
Biology and health sciences
Health and fitness
null
640249
https://en.wikipedia.org/wiki/Saddle%20point
Saddle point
In mathematics, a saddle point or minimax point is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero (a critical point), but which is not a local extremum of the function. An example of a saddle point is when there is a critical point with a relative minimum along one axial direction (between peaks) and a relative maximum along the crossing axis. However, a saddle point need not be in this form. For example, the function has a critical point at that is a saddle point since it is neither a relative maximum nor relative minimum, but it does not have a relative maximum or relative minimum in the -direction. The name derives from the fact that the prototypical example in two dimensions is a surface that curves up in one direction, and curves down in a different direction, resembling a riding saddle. In terms of contour lines, a saddle point in two dimensions gives rise to a contour map with a pair of lines intersecting at the point. Such intersections are rare in actual ordnance survey maps, as the height of the saddle point is unlikely to coincide with the integer multiples used in such maps. Instead, the saddle point appears as a blank space in the middle of four sets of contour lines that approach and veer away from it. For a basic saddle point, these sets occur in pairs, with an opposing high pair and an opposing low pair positioned in orthogonal directions. The critical contour lines generally do not have to intersect orthogonally. Mathematical discussion A simple criterion for checking if a given stationary point of a real-valued function F(x,y) of two real variables is a saddle point is to compute the function's Hessian matrix at that point: if the Hessian is indefinite, then that point is a saddle point. For example, the Hessian matrix of the function at the stationary point is the matrix which is indefinite. Therefore, this point is a saddle point. This criterion gives only a sufficient condition. For example, the point is a saddle point for the function but the Hessian matrix of this function at the origin is the null matrix, which is not indefinite. In the most general terms, a saddle point for a smooth function (whose graph is a curve, surface or hypersurface) is a stationary point such that the curve/surface/etc. in the neighborhood of that point is not entirely on any side of the tangent space at that point. In a domain of one dimension, a saddle point is a point which is both a stationary point and a point of inflection. Since it is a point of inflection, it is not a local extremum. Saddle surface A saddle surface is a smooth surface containing one or more saddle points. Classical examples of two-dimensional saddle surfaces in the Euclidean space are second order surfaces, the hyperbolic paraboloid (which is often referred to as "the saddle surface" or "the standard saddle surface") and the hyperboloid of one sheet. The Pringles potato chip or crisp is an everyday example of a hyperbolic paraboloid shape. Saddle surfaces have negative Gaussian curvature which distinguish them from convex/elliptical surfaces which have positive Gaussian curvature. A classical third-order saddle surface is the monkey saddle. Examples In a two-player zero sum game defined on a continuous space, the equilibrium point is a saddle point. For a second-order linear autonomous system, a critical point is a saddle point if the characteristic equation has one positive and one negative real eigenvalue. 
In optimization subject to equality constraints, the first-order conditions describe a saddle point of the Lagrangian. Other uses In dynamical systems, if the dynamic is given by a differentiable map f, then a point is hyperbolic if and only if the differential of f^n (the n-th iterate of f, where n is the period of the point) has no eigenvalue on the (complex) unit circle when computed at the point. Then a saddle point is a hyperbolic periodic point whose stable and unstable manifolds each have a dimension that is not zero. A saddle point of a matrix is an element which is both the largest element in its column and the smallest element in its row.
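The Hessian test described in this article is easy to check numerically. The sketch below, in Python with NumPy, is illustrative only: the example function f(x, y) = x^2 - y^2 (whose Hessian at the origin is diag(2, -2)) and the sample matrix are assumptions chosen for this sketch rather than examples taken from the text. The second helper illustrates the matrix sense of a saddle point mentioned at the end of this section.

import numpy as np

def classify_stationary_point(hessian):
    """Classify a stationary point from its Hessian: an indefinite Hessian
    (eigenvalues of both signs) is sufficient for a saddle point."""
    eigenvalues = np.linalg.eigvalsh(hessian)
    if (eigenvalues > 0).any() and (eigenvalues < 0).any():
        return "saddle point (Hessian indefinite)"
    if (eigenvalues > 0).all():
        return "local minimum"
    if (eigenvalues < 0).all():
        return "local maximum"
    return "inconclusive (Hessian is semi-definite)"

# Hessian of the assumed example f(x, y) = x**2 - y**2 at its stationary point (0, 0)
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(classify_stationary_point(H))  # saddle point (Hessian indefinite)

def matrix_saddle_points(a):
    """Entries that are the largest in their column and the smallest in their row."""
    return [(i, j) for i in range(a.shape[0]) for j in range(a.shape[1])
            if a[i, j] == a[:, j].max() and a[i, j] == a[i, :].min()]

A = np.array([[3, 4, 5],
              [1, 2, 9],
              [0, 1, 8]])
print(matrix_saddle_points(A))  # [(0, 0)]: 3 is the largest in its column, smallest in its row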
Mathematics
Functions: General
null
640746
https://en.wikipedia.org/wiki/Secant%20method
Secant method
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method, so it is considered a quasi-Newton method. Historically, it is an evolution of the method of false position, which predates Newton's method by over 3000 years. The method The secant method is an iterative numerical method for finding a zero of a function f. Given two initial values x_0 and x_1, the method proceeds according to the recurrence relation x_n = x_(n-1) - f(x_(n-1)) * (x_(n-1) - x_(n-2)) / (f(x_(n-1)) - f(x_(n-2))). This is a nonlinear second-order recurrence that is well-defined given f and the two initial values x_0 and x_1. Ideally, the initial values should be chosen close to the desired zero. Derivation of the method Starting with initial values x_0 and x_1, we construct a line through the points (x_0, f(x_0)) and (x_1, f(x_1)), as shown in the picture above. In slope–intercept form, the equation of this line is y = (f(x_1) - f(x_0)) / (x_1 - x_0) * (x - x_1) + f(x_1). The root of this linear function, that is, the value of x such that y = 0, is x = x_1 - f(x_1) * (x_1 - x_0) / (f(x_1) - f(x_0)). We then use this new value of x as x_2 and repeat the process, using x_1 and x_2 instead of x_0 and x_1. We continue this process, solving for x_3, x_4, etc., until we reach a sufficiently high level of precision (a sufficiently small difference between x_n and x_(n-1)). Convergence The iterates x_n of the secant method converge to a root of f if the initial values x_0 and x_1 are sufficiently close to the root and f is well-behaved. When f is twice continuously differentiable and the root in question is a simple root, i.e., it has multiplicity 1, the order of convergence is the golden ratio φ = (1 + √5)/2 ≈ 1.618. This convergence is superlinear but subquadratic. If the initial values are not close enough to the root or f is not well-behaved, then there is no guarantee that the secant method converges at all. There is no general definition of "close enough", but the criterion for convergence has to do with how "wiggly" the function is on the interval between the initial values. For example, if f is differentiable on that interval and there is a point where f' = 0 on the interval, then the algorithm may not converge. Comparison with other root-finding methods The secant method does not require or guarantee that the root remains bracketed by sequential iterates, like the bisection method does, and hence it does not always converge. The false position method (or regula falsi) uses the same formula as the secant method. However, it does not apply the formula on x_(n-1) and x_(n-2), like the secant method, but on x_(n-1) and on the last iterate x_k such that f(x_k) and f(x_(n-1)) have a different sign. This means that the false position method always converges; however, only with a linear order of convergence. Bracketing with a super-linear order of convergence, as in the secant method, can be attained with improvements to the false position method (see Regula falsi § Improvements in regula falsi) such as the ITP method or the Illinois method. The recurrence formula of the secant method can be derived from the formula for Newton's method, x_n = x_(n-1) - f(x_(n-1)) / f'(x_(n-1)), by using the finite-difference approximation, valid for a small step x_(n-1) - x_(n-2): f'(x_(n-1)) ≈ (f(x_(n-1)) - f(x_(n-2))) / (x_(n-1) - x_(n-2)). The secant method can thus be interpreted as a method in which the derivative is replaced by an approximation and is therefore a quasi-Newton method. If we compare Newton's method with the secant method, we see that Newton's method converges faster (order 2 against order the golden ratio φ ≈ 1.6). However, Newton's method requires the evaluation of both f and its derivative f' at every step, while the secant method only requires the evaluation of f. Therefore, the secant method may sometimes be faster in practice.
For instance, if we assume that evaluating f takes as much time as evaluating its derivative f' and we neglect all other costs, we can do two steps of the secant method (decreasing the logarithm of the error by a factor φ² ≈ 2.6) for the same cost as one step of Newton's method (decreasing the logarithm of the error by a factor of 2), so the secant method is faster. In higher dimensions, the full set of partial derivatives required for Newton's method, that is, the Jacobian matrix, may become much more expensive to calculate than the function itself. If, however, we consider parallel processing for the evaluation of the derivative or derivatives, Newton's method can be faster in clock time though still costing more computational operations overall. Generalization Broyden's method is a generalization of the secant method to more than one dimension. The following graph shows the function f in red and the last secant line in bold blue. In the graph, the x intercept of the secant line seems to be a good approximation of the root of f. Computational example Below, the secant method is implemented in the Python programming language. It is then applied to find a root of the function f(x) = x^2 - 612, with initial points x_0 = 10 and x_1 = 30.

def secant_method(f, x0, x1, iterations):
    """Return the root calculated using the secant method."""
    for i in range(iterations):
        x2 = x1 - f(x1) * (x1 - x0) / float(f(x1) - f(x0))
        x0, x1 = x1, x2
        # Apply a stopping criterion here (see below)
    return x2

def f_example(x):
    return x ** 2 - 612

root = secant_method(f_example, 10, 30, 5)
print(f"Root: {root}")  # Root: 24.738633748750722

It is very important to have a good stopping criterion above, otherwise, due to limited numerical precision of floating point numbers, the algorithm can return inaccurate results if running for too many iterations. For example, the loop above can stop when one of these is reached first: abs(x0 - x1) < tol, or abs(x0/x1 - 1) < tol, or abs(f(x1)) < tol.
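As a complement to the fixed-iteration version above, the sketch below adds the stopping criteria just mentioned. It is an illustrative variant written for this presentation (the function name, tolerance, and iteration cap are arbitrary choices), not code from the original article.

def secant_method_tol(f, x0, x1, tol=1e-12, max_iterations=100):
    """Secant method that stops when successive iterates agree to within
    `tol`, when |f(x1)| < tol, or after `max_iterations` steps."""
    for _ in range(max_iterations):
        denominator = f(x1) - f(x0)
        if denominator == 0:  # secant line is horizontal; cannot continue
            break
        x2 = x1 - f(x1) * (x1 - x0) / denominator
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol or abs(f(x1)) < tol:
            break
    return x1

root = secant_method_tol(lambda x: x ** 2 - 612, 10, 30)
print(root)         # approximately 24.7386337537...
print(root * root)  # approximately 612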
Mathematics
Real analysis
null
640755
https://en.wikipedia.org/wiki/Gun%20barrel
Gun barrel
A gun barrel is a crucial part of gun-type weapons such as small firearms, artillery pieces, and air guns. It is the straight shooting tube, usually made of rigid high-strength metal, through which a contained rapid expansion of high-pressure gas(es) is used to propel a projectile out of the front end (muzzle) at a high velocity. The hollow interior of the barrel is called the bore, and the diameter of the bore is called its calibre, usually measured in inches or millimetres. The first firearms were made at a time when metallurgy was not advanced enough to cast tubes capable of withstanding the explosive forces of early cannons, so the pipe (often built from staves of metal) needed to be braced periodically along its length for structural reinforcement, producing an appearance somewhat reminiscent of storage barrels being stacked together, hence the English name. History Gun barrels are usually made of some type of metal or metal alloy. However, during the late Tang dynasty, Chinese inventors discovered gunpowder, and used bamboo, which has a strong, naturally tubular stalk and is cheaper to obtain and process, as the first barrels in gunpowder projectile weapons such as fire lances. The Chinese were also the first to master cast-iron cannon barrels, and used the technology to make the earliest infantry firearms — the hand cannons. Early European guns were made of wrought iron, usually with several strengthening bands of the metal wrapped around circular wrought iron rings and then welded into a hollow cylinder. Bronze and brass were favoured by gunsmiths, largely because of their ease of casting and their resistance to the corrosive effects of the combustion of gunpowder or salt water when used on naval vessels. Early firearms were muzzleloaders, with the gunpowder and then the shot loaded from the front end (muzzle) of the barrel, and were capable of only a low rate of fire due to the cumbersome loading process. The later-invented breech-loading designs provided a higher rate of fire, but early breechloaders lacked an effective way of sealing the escaping gases that leaked from the back end (breech) of the barrel, reducing the available muzzle velocity. During the 19th century, effective breechblocks were invented that sealed a breechloader against the escape of propellant gases. Early cannon barrels were very thick for their caliber. This was because manufacturing defects such as air bubbles trapped in the metal were common at that time, and played key factors in many gun explosions; these defects made the barrel too weak to withstand the pressures of firing, causing it to fail and fragment explosively. Construction A gun barrel must be able to hold in the expanding gas produced by the propellants to ensure that optimum muzzle velocity is attained by the projectile as it is being pushed out. If the barrel material cannot cope with the pressure within the bore, the barrel itself might suffer catastrophic failure and explode, which will not only destroy the gun but also present a life-threatening danger to people nearby. Modern small arms barrels are made of carbon steel or stainless steel materials known and tested to withstand the pressures involved. Artillery pieces are made by various techniques providing reliably sufficient strength. Fluting Fluting is the removal of material from a cylindrical surface, usually creating rounded grooves, for the purpose of reducing weight. 
This is most often done to the exterior surface of a rifle barrel, though it may also be applied to the cylinder of a revolver or the bolt of a bolt-action rifle. Most flutings on rifle barrels and revolver cylinders are straight, though helical flutings can be seen on rifle bolts and occasionally also on rifle barrels. While the main purpose of fluting is to reduce weight and improve portability, when adequately done it can retain structural strength and rigidity and increase the overall specific strength. Fluting also increases the surface-to-volume ratio and makes the barrel more efficient to cool after firing, though the reduced material mass also means the barrel heats up more easily during firing. Composite barrels A composite barrel is a firearm barrel that has been shaved down to a thinner profile, with an exterior sleeve slipped over and fused to it to improve rigidity, weight and cooling. The most common form of composite barrel uses a carbon fiber sleeve, but there are proprietary examples such as the Teludyne Tech Straitjacket. They are seldom used outside sports and competition shooting. Mounting A barrel can be fixed to the receiver using action threads or rivets. Depending on the construction, different types of gun barrels are used: common fixed barrels for firearms; interchangeable barrels for firearms; quick-change barrels for machine guns; and replacement (spare) barrels for artillery guns. Components Chamber The chamber is the cavity at the back end of a breech-loading gun's barrel where the cartridge is inserted in position ready to be fired. In most firearms (rifles, shotguns, machine guns and pistols), the chamber is an integral part of the barrel, often made by simply reaming the rear bore of a barrel blank, with a single chamber within a single barrel. In revolvers, the chamber is a component of the gun's cylinder and completely separate from the barrel, with a single cylinder having multiple chambers that are rotated in turn into alignment with the barrel in anticipation of being fired. Structurally, the chamber consists of the body, shoulder and neck, the contour of which closely corresponds to the casing shape of the cartridge it is designed to hold. The rear opening of the chamber is the breech of the whole barrel, which is sealed tight from behind by the bolt, making the front direction the path of least resistance during firing. When the cartridge's primer is struck by the firing pin, the propellant is ignited and deflagrates, generating high-pressure gas expansion within the cartridge case. However, the chamber (closed from behind by the bolt) restrains the cartridge case (or shell for shotguns) from moving, allowing the bullet (or shot/slug in shotguns) to separate cleanly from the casing and be propelled forward along the barrel to exit out of the front (muzzle) end as a flying projectile. Chambering a gun is the process of loading a cartridge into the gun's chamber, either manually as in single loading, or via operating the weapon's own action as in pump action, lever action, bolt action or self-loading actions. In the case of an air gun, a pellet (or slug) itself has no casing to be retained and is entirely inserted into the chamber (often called "seating" or "loading" the pellet, rather than "chambering" it) before a mechanically pressurized gas is released behind the pellet and propels it forward, meaning that an air gun's chamber is functionally equivalent to the freebore portion of a firearm barrel.
In the context of firearms design, manufacturing and modification, the word "chambering" has a different meaning, and refers to fitting a weapon's chamber specifically to fire a particular caliber or model of cartridge. Bore The bore is the hollow internal lumen of the barrel, and takes up a vast majority portion of the barrel length. It is the part of the barrel where the projectile (bullet, shot, or slug) is located prior to firing and where it gains speed and kinetic energy during the firing process. The projectile's status of motion while travelling down the bore is referred to as its internal ballistics. Most modern firearms (except muskets, shotguns, most tank guns, and some artillery pieces) and air guns (except some BB guns) have helical grooves called riflings machined into the bore wall. When shooting, a rifled bore imparts spin to the projectile about its longitudinal axis, which gyroscopically stabilizes the projectile's flight attitude and trajectory after its exit from the barrel (i.e. the external ballistics). Any gun without riflings in the bore is called a smoothbore gun. When a firearm cartridge is chambered, its casing occupies the chamber but its bullet actually protrudes beyond the chamber into the posterior end of the bore. Even in a rifled bore, this short rear section is without rifling, and allows the bullet an initial "run-up" to build up momentum before encountering riflings during shooting. The most posterior part of this unrifled section is called a freebore, and is usually cylindrical. The portion of the unrifled bore immediately front of the freebore, called the leade, starts to taper slightly and guides the bullet towards the area where the riflingless bore transitions into fully rifled bore. Together they form the throat region, where the riflings impactfully "bite" into the moving bullet during shooting. The throat is subjected to the greatest thermomechanical stress and therefore suffers wear the fastest. Throat erosion is often the main determining factor of a gun's barrel life. Muzzle The muzzle is the front end of a barrel from which the projectile will exit. Precise machining of the muzzle is crucial to accuracy, because it is the last point of contact between the barrel and the projectile. If inconsistent gaps exist between the muzzle and the projectile, escaping propellant gases may spread unevenly and deflect the projectile from its intended path (see transitional ballistics). The muzzle can also be threaded on the outside to allow the attachment of different accessory devices. In rifled barrels, the contour of a muzzle is designed to keep the rifling safe from damage by intruding foreign objects, so the front ends of the rifling grooves are commonly protected behind a recessed crown, which also serves to modulate the even expansion of the propellant gases. The crown itself is often recessed from the outside rim of the muzzle to avoid accidental damage from collision with the surrounding environment. In smooth bore barrels firing multiple sub-projectiles (such as shotgun shot), the bore at the muzzle end might have a tapered constriction called choke to shape the scatter pattern for better range and accuracy. Chokes are implemented as either interchangeable screw-in chokes for particular applications, or as fixed permanent chokes integral to the barrel. During firing, a bright flash of light known as a muzzle flash is often seen at the muzzle. 
This flash is produced both by superheated propellant gases radiating energy as they expand (primary flash) and by incompletely combusted propellant residues reacting vigorously with the fresh supply of ambient air upon escaping the barrel (secondary flash). The size of the flash depends on factors such as barrel length (shorter barrels allow less time for complete combustion, hence more unburnt powder), and the type (fast- vs. slow-burning) and amount of propellant loaded in the cartridge (a higher total amount likely means more unburnt residue). Flash suppressors or muzzle shrouds can be attached to the muzzle of the weapon to either diminish or conceal the flash. The rapid expansion of propellant gases at the muzzle during firing also produces a powerful shockwave known as a muzzle blast. The audible component of this blast, also known as the muzzle report, is the loud "bang" sound of gunfire that can easily exceed 140 decibels and cause permanent hearing loss to the shooter and bystanders. The non-audible component of the blast is an infrasonic overpressure wave that can damage nearby fragile objects. Accessory devices such as muzzle brakes and muzzle boosters can be used to redirect the muzzle blast in order to counter recoil-induced muzzle rise or to assist the gas operation of the gun, and suppressors (and even muzzle shrouds) can be used to reduce the blast noise intensity felt by nearby personnel.
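To put the 140 decibel figure in perspective, the short sketch below converts a sound pressure level to the corresponding sound pressure. It assumes the standard 20 micropascal reference pressure for airborne sound, which is not stated in the text, and is illustrative only.

# Illustrative sketch: convert a sound pressure level (dB SPL) to RMS sound pressure,
# assuming the conventional 20 micropascal reference for airborne sound.
P_REF_PA = 20e-6  # reference pressure in pascals (assumption, not from the text)

def spl_to_pressure(db_spl):
    """Return the RMS sound pressure in pascals for a given level in dB SPL."""
    return P_REF_PA * 10 ** (db_spl / 20)

print(spl_to_pressure(140))  # ~200 Pa, the muzzle report level cited above
print(spl_to_pressure(120))  # ~20 Pa, i.e. ten times less pressure for 20 dB less

Each additional 20 dB thus corresponds to a tenfold increase in sound pressure, which is why levels above 140 dB are treated as an immediate hearing hazard.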
Technology
Mechanisms_2
null
640764
https://en.wikipedia.org/wiki/Barrel
Barrel
A barrel or cask is a hollow cylindrical container with a bulging center, longer than it is wide. They are traditionally made of wooden staves and bound by wooden or metal hoops. The word vat is often used for large containers for liquids, usually alcoholic beverages; a small barrel or cask is known as a keg. Barrels have a variety of uses, including storage of liquids such as water, oil, and alcohol. They are also employed to hold maturing beverages such as wine, cognac, armagnac, sherry, port, whiskey, beer, arrack, and sake. Other commodities once stored in wooden casks include gunpowder, meat, fish, paint, honey, nails, and tallow. Modern wooden barrels for wine-making are made of French common oak (Quercus robur), white oak (Quercus petraea) and American white oak (Quercus alba); more exotic is mizunara oak (Quercus crispula), and recently Oregon oak (Quercus garryana) has been used. Someone who makes traditional wooden barrels is called a cooper. Today, barrels and casks can also be made of aluminum, stainless steel, and different types of plastic, such as HDPE. Early casks were bound with wooden hoops, and in the 19th century these were gradually replaced by metal hoops that were stronger, more durable and took up less space. The barrel has also been used as a standard size of measure, referring to a set capacity or weight of a given commodity. For example, in the UK and Ireland, a barrel of beer refers to a quantity of , and is distinguished from other unit measurements, such as firkins, hogsheads, and kilderkins. Wine was shipped in barrels of . A barrel of oil, defined as , is still used as a measure of volume for oil, although oil is no longer shipped in barrels. The barrel has also come into use as a generic term for a wooden cask of any size. History An Egyptian wall-painting in the tomb of Hesy-Ra, dating to 2600 BC, shows a wooden tub used to measure wheat, constructed of staves bound together with wooden hoops. Another Egyptian tomb painting dating to 1900 BC shows a cooper and tubs made of staves in use at the grape harvest. Herodotus (c. 484 – c. 425 BC) allegedly reports the use of "palm-wood casks" in ancient Babylon, but some modern scholarship disputes this interpretation. In Europe, buckets and casks dating to 200 BC have been found preserved in the mud of lake villages. A lake village near Glastonbury dating to the late Iron Age has yielded one complete tub and a number of wooden staves. The Roman historian Pliny the Elder (died 79 AD) reported that cooperage in Europe originated with the Gauls in Alpine villages, who stored their beverages in wooden casks bound with hoops. Pliny identified three different types of coopers: ordinary coopers, wine coopers and coopers who made large casks. Large casks contain more and bigger staves and are correspondingly more difficult to assemble. Roman coopers tended to be independent tradesmen, passing their skills on to their sons. The Greek geographer Strabo (c. 64 BC to 24 AD) recorded that wooden pithoi (barrels or wine-jars) were lined with pitch to stop leakage and preserve the wine. Barrels were sometimes used for military purposes. Julius Caesar (100 to 44 BC) used catapults to hurl burning barrels of tar into besieged towns to start fires. The Romans also used empty barrels to make pontoon bridges to cross rivers. Empty casks were used to line the walls of shallow wells from at least Roman times. Such casks were found in 1897 during archaeological excavations of Roman Silchester in Britain.
They were made of Pyrenean silver fir and the staves were thick and featured grooves where the heads fitted. They had Roman numerals scratched on the surface of each stave to help with re-assembly. In Anglo-Saxon Britain, wooden barrels were used to store ale, butter, honey and mead. Drinking-containers were also made from small staves of oak, yew or pine. These items required considerable craftsmanship to hold liquids and might be bound with finely-worked precious metals. They were highly valued items and were sometimes buried with the dead as grave goods. Uses today Beverage maturing An "ageing barrel" is used to age wine; distilled spirits such as whiskey, brandy, or rum; beer; tabasco sauce; or (in smaller sizes) traditional balsamic vinegar. When a wine or spirit ages in a barrel, small amounts of oxygen are introduced as the barrel lets some air in (compare to microoxygenation where oxygen is deliberately added). Oxygen enters a barrel when water or alcohol is lost due to evaporation, a portion known as the "angels' share". In an environment with 100% relative humidity, very little water evaporates and so most of the loss is alcohol, a useful trick if one has a wine with very high proof. Most beverages are topped up from other barrels to prevent significant oxidation, although others such as vin jaune and sherry are not. Beverages aged in wooden barrels take on some of the compounds in the barrel, such as vanillin and wood tannins. The presence of these compounds depends on many factors, including the place of origin, how the staves were cut and dried, and the degree of "toast" applied during manufacture. Barrels used for aging are typically made of French or American oak, but chestnut and redwood are also used. Some Asian beverages (e.g., Japanese sake) use Japanese cedar, which imparts an unusual, minty-piney flavor. In Peru and Chile, a grape distillate named pisco is either aged in oak or in earthenware. Wines Some wines are fermented "on barrel", as opposed to in a neutral container like steel or wine-grade HDPE (high-density polyethylene) tanks. Wine can also be fermented in large wooden tanks, which—when open to the atmosphere—are called "open-tops". Other wooden cooperage for storing wine or spirits range from smaller barriques to huge casks, with either elliptical or round heads. The tastes yielded by French and American species of oak are slightly different, with French oak being subtler, while American oak gives stronger aromas. To retain the desired measure of oak influence, a winery will replace a certain percentage of its barrels every year, although this can vary from 5 to 100%. Some winemakers use "200% new oak", where the wine is put into new oak barrels twice during the aging process. Bulk wines are sometimes more cheaply flavored by soaking in oak chips or added commercial oak flavoring instead of being aged in a barrel because of the much lower cost. Sherry Sherry is stored in casks made of North American oak, which is slightly more porous than French or Spanish oak. The casks, or butts, are filled five-sixths full, leaving "the space of two fists" empty at the top to allow flor to develop on top of the wine. Sherry is also commonly swapped between barrels of different ages, a process that is known as solera. Spirits Whiskey Laws in several jurisdictions require that whiskey be aged in wooden barrels. 
The law in the United States requires that "straight whiskey" (with the exception of corn whiskey) must be stored for at least two years in new, charred oak containers. Other forms of whiskey aged in used barrels cannot be called "straight". International laws require any whisky bearing the label "Scotch" to be distilled and matured in Scotland for a minimum of three years and one day in oak casks. By Canadian law, Canadian whiskies must "be aged in small wood for not less than three years", and "small wood" is defined as a wood barrel not exceeding capacity. Since U.S. law requires the use of new barrels for several popular types of whiskey, a practice not typically considered necessary elsewhere, whiskey made in other countries is usually aged in used barrels that previously contained American whiskey (usually bourbon whiskey). The typical bourbon barrel is in size, which is thus the de facto standard whiskey barrel size worldwide. Some distillers transfer their whiskey into different barrels to "finish" or add qualities to the final product. These finishing barrels have frequently been used beforehand to age a different spirit (such as rum) or wine. Other distillers, particularly those producing Scotch, often disassemble five used bourbon barrels and reassemble them into four casks with different barrel ends for aging Scotch, creating a type of cask referred to as a hogshead. Brandy Maturing is very important for a good brandy, which is typically aged in oak casks. The wood used for those barrels is selected because of its ability to transfer certain aromas to the spirit. Cognac is aged only in oak casks made from wood from the Forest of Tronçais and more often from the Limousin forests. Tequila Some types of tequila are aged in oak barrels to mellow their flavor. "Reposado" tequila is aged for a period of two months to one year, "Añejo" tequila is aged for up to three years, and "Extra Añejo" tequila is aged for at least three years. As with other spirits, longer aging results in a more pronounced flavor. Beer Beers are sometimes aged in barrels which were previously used for maturing wines or spirits. This is most common in darker beers such as stout, which is sometimes aged in oak barrels identical to those used for whiskey. Whisky distiller Jameson notably purchases barrels used by Franciscan Well brewery for their Shandon Stout to produce a whisky branded as "Jameson Caskmates". Cask ale is aged in the barrel (usually steel) for a short time before serving. Extensive barrel aging is required of many sour beers. Condiments Balsamic vinegar Traditional balsamic vinegar is aged in a series of wooden barrels. Tabasco sauce The pepper mash used to make Tabasco sauce has been aged for three years in previously used oak whiskey barrels since the sauce's invention in 1868. Soft drinks Vernors ginger ale is marketed as having a "barrel-aged" flavor, and the syrup used to produce the beverage was originally aged in oak barrels when first manufactured in the 19th century. Whether the syrup continues to be aged in oak is unclear. Angels' share "Angels' share" is a term for the portion (share) of a wine or distilled spirit's volume that is lost to evaporation during aging in oak barrels. The ambient humidity tends to affect the composition of this share. Drier conditions tend to make the barrels evaporate more water, strengthening the spirit. However, in higher humidities, more alcohol than water will evaporate, therefore reducing the alcoholic strength of the product.
This alcoholic evaporate encourages the growth of a darkly colored fungus, the angels' share fungus, Baudoinia compniacensis, which tends to appear on the exterior surfaces of most things in the immediate area. Water storage Water barrels are often used to collect the rainwater from dwellings (so that it may be used for irrigation or other purposes). This usage, known as rainwater harvesting, requires (besides a large rainwater barrel or water butt) adequate (waterproof) roof-covering and an adequate rain pipe. Oil storage Wooden casks of various sizes were used to store whale oil on ships in the age of sail. Its viscous nature made sperm whale oil a particularly difficult substance to contain in staved containers. Oil coopers were probably the most skilled coopers in pre-industrial cooperage. Olive oil, seed oils and other organic oils were also placed in wooden casks for storage or transport. Wooden casks were also used to store mineral oil. The standard size barrel of crude oil or other petroleum product (abbreviated bbl) is . This measurement originated in the early Pennsylvania oil fields, and permitted both British and American merchants to refer to the same unit, based on the old English wine measure, the tierce. Earlier, another size of whiskey barrel was the most common size; this was the barrel for proof spirits, which was of the same volume as five US bushels. However, by 1866, the oil barrel was standardized at 42 US gallons. Oil has not been shipped in barrels since the introduction of oil tankers, but the 42 US gallon size is still used as a unit of measurement for pricing and tax and regulatory codes. Each barrel is refined into about of gasoline, the rest becoming other products such as jet fuel and heating oil, using fractional distillation. Barrel shape, construction and parts Barrels have a convex shape and bulge at their center, called bilge. This facilitates rolling a well-built wooden barrel on its side and allows the roller to change directions with little friction, compared to a cylinder. It also helps to distribute stress evenly in the material by making the container more curved. Barrels have reinforced edges to enable safe displacement by rolling them at an angle (in addition to rolling on their sides as described). Casks used for ale or beer have shives and keystones in their openings. Before serving the beer, a spile is hammered into the shive and a tap into the keystone. The wooden parts that make up a barrel are called staves, the top and bottom are both called heads or headers, and the rings that hold the staves together are called hoops. These are usually made of galvanized iron, though historically they were made of flexible bits of wood called withies. While wooden hoops could require barrels to be "fully hooped", with hoops stacked tightly together along the entire top and bottom third of a barrel, iron-hooped barrels only require a few hoops on each end. Wine barrels typically come in two hoop configurations. An American barrel features six hoops, from top to center: head- or chime hoop, quarter hoop and bilge hoop (times two), while a French barrel features eight, including a so-called French hoop, located between the quarter- and bilge hoops (see "wine barrel parts" illustration). The opening at the center of a barrel is called a bung hole and the stopper used to seal it is a bung. The latter is generally made of white silicone. 
Sizes A barrel is one of several units of volume, with dry barrels, fluid barrels (UK beer barrel, US beer barrel), oil barrel, etc. The volume of some barrel units is double that of others, with various volumes in the range of about . English wine casks Pre-1824 definitions continued to be used in the US, the wine gallon of 231 cubic inches being the standard gallon for liquids (the corn gallon of 268.8 cubic inches for solids). In Britain, the wine gallon was replaced by the imperial gallon. The tierce later became the petrol barrel. The tun was originally 256 gallons, which explains where the quarter, equal to 8 bushels or 64 (wine) gallons, comes from. Brewery casks Although it is common to refer to draught beer containers of any size as barrels, in the UK this is strictly correct only if the container holds 36 imperial gallons. The terms "keg" and "cask" refer to containers of any size, the distinction being that kegs are used for beers intended to be served using external gas cylinders. Cask ales undergo part of their fermentation process in their containers, called casks. Casks are available in several sizes, and it is common to refer to "a firkin" or "a kil" (kilderkin) instead of a cask. The modern US beer barrel is , half a gallon less than the traditional wine barrel (26 U.S.C. §5051). Dry goods Barrels are also used as a unit of measurement for dry goods (dry groceries), such as flour or produce. Traditionally, a barrel is of flour (wheat or rye), with other substances such as pork subject to more local variation. In modern times, produce barrels for all dry goods, excepting cranberries, contain 7,056 cubic inches, about 115.627 L. In the northeastern United States, nails, bolts, and plumbing fittings were commonly shipped in small rough barrels. These were small, 18 inches high by about 10–12 inches in diameter. The wood was of pallet-lumber quality. The binding was sometimes by wire or metal hoops or both. This practice seems to have been prevalent until the 1980s. Older hardware stores probably still have some of these barrels.
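The figures quoted in this section can be cross-checked with a short calculation. The sketch below uses only values stated above (the 231 cubic inch wine gallon, the 42 US gallon oil barrel, the 36 imperial gallon UK beer barrel, and the 7,056 cubic inch produce barrel) together with the exact metric definitions of the inch and the imperial gallon.

# Rough cross-check of the barrel volumes discussed above.
CUBIC_INCH_L = 0.016387064    # litres per cubic inch (1 in = 2.54 cm exactly)
IMPERIAL_GALLON_L = 4.54609   # litres per imperial gallon (exact definition)

wine_gallon_l = 231 * CUBIC_INCH_L          # US (wine) gallon, about 3.785 L
oil_barrel_l = 42 * wine_gallon_l           # 42 US gallons, about 158.99 L
uk_beer_barrel_l = 36 * IMPERIAL_GALLON_L   # 36 imperial gallons, about 163.66 L
produce_barrel_l = 7056 * CUBIC_INCH_L      # 7,056 cubic inches, about 115.63 L

print(round(oil_barrel_l, 2), round(uk_beer_barrel_l, 2), round(produce_barrel_l, 3))
# 158.99 163.66 115.627  (the last value matches the figure given above)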
Technology
Containers
null
641156
https://en.wikipedia.org/wiki/Bruise
Bruise
A bruise, also known as a contusion, is a type of hematoma of tissue, the most common cause being capillaries damaged by trauma, causing localized bleeding that extravasates into the surrounding interstitial tissues. Most bruises occur close enough to the epidermis that the bleeding causes a visible discoloration. The bruise then remains visible until the blood is either absorbed by tissues or cleared by immune system action. Bruises which do not blanch under pressure can involve capillaries at the level of skin, subcutaneous tissue, muscle, or bone. Bruises are not to be confused with other similar-looking lesions. Such lesions include petechia (less than , resulting from numerous and diverse etiologies such as adverse reactions from medications such as warfarin, straining, asphyxiation, platelet disorders and diseases such as cytomegalovirus); and purpura (), classified as palpable purpura or non-palpable purpura and indicating various pathologic conditions such as thrombocytopenia. Additionally, although many terminology schemas treat an ecchymosis (plural, ecchymoses) (over ) as synonymous with a bruise, in some other schemas an ecchymosis is differentiated by its remoteness from the source and cause of bleeding, with blood dissecting through tissue planes and settling in an area remote from the site of trauma or even nontraumatic pathology, such as in periorbital ecchymosis ("raccoon eyes"), arising from a basilar skull fracture or from a neuroblastoma. As a type of hematoma, a bruise is always caused by internal bleeding into the interstitial tissues which does not break through the skin, usually initiated by blunt trauma, which causes damage through physical compression and deceleration forces. Trauma sufficient to cause bruising can occur from a wide variety of situations including accidents, falls, and surgeries. Disease states such as insufficient or malfunctioning platelets, other coagulation deficiencies, or vascular disorders, such as venous blockage associated with severe allergies, can lead to the formation of purpura, which is not to be confused with trauma-related bruising/contusion. If the trauma is sufficient to break the skin and allow blood to escape the interstitial tissues, the injury is not a bruise but bleeding, a different variety of hemorrhage. Such injuries may be accompanied by bruising elsewhere. Signs and symptoms Bruises often induce pain immediately after the trauma that results in their formation, but small bruises are not normally dangerous alone. Sometimes bruises can be serious, leading to other more life-threatening forms of hematoma, such as when associated with serious injuries, including fractures and more severe internal bleeding. The likelihood and severity of bruising depends on many factors, including the type and healthiness of the affected tissues. Minor bruises may be easily recognized in people with light skin color by their characteristic blue or purple appearance (idiomatically described as "black and blue") in the days following the injury. Hematomas can be subdivided by size. By definition, ecchymoses are 1 centimetre in size or larger, and are therefore larger than petechiae (less than 3 millimetres in diameter) or purpura (3 to 10 millimetres in diameter). Ecchymoses also have a more diffuse border than other purpura. A broader definition of ecchymosis is the escape of blood into the tissues from ruptured blood vessels. The term also applies to the subcutaneous discoloration resulting from seepage of blood within the injured tissue.
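For illustration only, the size-based distinction described above can be written as a simple rule. The thresholds below are taken from the text (petechiae under 3 millimetres, purpura 3 to 10 millimetres, ecchymoses 1 centimetre or more); real assessment also weighs cause, palpability and distribution, which this sketch ignores.

# Illustrative sketch of the size-based lesion terminology described above.
def classify_lesion(diameter_mm):
    """Classify a skin lesion by diameter in millimetres (size criterion only)."""
    if diameter_mm < 3:
        return "petechia"
    if diameter_mm < 10:
        return "purpura"
    return "ecchymosis"

print(classify_lesion(2), classify_lesion(5), classify_lesion(15))
# petechia purpura ecchymosis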
Bruise colors vary from red or blue to almost black, depending on the severity of broken capillaries or blood vessels within the bruise site. Broken venules or arterioles often result in a deep blue or dark red bruise, respectively. Darker colored bruises may result from more severe bleeding involving both types of vessel. Older bruises may appear yellow, green or brown. Cause There are many causes of subcutaneous hematomas including ecchymoses. Coagulopathies such as hemophilia A may cause ecchymosis formation in children. The medication betamethasone can have the adverse effect of causing ecchymosis. The presence of bruises may be seen in patients with platelet or coagulation disorders, or those who are being treated with an anticoagulant. Unexplained bruising may be a warning sign of child abuse, domestic abuse, or serious medical problems such as leukemia or meningococcal infection. Unexplained bruising can also indicate internal bleeding or certain types of cancer. Long-term glucocorticoid therapy can cause easy bruising. Bruising present around the navel (belly button) with severe abdominal pain suggests acute pancreatitis. Connective tissue disorders such as Ehlers–Danlos syndrome may cause relatively easy or spontaneous bruising depending on the severity. Spontaneous bruising or bruising with minimal trauma, in the absence of other explanations and together with other minor or major criteria suggestive of vascular Ehlers–Danlos syndrome (vEDS), suggests genetic testing for the condition. During an autopsy, bruises accompanying abrasions indicate the abrasions occurred while the individual was alive, as opposed to damage incurred post mortem. Size and shape Bruise shapes may correspond directly to the instrument of injury or be modified by additional factors. Bruises often become more prominent as time passes, resulting in additional size and swelling, and may grow to a large size in the hours after the injury that caused them was inflicted. Condition and type of tissue: In soft tissues, a larger area is bruised than would be in firmer tissue, because blood invades the tissue more easily. Age: elderly skin and other tissues are often thinner and less elastic and thus more prone to bruising. Gender: More bruising occurs in females due to increased subcutaneous fat. Skin tone: Discoloration caused by bruises is more prominent in lighter complexions. Diseases: Coagulation, platelet and blood vessel diseases or deficiencies can increase bruising due to more bleeding. Location: More extensive vascularity causes more bleeding. Areas such as the arms, knees, shins and the facial area are especially common bruise sites. Forces: Greater striking forces cause greater bruising. Genes: Despite having completely normal coagulation factors, natural redheads have been shown to bruise more, although this may simply reflect greater visibility against the lighter complexion commonly associated with red hair. Severity Bruises can be scored on a scale from 0–5 to categorize the severity and danger of the injury. The harm score is determined by the extent and severity of the injuries to the organs and tissues causing the bruising, in turn depending on multiple factors. For example, a contracted muscle will bruise more severely, as will tissues crushed against underlying bone. Capillaries vary in strength, stiffness and toughness, which can also vary by age and medical conditions. Low levels of damaging forces produce small bruises and generally cause the individual to feel minor pain straight away.
Repeated impacts worsen bruises, increasing the harm level. Normally, light bruises heal nearly completely within two weeks, although duration is affected by variation in severity and individual healing processes; generally, more severe or deeper bruises take somewhat longer. Severe bruising (harm score 2–3) may be dangerous or cause serious complications. Further bleeding and excess fluid may accumulate causing a hard, fluctuating lump or swelling hematoma. This has the potential to cause compartment syndrome in which the swelling cuts off blood flow to the tissues. The trauma that induced the bruise may also have caused other severe and potentially fatal harm to internal organs. For example, impacts to the head can cause traumatic brain injury: bleeding, bruising and massive swelling of the brain with the potential to cause concussion, coma and death. Treatment for brain bruising may involve emergency surgery to relieve the pressure on the brain. Damage that causes bruising can also cause bones to be broken, tendons or muscles to be strained, ligaments to be sprained, or other tissue to be damaged. The symptoms and signs of these injuries may initially appear to be those of simple bruising. Abdominal bruising or severe injuries that cause difficulty in moving a limb or the feeling of liquid under the skin may indicate life-threatening injury and require the attention of a physician. Mechanism Increased distress to tissue causes capillaries to break under the skin, allowing blood to escape and build up. As time progresses, blood seeps into the surrounding tissues, causing the bruise to darken and spread. Nerve endings within the affected tissue detect the increased pressure, which, depending on severity and location, may be perceived as pain or pressure or be asymptomatic. The damaged capillary endothelium releases endothelin, a hormone that causes narrowing of the blood vessel to minimize bleeding. As the endothelium is destroyed, the underlying von Willebrand factor is exposed and initiates coagulation, which creates a temporary clot to plug the wound and eventually leads to restoration of normal tissue. During this time, larger bruises may change color due to the breakdown of hemoglobin from within escaped red blood cells in the extracellular space. The striking colors of a bruise are caused by the phagocytosis and sequential degradation of hemoglobin to biliverdin to bilirubin to hemosiderin, with hemoglobin itself producing a red-blue color, biliverdin producing a green color, bilirubin producing a yellow color, and hemosiderin producing a golden-brown color. As these products are cleared from the area, the bruise disappears. Often the underlying tissue damage has been repaired long before this process is complete. Treatment Treatment for light bruises is minimal and may include RICE (rest, ice, compression, and elevation), painkillers (particularly NSAIDs) and, later in recovery, light stretching exercises. Particularly, immediate application of ice while elevating the area may reduce or completely prevent swelling by restricting blood flow to the area and preventing internal bleeding. Rest and preventing re-injury is essential for rapid recovery. Very gently massaging the area and applying heat may encourage blood flow and relieve pain according to the Gate control theory of pain, although causing additional pain may indicate the massage is exacerbating the injury. 
As for most injuries, these techniques should not be applied until at least three days following the initial damage, to ensure all internal bleeding has stopped: although increasing blood flow will allow more healing factors into the area and encourage drainage, if the injury is still bleeding this will allow more blood to seep out of the wound and cause the bruise to become worse. History Folk medicine, including the ancient medicine of the Egyptians, Greeks, Celts, Turks, Slavs, Maya, Aztecs and Chinese, has used bruising as a treatment for some health problems. The methods vary widely and include cupping, scraping, and slapping. Fire cupping uses suction, which causes bruising in patients. Scraping (gua sha) uses a small hand device with a rounded edge to gently scrape the scalp or the skin. Another ancient device that creates mild bruising is the strigil, used by Greeks and Romans in the bath. Archaeologically, the earliest evidence for such scraping tools is Greek; no earlier Chinese or Egyptian examples are known. Etymology and pronunciation The word ecchymosis (; plural ecchymoses, ), comes to English from Neo-Latin, based on Greek , from , from (elided to ) and . Compare enchyma, "tissue infused with organic juice"; an elaboration from chyme, the formative juice of tissues.
Biology and health sciences
Injury
null
641160
https://en.wikipedia.org/wiki/Lymphatic%20vessel
Lymphatic vessel
The lymphatic vessels (or lymph vessels or lymphatics) are thin-walled vessels (tubes), structured like blood vessels, that carry lymph. As part of the lymphatic system, lymph vessels are complementary to the cardiovascular system. Lymph vessels are lined by endothelial cells and have a thin layer of smooth muscle, and adventitia that binds the lymph vessels to the surrounding tissue. Lymph vessels are devoted to the propulsion of the lymph from the lymph capillaries, which are mainly concerned with the absorption of interstitial fluid from the tissues. Lymph capillaries are slightly bigger than their counterpart capillaries of the vascular system. Lymph vessels that carry lymph to a lymph node are called afferent lymph vessels, and those that carry it from a lymph node are called efferent lymph vessels, from where the lymph may travel to another lymph node, may be returned to a vein, or may travel to a larger lymph duct. Lymph ducts drain the lymph into one of the subclavian veins and thus return it to general circulation. The vessels that bring lymph away from the tissues and towards the lymph nodes can be classified as afferent vessels. These afferent vessels then drain into the subcapsular sinus. The efferent vessels bring lymph from the lymphatic organs to the nodes and eventually carry it to the right lymphatic duct or the thoracic duct, the largest lymph vessel in the body. These ducts drain into the right and left subclavian veins, respectively. There are far more afferent vessels bringing in lymph than efferent vessels taking it out, which allows lymphocytes and macrophages to fulfill their immune support functions. The lymphatic vessels contain valves. Structure The general structure of lymphatics is based on that of blood vessels. There is an inner lining of single flattened epithelial cells (simple squamous epithelium) composed of a type of epithelium that is called the endothelium, and the cells are called endothelial cells. This layer functions to mechanically transport fluid, and since the basement membrane on which it rests is discontinuous, it leaks easily. The next layer is that of smooth muscle cells arranged in a circular fashion around the endothelium, which by shortening (contracting) or relaxing alter the diameter (caliber) of the lumen. The outermost layer is the adventitia, which consists of fibrous tissue. The general structure described here is seen only in larger lymphatics; smaller lymphatics have fewer layers. The smallest vessels (lymphatic or lymph capillaries) lack both the muscular layer and the outer adventitia. As they proceed forward and in their course are joined by other capillaries, they grow larger and first take on an adventitia, and then smooth muscle. The lymphatic conducting system broadly consists of two types of channels: the initial lymphatics (the prelymphatics or lymph capillaries), which specialize in the collection of lymph from the interstitial fluid, and the larger lymph vessels, which propel the lymph forward. Unlike the cardiovascular system, the lymphatic system is not closed and has no central pump. Lymph movement occurs despite low pressure due to peristalsis (propulsion of the lymph due to alternate contraction and relaxation of smooth muscle), valves, and compression during contraction of adjacent skeletal muscle and arterial pulsation.
Lymph capillaries The lymphatic circulation begins with blind-ending (closed at one end), highly permeable superficial lymph capillaries, formed by endothelial cells with button-like junctions between them that allow fluid to pass through when the interstitial pressure is sufficiently high. These button-like junctions consist of protein filaments like platelet endothelial cell adhesion molecule-1, or PECAM-1. A valve system in place here prevents the absorbed lymph from leaking back into the interstitial fluid. This valve system involves collagen fibers attached to lymphatic endothelial cells that respond to increased interstitial fluid pressure by separating the endothelial cells and allowing the flow of lymph into the capillary for circulation. There is another system of semilunar valves that prevents back-flow of lymph along the lumen of the vessel. Lymph capillaries have many interconnections (anastomoses) between them and form a very fine network. Rhythmic contraction of the vessel walls may also help draw fluid into the smallest lymphatic vessels, the capillaries. If tissue fluid builds up, the tissue will swell; this is called edema. As the circular path through the body's system continues, the fluid is then transported to progressively larger lymphatic vessels culminating in the right lymphatic duct (for lymph from the right upper body) and the thoracic duct (for the rest of the body); both ducts drain into the circulatory system at the right and left subclavian veins. The system collaborates with white blood cells in lymph nodes to protect the body from cancer cells, fungi, viruses and bacteria. This is known as a secondary circulatory system. Lymph vessels The lymph capillaries drain into larger collecting lymphatics. These are contractile lymphatics which transport lymph using a combination of smooth muscle walls, which contract to assist in transporting lymph, as well as valves to prevent the lymph from flowing backwards. As the collecting lymph vessel accumulates lymph from more and more lymph capillaries along its length, it becomes larger and eventually becomes an afferent lymph vessel as it enters a lymph node. The lymph percolates through the lymph node tissue and exits via an efferent lymph vessel. An efferent lymph vessel may directly drain into one of the (right or thoracic) lymph ducts, or may empty into another lymph node as its afferent lymph vessel. Both lymph ducts return the lymph to the blood stream by emptying into the subclavian veins. Lymph vessels consist of functional units known as lymphangions, which are segments separated by semilunar valves. These segments propel or resist the flow of lymph by the contraction of the encircling smooth muscle, depending upon the ratio of the segment's length to its radius. Function Lymph vessels act as reservoirs for plasma and other substances, including cells that have leaked from the vascular system, and transport lymph fluid back from the tissues to the circulatory system. Without functioning lymph vessels, lymph cannot be effectively drained and lymphedema typically results. Afferent vessels The afferent lymph vessels enter at all parts of the periphery of the lymph node, and after branching and forming a dense plexus in the substance of the capsule, open into the lymph sinuses of the cortical part. They carry unfiltered lymph into the node. In doing this they lose all their coats except their endothelial lining, which is continuous with a layer of similar cells lining the lymph paths.
Afferent lymphatic vessels are only found in lymph nodes. This is in contrast to efferent lymphatic vessels, which are also found in the thymus and spleen. Efferent vessels The efferent lymphatic vessel commences from the lymph sinuses of the medullary portion of the lymph nodes and leaves the lymph node at the hilum, passing either to veins or to larger nodes. It carries filtered lymph out of the node. Efferent lymphatic vessels are also found in association with the thymus and spleen. This is in contrast to afferent lymphatic vessels, which are found only in association with lymph nodes. Clinical significance Lymphedema is the swelling of tissues due to insufficient fluid drainage by the lymphatic vessels. It can be the result of absent, underdeveloped or dysfunctional lymphatic vessels. In hereditary (or primary) lymphedema, the lymphatic vessels are absent, underdeveloped or dysfunctional due to genetic causes. In acquired (or secondary) lymphedema, the lymphatic vessels are damaged by injury or infection. Lymphangiomatosis is a disease involving multiple cysts or lesions formed from lymphatic vessels. Additional images
Biology and health sciences
Circulatory system
Biology
641696
https://en.wikipedia.org/wiki/Generalized%20anxiety%20disorder
Generalized anxiety disorder
Generalized anxiety disorder (GAD) is an anxiety disorder characterized by excessive, uncontrollable and often irrational worry about events or activities. Worry often interferes with daily functioning, and individuals with GAD are often overly concerned about everyday matters such as health, finances, death, family, relationship concerns, or work difficulties. Symptoms may include excessive worry, restlessness, trouble sleeping, exhaustion, irritability, sweating, and trembling. Symptoms must be consistent and ongoing, persisting at least six months, for a formal diagnosis. Individuals with GAD often have other disorders, including other psychiatric disorders (e.g., major depressive disorder), substance use disorder, or obesity, and may have a history of trauma or a family history of GAD. Clinicians use screening tools such as the GAD-7 and GAD-2 questionnaires to determine whether individuals may have GAD and warrant formal evaluation for the disorder. Additionally, screening tools may sometimes enable clinicians to evaluate the severity of GAD symptoms. GAD is believed to have a hereditary or genetic basis (e.g., first-degree relatives of an individual who has GAD are themselves more likely to have GAD), but the exact nature of this relationship is not fully understood. Genetic studies of individuals who have anxiety disorders (including GAD) suggest that the hereditary contribution to developing anxiety disorders is only approximately 30–40%, which suggests that environmental factors may be more important in determining whether an individual develops GAD. There is a strong overlapping relationship between GAD and major depressive disorder (MDD), with 72% of those with a lifelong diagnosis of GAD also being diagnosed with MDD at some point in their lives. The pathophysiology of GAD implicates several regions of the brain that mediate the processing of stimuli associated with fear, anxiety, memory, and emotion (i.e., the amygdala, insula, and the frontal cortex). The amygdala is the part of the brain associated with experiencing emotions. In the amygdala, the basolateral amygdala complex recognizes sensory information and activates GABAergic neurons, which can cause somatic symptoms of anxiety. GABAergic neurons regulate the nervous system by reducing feelings of stress, anxiety, and fear. When there is an inadequate number of GABAergic neurons, those negative feelings become apparent and can trigger somatic stress responses. It has been suggested that individuals with GAD have greater amygdala and medial prefrontal cortex (mPFC) activity in response to stimuli than individuals who do not have GAD. However, the relationship between GAD and activity levels in other parts of the frontal cortex is the subject of ongoing research, with some literature suggesting greater activation in specific regions for individuals who have GAD, but other research suggesting decreased activation levels in individuals who have GAD as compared to individuals who do not have GAD. Treatment includes psychotherapy (e.g., cognitive behavioral therapy [CBT] or metacognitive therapy) and pharmacological intervention. CBT and selective serotonin reuptake inhibitor (SSRI) antidepressants (e.g., escitalopram, sertraline, and fluoxetine) are the first-line psychological and pharmacological treatments; other options include serotonin–norepinephrine reuptake inhibitor (SNRI) antidepressants (e.g., duloxetine and venlafaxine).
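As a minimal sketch of how the GAD-7 screening questionnaire mentioned above might be scored: the article only says the GAD-7 is used for screening and, sometimes, for gauging severity; the seven-item 0-3 response scale and the 5/10/15 severity cutoffs used below are commonly cited conventions, not values taken from the text.

# Illustrative GAD-7 scoring sketch; item scale and cutoffs are assumptions, not from the text.
def score_gad7(item_scores):
    """Sum seven item scores (each 0-3) and map the total to a conventional severity band."""
    if len(item_scores) != 7 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("GAD-7 expects seven items, each scored 0-3")
    total = sum(item_scores)
    if total >= 15:
        severity = "severe"
    elif total >= 10:
        severity = "moderate"
    elif total >= 5:
        severity = "mild"
    else:
        severity = "minimal"
    return total, severity

print(score_gad7([2, 1, 2, 1, 1, 2, 2]))  # (11, 'moderate')

A positive screen would then prompt the formal diagnostic evaluation described in the following section, rather than serving as a diagnosis in itself.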
In more severe cases, and as a last resort, potent anxiolytics such as diazepam, clonazepam, and alprazolam are used, though not as first-line drugs, as benzodiazepines are frequently abused and habit-forming. In Europe, pregabalin is also used. The positive effects (if any) of complementary and alternative medications (CAMs), exercise, therapeutic massage and other interventions have been studied. Estimates regarding prevalence of GAD or lifetime risk (i.e., lifetime morbid risk [LMR]) for GAD vary depending upon which criteria are used for diagnosing GAD (e.g., DSM-5 versus ICD-10), although estimates do not vary widely between diagnostic criteria. In general, ICD-10 is more inclusive than DSM-5, so estimates regarding prevalence and lifetime risk tend to be greater using ICD-10. In regard to prevalence, in a given year about 2% of adults in the United States and Europe have been suggested to have GAD. However, the risk of developing GAD at any point in life has been estimated at 9.0%. Although it is possible to experience a single episode of GAD during one's life, most people who experience GAD experience it repeatedly over the course of their lives as a chronic or ongoing condition. GAD is diagnosed twice as frequently in women as in men. Diagnosis DSM-5 criteria The diagnostic criteria for GAD as defined by the Diagnostic and Statistical Manual of Mental Disorders DSM-5 (2013), published by the American Psychiatric Association, are paraphrased as follows: No major changes to GAD have occurred since publication of the Diagnostic and Statistical Manual of Mental Disorders (2004); minor changes include wording of diagnostic criteria. ICD-10 criteria The 10th revision of the International Statistical Classification of Diseases (ICD-10) provides a different set of diagnostic criteria for GAD than the DSM-5 criteria described above. In particular, ICD-10 allows diagnosis of GAD as follows: See ICD-10 F41.1 Note: For children, different ICD-10 criteria may be applied for diagnosing GAD (see F93.80). History of diagnostic criteria The American Psychiatric Association introduced GAD as a diagnosis in the DSM-III in 1980, when anxiety neurosis was split into GAD and panic disorder. The definition in the DSM-III required uncontrollable and diffuse anxiety or worry that is excessive and unrealistic and persists for 1 month or longer. High rates of comorbidity of GAD and major depression led many commentators to suggest that GAD would be better conceptualized as an aspect of major depression instead of an independent disorder. Many critics stated that the diagnostic features of this disorder were not well established until the DSM-III-R. Since comorbidity of GAD and other disorders decreased with time, the DSM-III-R changed the time requirement for a GAD diagnosis to 6 months or longer. The DSM-IV changed the definition of excessive worry and the number of associated psychophysiological symptoms required for a diagnosis. Another aspect of the diagnosis the DSM-IV clarified was what constitutes a symptom as occurring "often". The DSM-IV also required difficulty controlling the worry to be diagnosed with GAD. The DSM-5 emphasized that excessive worrying had to occur more days than not and on a number of different topics.
It has been stated that the constant changes in the diagnostic features of the disorder have made assessing epidemiological statistics such as prevalence and incidence difficult, as well as increasing the difficulty for researchers in identifying the biological and psychological underpinnings of the disorder. Consequently, making specialized medications for the disorder is more difficult as well. This has led to the continuation of GAD being medicated heavily with SSRIs. Risk factors Genetics, family and environment The relationship between genetics and anxiety disorders is an ongoing area of research. It is broadly understood that there exists a hereditary basis for GAD, but the exact nature of this hereditary basis is not fully understood.  While investigators have identified several genetic loci that are regions of interest for further study, there is no singular gene or set of genes that have been identified as causing GAD.  Nevertheless, genetic factors may play a role in determining whether an individual is at greater risk for developing GAD, structural changes in the brain related to GAD, or whether an individual is more or less likely to respond to a particular treatment modality.  Genetic factors that may play a role in development of GAD are usually discussed in view of environmental factors (e.g., life experience or ongoing stress) that might also play a role in development of GAD. The traditional methods of investigating the possible hereditary basis of GAD include using family studies and twin studies (there are no known adoption studies of individuals who have anxiety disorders, including GAD). Meta-analysis of family and twin studies suggests that there is strong evidence of a hereditary basis for GAD in that GAD is more likely to occur in first-degree relatives of individuals who have GAD than in non-related individuals in the same population. Twin studies also suggest that there may be a genetic linkage between GAD and major depressive disorder (MDD), which may explain the common occurrence of MDD in individuals who have GAD (e.g., comorbidity of MDD in individuals with GAD has been estimated at 60%). When GAD is considered among all anxiety disorders (e.g., panic disorder, social anxiety disorder), genetic studies suggest that hereditary contribution to the development of anxiety disorders amounts to only approximately 30–40%, which suggests that environmental factors are likely more important to determining whether an individual may develop GAD. In regard to environmental influences in the development of GAD, it has been suggested that parenting behaviour may be an important influence since parents potentially model anxiety-related behaviours. It has also been suggested that individuals with GAD have experienced a greater number of minor stress-related events in life and that the number of stress-related events may be important in development of GAD (irrespective of other individual characteristics). Studies of possible genetic contributions to the development of GAD have examined relationships between genes implicated in brain structures involved in identifying potential threats (e.g., in the amygdala) and also implicated in neurotransmitters and neurotransmitter receptors known to be involved in anxiety disorders. 
More specifically, genes studied for their relationship to development of GAD or demonstrated to have had a relationship to treatment response include: PACAP (A54G polymorphism): remission after 6-month treatment with Venlafaxine suggested to have a significant relationship with the A54G polymorphism (Cooper et al. (2013)) HTR2A gene (rs7997012 SNP G allele): HTR2A allele suggested to be implicated in a significant decrease in anxiety symptoms associated with response to 6 months of Venlafaxine treatment (Lohoff et al. (2013)) SLC6A4 promoter region (5-HTTLPR): Serotonin transporter gene suggested to be implicated in significant reduction in anxiety symptoms in response to 6 months of Venlafaxine treatment (Lohoff et al. (2013)) Problematic digital media use Evolutionary Explanations From an evolutionary perspective, generalized anxiety can be viewed as an overextension of the protective mechanisms that help organisms avoid danger. Cost–benefit analyses, sometimes referred to as the “smoke detector principle,” propose that false alarms (unnecessary worry) are less costly than failing to detect real threats. As a result, having a relatively low threshold for perceiving danger may have historically conferred survival benefits. In individuals with GAD, however, this adaptive threshold appears to be set too low or activated too often, generating pervasive worry about routine events and relatively minor stressors. Empirical work supports the idea that GAD involves heightened reactivity in brain regions associated with threat detection, including the amygdala. Researchers have also found links between GAD and elevated inflammation markers, suggesting a possible physiological correlate for the chronic anxiety seen in the disorder. Although anxiety’s defensive functions may have been advantageous in unpredictable environments, modern contexts can render this vigilance maladaptive when it persists as near-constant worry and avoidance. This view places GAD at the extreme end of a continuum, where otherwise beneficial anxiety responses overshoot, leading to significant distress and functional impairment. Pathophysiology The pathophysiology of GAD is an active and ongoing area of research often involving the intersection of genetics and neurological structures. Generalized anxiety disorder has been linked to changes in functional connectivity of the amygdala and its processing of fear and anxiety. Sensory information enters the amygdala through the nuclei of the basolateral complex (consisting of lateral, basal and accessory basal nuclei). The basolateral complex processes the sensory-related fear memories and communicates information regarding threat importance to memory and sensory processing elsewhere in the brain, such as the medial prefrontal cortex and sensory cortices. Neurological structures traditionally appreciated for their roles in anxiety include the amygdala, insula and orbitofrontal cortex (OFC). It is broadly postulated that changes in one or more of these neurological structures are believed to allow greater amygdala response to emotional stimuli in individuals who have GAD as compared to individuals who do not have GAD. Individuals with GAD have been suggested to have greater amygdala and medial prefrontal cortex (mPFC) activation in response to stimuli than individuals who do not have GAD. 
However, the exact relationship between the amygdala and the frontal cortex (e.g., prefrontal cortex or the orbitofrontal cortex [OFC]) is not fully understood, because there are studies that suggest increased or decreased activity in the frontal cortex in individuals who have GAD. Consequently, because of the tenuous understanding of the frontal cortex as it relates to the amygdala in individuals who have GAD, it is an open question whether individuals who have GAD have an amygdala that is more sensitive than the amygdala of an individual without GAD, or whether frontal cortex hyperactivity is responsible for changes in amygdala responsiveness to various stimuli. Recent studies have attempted to identify specific regions of the frontal cortex (e.g., dorsomedial prefrontal cortex [dmPFC]) that may be more or less reactive in individuals who have GAD, or specific networks that may be differentially implicated in individuals who have GAD. Other lines of study investigate whether activation patterns vary in individuals who have GAD at different ages with respect to individuals who do not have GAD at the same age (e.g., amygdala activation in adolescents with GAD). Treatment Traditional treatment modalities broadly fall into two categories, i.e., psychotherapeutic and pharmacological intervention. In addition to these two conventional therapeutic approaches, areas of active investigation include complementary and alternative medications (CAMs), brain stimulation, exercise, therapeutic massage and other interventions that have been proposed for further study. Treatment modalities can be, and often are, used concurrently, so that an individual may pursue psychological therapy (i.e., psychotherapy) and pharmacological therapy. Both cognitive behavioral therapy (CBT) and medications (such as SSRIs) have been shown to be effective in reducing anxiety. A combination of both CBT and medication is generally seen as the most desirable approach to treatment. Use of medication to lower extreme anxiety levels can be important in enabling patients to engage effectively in CBT. Psychotherapy Psychotherapeutic interventions include a plurality of therapy types that vary based upon their specific methodologies for enabling individuals to gain insight into the working of the conscious and subconscious mind and which sometimes focus on the relationship between cognition and behavior. Cognitive behavioral therapy (CBT) is widely regarded as the first-line psychological therapy for treating GAD. Additionally, many of these psychological interventions may be delivered in an individual or group therapy setting. While individual and group settings are broadly both considered effective for treating GAD, individual therapy tends to promote longer-lasting engagement in therapy (i.e., lower attrition over time). Psychodynamic therapy Psychodynamic therapy is a type of therapy premised upon Freudian psychology, in which a psychologist enables an individual to explore various elements in their subconscious mind to resolve conflicts that may exist between the conscious and subconscious elements of the mind. In the context of GAD, the psychodynamic theory of anxiety suggests that the unconscious mind engages in worry as a defense mechanism to avoid feelings of anger or hostility, because such feelings might cause social isolation or other negative attribution toward oneself.
Accordingly, the various psychodynamic therapies attempt to explore the nature of worry as it functions in GAD in order to enable individuals to alter the subconscious practice of using worry as a defense mechanism and thereby diminish GAD symptoms. Variations of psychodynamic therapy include a short-term version, "short-term anxiety-provoking psychotherapy" (STAPP). Behavioral therapy Behavioral therapy is a therapeutic intervention premised upon the concept that anxiety is learned through classical conditioning (e.g., in view of one or more negative experiences) and maintained through operant conditioning (e.g., one finds that by avoiding a feared experience one avoids anxiety). Thus, behavioral therapy enables an individual to re-learn conditioned responses (behaviors) and thereby challenge behaviors that have become conditioned responses to fear and anxiety, and which have previously given rise to further maladaptive behaviors. Cognitive therapy Cognitive therapy (CT) is premised upon the idea that anxiety is the result of maladaptive beliefs and methods of thinking. Thus, CT involves assisting individuals to identify more rational ways of thinking and to replace maladaptive thinking patterns (i.e., cognitive distortions) with healthier thinking patterns (e.g., replacing the cognitive distortion of catastrophizing with a more productive pattern of thinking). Individuals in CT learn how to identify objective evidence, test hypotheses, and ultimately identify maladaptive thinking patterns so that these patterns can be challenged and replaced. Acceptance and commitment therapy Acceptance and commitment therapy (ACT) is a behavioral treatment based on acceptance-based models. ACT is designed to target three therapeutic goals: (1) reducing the use of avoidance strategies intended to avoid feelings, thoughts, memories, and sensations; (2) decreasing a person's literal response to their thoughts (e.g., understanding that thinking "I'm hopeless" does not mean that the person's life is truly hopeless); and (3) increasing the person's ability to keep commitments to changing their behaviors. These goals are attained by shifting the person's efforts away from controlling events and towards changing their own behavior, focusing on valued directions and goals in their lives, and committing to behaviors that help the individual accomplish those personal goals. This psychological therapy teaches mindfulness (paying attention on purpose, in the present, and in a nonjudgmental manner) and acceptance (openness and willingness to sustain contact) skills for responding to uncontrollable events and therefore manifesting behaviors that enact personal values. Intolerance of uncertainty therapy Intolerance of uncertainty (IU) refers to a consistent negative reaction to uncertain and ambiguous events regardless of their likelihood of occurrence. Intolerance of uncertainty therapy (IUT) is used as a stand-alone treatment for GAD patients. IUT focuses on helping patients develop the ability to tolerate, cope with and accept uncertainty in their life in order to reduce anxiety. IUT is based on the psychological components of psychoeducation, awareness of worry, problem-solving training, re-evaluation of the usefulness of worry, imagining virtual exposure, recognition of uncertainty, and behavioral exposure. Studies have shown support for the efficacy of this therapy with GAD patients, with continued improvements in follow-up periods.
Motivational interviewing A promising innovative approach to improving recovery rates for the treatment of GAD is to combine CBT with motivational interviewing (MI). Motivational interviewing is a strategy centered on the patient that aims to increase intrinsic motivation and decrease ambivalence about change due to the treatment. MI contains four key elements: (1) express empathy, (2) heighten dissonance between behaviors that are not desired and values that are not consistent with those behaviors, (3) move with resistance rather than direct confrontation, and (4) encourage self-efficacy. It is based on asking open-ended questions and listening carefully and reflectively to patients' answers, eliciting "change talk", and talking with patients about the pros and cons of change. Some studies have shown the combination of CBT with MI to be more effective than CBT alone. Cognitive behavioral therapy Cognitive behavioral therapy (CBT) is an evidence-based type of psychotherapy that demonstrates efficacy in treating GAD and which integrates the cognitive and behavioral therapeutic approaches. The objective of CBT is to enable individuals to identify irrational thoughts that cause anxiety and to challenge dysfunctional thinking patterns by engaging in awareness techniques such as hypothesis testing and journaling. Because CBT involves the practice of worry and anxiety management, CBT includes a plurality of intervention techniques that enable individuals to explore worry, anxiety and automatic negative thinking patterns. These interventions include anxiety management training, cognitive restructuring, progressive relaxation, situational exposure and self-controlled desensitization. Several modes of delivery are effective in treating GAD, including internet-delivered CBT, or iCBT. Emotion-focused therapy Emotion-focused therapy (EFT) is a short-term psychotherapy that is focused on humanistic needs of emotions when treating individuals with GAD. EFT can incorporate numerous practices such as experimental therapy, systemic therapy, and elements of CBT to allow individuals to work through difficult emotional states. The primary goal of EFT is assisting individuals in living with their vulnerable emotions and overcoming avoidance so that adaptive experiences such as compassion and protective anger can be generated in response to the emotional needs that are embedded in core emotional vulnerability. Sandplay therapy Sandplay therapy (SPT) is an intervention based on nonverbal therapeutic practices. The main objective of SPT is to allow the individual the ability to work through their emotional problems from childhood traumas (CT) through play using sand and toy figures. Although the therapy is mainly focused on nonverbal cues, verbal cues are also observed and documented during the rehabilitation process of the individual. SPT allows a multi-sensory experience through a safe and protected space allowing the individual the opportunity to regulate their mind and emotions. This therapeutic practice is offered in both adults and children. Exposure therapy There is empirical evidence that exposure therapy can be an effective treatment for people with GAD, citing specifically in vivo exposure therapy (exposure through a real-life situation), which has greater effectiveness than imaginal exposure in regards to generalized anxiety disorder. The aim of in vivo exposure treatment is to promote emotional regulation using systematic and controlled therapeutic exposure to traumatic stimuli. 
Exposure is used to promote fear tolerance. Exposure therapy is also a preferred method for children who struggle with anxiety. Other forms of psychological therapy Relaxation techniques (e.g., relaxing imagery, meditational relaxation) Metacognitive therapy (MCT): The objective of MCT is to alter thinking patterns regarding worry so that worry is no longer used as a coping strategy. It has shown promising results in the treatment of GAD as well as other mental health issues. Mindfulness-based stress reduction (MBSR) Mindfulness-based cognitive therapy (MBCT): MBCT is intended to be used as an alternative or adjunct to cognitive behavioral therapy (CBT). Supportive therapy: This is a Rogerian method of therapy in which subjects experience empathy and acceptance from their therapist to facilitate increasing awareness. Variations of active supportive therapy include Gestalt therapy, Transactional analysis and Counseling. Internet-delivered interpretation training: The focus of this training is to reduce worry and anxiety while promoting positive outcomes and positive interpretations. Pharmacotherapy Medications that have been studied were reviewed in a recent network meta-analysis that compared all studied medications with placebo and with each other, and in another that compared rates of remission between different medications. Benzodiazepines (BZs) have been used to treat anxiety since the 1960s. There is a risk of dependence on and tolerance to benzodiazepines. BZs have a number of effects that make them a good option for treating anxiety, including anxiolytic, hypnotic (induce sleep), myorelaxant (relax muscles), anticonvulsant, and amnestic (impair short-term memory) properties. While BZs work well to alleviate anxiety shortly after administration, they are also known for their ability to promote dependence and are frequently used recreationally or non-medically. Antidepressants (e.g., SSRIs / SNRIs) have become a mainstay in treating GAD in adults. First-line medications from any drug category often include those that have been approved by the Food and Drug Administration (FDA) or another similar regulatory body such as the EMA or TGA for treating GAD, because these drugs have been shown to be safe and effective. FDA-approved medications for treating GAD FDA-approved medications for treating GAD include escitalopram, paroxetine, duloxetine, and venlafaxine. Non-FDA approved medications While certain medications are not specifically FDA approved for treatment of GAD, there are a number of medications that historically have been used or studied for treating GAD. 
Other medications that have been used or evaluated for treating GAD include: SSRIs (antidepressants) Citalopram Fluoxetine Sertraline Fluvoxamine Benzodiazepines Clonazepam Lorazepam Diazepam GABA analogs Pregabalin Tiagabine Second-generation antipsychotics (SGAs) Olanzapine (evidence of effectiveness is merely a trend) Ziprasidone Risperidone Aripiprazole (studied as an adjunctive measure in concert with other treatment) Quetiapine (atypical antipsychotic studied as an adjunctive measure in adults and geriatric patients) Antihistamines Hydroxyzine (H1 receptor antagonist) Vilazodone (atypical antidepressant) Agomelatine (antidepressant, MT1/2 receptor agonist, 5-HT2C antagonist) Clonidine (noted to cause decreased blood pressure and other ) Guanfacine (α2A receptor agonist, studied in pediatric patients with GAD) Mirtazapine (atypical antidepressant having 5-HT2A and 5-HT2C receptor affinity) Vortioxetine (multimodal antidepressant) Eszopiclone (non-benzodiazepine hypnotic) Tricyclic antidepressants Amitriptyline Clomipramine Doxepin Imipramine Trimipramine Desipramine Nortriptyline Protriptyline Opipramol (atypical TCA) Trazodone Monamine oxidase inhibitors (MAOIs) Tranylcypromine Phenelzine Homeopathic preparations (discussed below, see complementary and alternative medications [CAMs]) Selective serotonin reuptake inhibitors Pharmaceutical treatments for GAD include selective serotonin reuptake inhibitors (SSRIs). SSRIs increase serotonin levels through inhibition of serotonin reuptake receptors. FDA approved SSRIs used for this purpose include escitalopram and paroxetine. However, guidelines suggest using sertraline first due to its cost-effectiveness compared to other SSRIs used for generalized anxiety disorder and a lower risk of withdrawal compared to SNRIs. If sertraline is found to be ineffective, then it is recommended to try another SSRI or SNRI. Common side effects include nausea, sexual dysfunction, headache, diarrhea, constipation, restlessness, increased risk of suicide in young adults and adolescents, among others. Sexual side effects, weight gain, and higher risk of withdrawal are more common in paroxetine than escitalopram and sertraline. In older populations or those taking concomitant medications that increase risk of bleeding, SSRIs may further increase the risk of bleeding. Overdose of an SSRI or concomitant use with another agent that causes increased levels of serotonin can result in serotonin syndrome, which can be life-threatening. Serotonin norepinephrine reuptake inhibitors First line pharmaceutical treatments for GAD also include serotonin-norepinephrine reuptake inhibitors (SNRIs). These inhibit the reuptake of serotonin and noradrenaline to increase their levels in the CNS. FDA approved SNRIs used for this purpose include duloxetine (Cymbalta) and venlafaxine (Effexor). While SNRIs have similar efficacy as SSRIs, many psychiatrists prefer to use SSRIs first in the treatment of Generalized Anxiety Disorder. The slightly higher preference for SSRIs over SNRIs as a first choice for treatment of anxiety disorders may have been influenced by the observation of poorer tolerability of the SNRIs in comparison to SSRIs in systematic reviews of studies of depressed patients. Side effects common to both SNRIs include anxiety, restlessness, nausea, weight loss, insomnia, dizziness, drowsiness, sweating, dry mouth, sexual dysfunction and weakness. 
In comparison to SSRIs, the SNRIs have a higher prevalence of the side effects of insomnia, dry mouth, nausea and high blood pressure. Both SNRIs have the potential for discontinuation syndrome after abrupt cessation, which can precipitate symptoms including motor disturbances and anxiety and may require tapering. Like other serotonergic agents, SNRIs have the potential to cause serotonin syndrome, a potentially fatal systemic response to serotonergic excess that causes symptoms including agitation, restlessness, confusion, tachycardia, hypertension, mydriasis, ataxia, myoclonus, muscle rigidity, diaphoresis, diarrhea, headache, shivering, goose bumps, high fever, seizures, arrhythmia and unconsciousness. SNRIs like SSRIs carry a black box warning for suicidal ideation, but it is generally considered that the risk of suicide in untreated depression is far higher than the risk of suicide when depression is properly treated. Pregabalin and gabapentin Pregabalin (Lyrica) is effective for treating GAD. It acts on the voltage-dependent calcium channel to decrease the release of neurotransmitters such as glutamate, norepinephrine and substance P. Its therapeutic effect appears after 1 week of use and is similar in effectiveness to lorazepam, alprazolam and venlafaxine but pregabalin has demonstrated superiority by producing more consistent therapeutic effects for psychic and somatic anxiety symptoms. Long-term trials have shown continued effectiveness without the development of tolerance and additionally, unlike benzodiazepines, it does not disrupt sleep architecture and produces less severe cognitive and psychomotor impairment. It also has a low potential for misuse and dependency and may be preferred over the benzodiazepines for these reasons. The anxiolytic effects of pregabalin appear to persist for at least six months continuous use, suggesting tolerance is less of a concern; this gives pregabalin an advantage over certain anxiolytic medications such as benzodiazepines. Gabapentin (Neurontin), a closely related medication to pregabalin with the same mechanism of action, has also demonstrated effectiveness in the treatment of GAD, though unlike pregabalin, it has not been approved specifically for this indication. Nonetheless, it is likely to be of similar usefulness in the management of this condition, and by virtue of being off-patent, it has the advantage of being significantly less expensive in comparison. In accordance, gabapentin is frequently prescribed off-label to treat GAD. Complementary and alternative medicines studied for potential in treating GAD Complementary and alternative medicines (CAMs) are widely used by individuals with GAD despite having no evidence or varied evidence regarding efficacy. Efficacy trials for CAM medications often have various types of bias and low quality reporting in regard to safety. In regard to efficacy, critics point out that CAM trials sometimes predicate claims of efficacy based on a comparison of a CAM against a known drug after which no difference in subjects is found by investigators and which is used to suggest an equivalence between a CAM and a drug. Because this equates a lack of evidence with the positive assertion of efficacy, a "lack of difference" assertion is not a proper claim for efficacy. Moreover, an absence of strict definitions and standards for CAM compounds further burdens the literature regarding CAM efficacy in treating GAD. 
CAMs that have been academically studied for their potential in treating GAD or GAD symptoms are given below, along with a summary of the academic findings. Accordingly, none of the following should be taken as offering medical guidance or an opinion as to the safety or efficacy of any of the following CAMs. Kava Kava (Piper methysticum) extracts: Meta-analysis does not suggest efficacy of Kava extracts, as the few available data yield inconclusive or non-statistically significant results. Nearly a quarter (25.8%) of subjects experienced adverse effects (AEs) from Kava Kava extracts during six trials. Kava Kava may cause liver toxicity. Lavender (Lavandula angustifolia) extracts: Small and varied studies may suggest some level of efficacy as compared to placebo or other medication; claims of efficacy are regarded as needing further evaluation. Silexan is an oil derivative of lavender studied in pediatric patients with GAD. Concern exists as to whether Silexan may cause unopposed estrogen exposure in boys due to disruption of steroid signaling. Galphimia glauca extracts: While Galphimia glauca extracts have been the subject of two randomised controlled trials (RCTs) comparing them to lorazepam, efficacy claims are regarded as "highly uncertain." Chamomile (Matricaria chamomilla) extracts: Poor-quality trials show trends that may suggest efficacy, but further study is needed to establish any claim of efficacy. Crataegus oxycantha and Eschscholtzia californica extracts combined with magnesium: A single 12-week trial of Crataegus oxycantha and Eschscholtzia californica compared to placebo has been used to suggest efficacy. However, efficacy claims require confirmation studies. For the minority of subjects who experienced AEs from the extracts, most AEs implicated gastrointestinal tract (GIT) intolerance. Echium amoenum extract: A single, small trial used this extract as a supplement to fluoxetine (vs using a placebo to supplement fluoxetine); larger studies are needed to substantiate efficacy claims. Gamisoyo-San: Small trials of this herbal mixture compared to placebo have suggested no efficacy of the herbal mixture over placebo, but further study is necessary to allow a definitive conclusion of a lack of efficacy. Passiflora incarnata extract: Claims of efficacy or benzodiazepine equivalence are regarded as "highly uncertain." Valeriana extract: A single 4-week trial suggests no effect of Valeriana extract on GAD but is regarded as "uninformative" on the topic of efficacy in view of its finding that the benzodiazepine diazepam also had no effect. Further study may be warranted. Other possible modalities discussed in literature for potential in treating GAD Other modalities that have been academically studied for their potential in treating GAD or symptoms of GAD are summarised below. Accordingly, none of the following should be taken as offering medical guidance or an opinion as to the safety or efficacy of any of the following modalities. Acupuncture: A single, very small trial revealed a trend toward efficacy, but flaws in the trial design suggest uncertainty regarding efficacy. Balneotherapy: Data from a single non-blinded study suggested possible efficacy of balneotherapy as compared to paroxetine. However, efficacy claims need confirmation. Therapeutic massage: A single, small, possibly biased study revealed inconclusive results. 
Resistance and aerobic exercise: When compared to no treatment, a single, small, potentially unrepresentative trial suggested a trend toward GAD remission and reduction of worry. Chinese bloodletting: When added to paroxetine, a single, small, imprecise trial that lacked a sham procedure for comparison suggested efficacy at 4 weeks. However, larger trials are needed to evaluate this technique as compared to a sham procedure. Floating in water: When compared to no treatment, a single, imprecise, non-blinded trial suggested a trend toward efficacy (findings were not statistically significant). Swedish massage: When compared to a sham procedure, a single trial showed a trend toward efficacy (i.e., findings were not statistically significant). Ayurvedic medications: a single non-blinded trial was inconclusive as to whether Ayurvedic medications were effective in treating GAD. Multifaith spiritually-based intervention: a single, small, non-blinded study was inconclusive regarding efficacy. Lifestyle Lifestyle factors, including stress management, stress reduction, relaxation, sleep hygiene, and reduction of caffeine and alcohol, can influence anxiety levels. Physical activity has been shown to have a positive impact, whereas low physical activity may be a risk factor for anxiety disorders. There is also increasing evidence that exercise can substantially alleviate anxiety. Early hominins, such as Homo erectus, developed Achilles tendons and foot arches that their earlier, tree-climbing ancestor Australopithecus lacked. These features allowed Homo erectus to run and compete against other carnivores also scavenging for meat. Additionally, in a study examining humans and other primates, scientists found that evolution has favored low levels of the alpha-2C adrenergic receptor. The protein encoded by this gene helps inhibit the sympathetic nervous system, simultaneously suppressing anxiety. However, humans and their closest living relatives, chimpanzees, lack this gene. This led to a more active nervous system, needed for fight-or-flight behavior during ancestral scavenging and for fleeing predators. Substances and GAD While there are no substances that are known to cause generalized anxiety disorder (GAD), certain substances, or withdrawal from certain substances, have been implicated in promoting the experience of anxiety. For example, even though benzodiazepines may afford individuals with GAD relief from anxiety, withdrawal from benzodiazepines is associated with the experience of anxiety, among other adverse events like sweating and tremor. Tobacco withdrawal symptoms may provoke anxiety in smokers, and excessive caffeine use has been linked to aggravating and maintaining anxiety. Comorbidity Depression A longitudinal cohort study found 12% of 972 participants had GAD comorbid with major depressive disorder. Accumulating evidence indicates that patients with comorbid depression and anxiety tend to have greater illness severity and a lower treatment response than those with either disorder alone. In addition, social function and quality of life are more greatly impaired. For many, the symptoms of both depression and anxiety are not severe enough (i.e., are subsyndromal) to justify a primary diagnosis of either major depressive disorder (MDD) or an anxiety disorder. However, dysthymia is the most prevalent comorbid diagnosis among individuals with GAD. 
Patients can also be categorized as having mixed anxiety-depressive disorder, though this is an unstable diagnosis that typically either goes away or shifts to a different diagnosis later on. Various explanations for the high comorbidity between GAD and depressive disorders have been suggested, ranging from genetic pleiotropy (i.e., GAD and nonbipolar depression might represent different phenotypic expressions of a common etiology ) to impaired executive control or sleep problems and fatigue as potential bridging mechanisms between the two disorders. Comorbidity and treatment Therapy has been shown to have equal efficacy in patients with GAD and patients with GAD and comorbid disorders. Patients with comorbid disorders have more severe symptoms when starting therapy but demonstrated a greater improvement than patients with simple GAD. Pharmacological approaches, i.e., the use of antidepressants, must be adapted for different comorbidities. For example, serotonin reuptake inhibitors and short-acting benzodiazepines (BZDs) are used for depression and anxiety. However, for patients with anxiety and a substance use disorder, BZDs should be avoided due to their addictive properties. CBT has been found an effective treatment since it improves symptoms of GAD and substance use. Compared to the general population, patients with internalizing disorders such as depression, generalized anxiety disorder (GAD) and post-traumatic stress disorder (PTSD) have higher mortality rates, but die of the same age-related diseases as the population, such as heart disease, cerebrovascular disease and cancer. GAD often coexists with conditions associated with stress, such as muscle tension and irritable bowel syndrome. Patients with GAD can sometimes present with symptoms such as insomnia or headaches as well as pain and interpersonal problems. There is also observed comorbidity between GAD and attention deficit hyperactivity disorder. Anxiety disorders and major depressive disorder occur in a minority of individuals with ADHD, but more often than in the general population. Further research suggests that about 20 to 40 percent of individuals with attention deficit hyperactivity disorder have comorbid anxiety disorders, with GAD being the most prevalent. Those with GAD have a lifetime comorbidity prevalence of 30% to 35% with alcohol use disorder and 25% to 30% for another substance use disorder. People with both GAD and a substance use disorder also have a higher lifetime prevalence for other comorbidities. A study found that GAD was the primary disorder in slightly more than half of the 18 participants that were comorbid with alcohol use disorder. Epidemiology GAD is often estimated to affect approximately 3–6% of adults and 5% of children and adolescents. Although estimates have varied to suggest a GAD prevalence of 3% in children and 10.8% in adolescents. When GAD manifests in children and adolescents, it typically begins around 8 to 9 years of age. Estimates regarding prevalence of GAD or lifetime risk (i.e., lifetime morbid risk [LMR]) for GAD vary depending upon which criteria are used for diagnosing GAD (e.g., DSM-5 vs ICD-10) although estimates do not vary widely between diagnostic criteria. In general, ICD-10 is more inclusive than DSM-5, so estimates regarding prevalence and lifetime risk tend to be greater using ICD-10. In regard to prevalence, in a given year, about two (2%) percent of adults in the United States and Europe have been suggested to have GAD. 
However, the risk of developing GAD at any point in life has been estimated at 9.0%. Although it is possible to experience a single episode of GAD during one's life, most people who experience GAD experience it repeatedly over the course of their lives as a chronic or ongoing condition. GAD is diagnosed twice as frequently in women as in men and is more often diagnosed in those who are separated, divorced, unemployed, widowed or have low levels of education, and among those with low socioeconomic status. African Americans have higher odds of having GAD and the disorder often manifests itself in different patterns. It has been suggested that greater prevalence of GAD in women may be because women are more likely than men to live in poverty, are more frequently the subject of discrimination, and be sexually and physically abused more often than men. In regard to the first incidence of GAD in an individual's life course, a first manifestation of GAD usually occurs between the late teenage years and the early twenties with the median age of onset being approximately 31 and mean age of onset being 32.7. However, GAD can begin or reoccur at any point in life. Indeed, GAD is common in the elderly population. United States United States: Approximately 3.1 percent of people age 18 and over in a given year (9.5 million). UK 5.9 percent of adults were affected by GAD in 2019. Other Australia: 3 percent of adults Canada: 2.5 percent Italy: 2.9 percent Taiwan: 0.4 percent
Biology and health sciences
Mental disorders
Health
641915
https://en.wikipedia.org/wiki/Tidal%20island
Tidal island
A tidal island is a raised area of land within a waterbody, which is connected to the larger mainland by a natural isthmus or man-made causeway that is exposed at low tide and submerged at high tide, causing the land to switch between being a promontory/peninsula and an island depending on tidal conditions. Because of the mystique surrounding tidal islands, many of them have been sites of religious worship, such as Mont-Saint-Michel with its Benedictine abbey. Tidal islands are also commonly the sites of fortresses because of the natural barrier created by the tidal channel. List of tidal islands Asia Hong Kong Ma Shi Chau in Tai Po District, northeastern New Territories, within the Tolo Harbour Kiu Tau Island in Sai Kung Iran Naaz islands in the Persian Gulf, southern seashore of Qeshm island Japan Enoshima, in Sagami Bay, Kanagawa Prefecture Taiwan Kueibishan in Penghu Jiangong Islet in Kinmen South Korea Jindo Island and Modo Island in southwest South Korea Jebudo in the west Europe Denmark Mandø Island – on Denmark's western coast Knudshoved Island – north of Vordingborg on southern Zealand, Denmark Denmark/Germany The Halligen in the North Frisian Islands, Denmark/Germany France Île Aganton in Brittany Île Madame in Charente-Maritime Île de Noirmoutier in Vendée Mont Saint-Michel in Normandy Tombelaine in Normandy Grand Bé, Petit Bé and Fort National in Saint-Malo Germany The Neuwerk in the Wadden Sea Guernsey Lihou in Guernsey, one of the Channel Islands Iceland Grótta in Seltjarnarnes, the Capital Region Ireland Coney Island near Rosses Point, County Sligo Omey Island in Connemara, County Galway Inishkeel, County Donegal Italy Isola Grande, Sicily Jersey Elizabeth Castle in Jersey, a castle off the south coast accessible on foot at low tide Saint Aubin's Fort La Corbière Lighthouse La Motte, Jersey, alias Green Island L'Avarison, which hosts Seymour Tower Archirondel Tower, now connected via permanent causeway Icho Tower Portelet Tower Spain Cortegada Island in Pontevedra coast, Galicia. San Nikolas Island in Lekeitio, Bizkaia United Kingdom England Asparagus Island, Mount's Bay, Cornwall Burgh Island, Devon Burrow Island, Portsmouth Harbour Chapel Island, Cumbria Chiswick Eyot in the River Thames in London Gugh in the Isles of Scilly (joined to St Agnes at low tide) Hilbre Island, Middle Eye and Little Eye in the River Dee estuary, between North Wales and the English Wirral, but administratively in England. 
Horsey Island, Essex Lindisfarne, Northumberland, also known as Holy Island Mersea Island, Essex (accessible to road traffic via the Strood) Northey Island, Essex Osea Island, Essex Piel Island, Cumbria Scolt Head Island, Norfolk Sheep Island, Cumbria (joined at low tide to Piel Island and to Walney Island) St Mary's Island, North Tyneside St Michael's Mount, Cornwall White Island, Isles of Scilly and St Martin's, Isles of Scilly Northern Ireland Nendrum Monastery on Mahee Island, Strangford Lough Guns Island, near Ballyhornan Isle of Muck, Portmuck Scotland Baleshare in the Outer Hebrides, joined to North Uist Bernera Island, joined to Lismore Brough of Birsay in Orkney, joined to Orkney Mainland Castle Stalker on Loch Laich in Argyll Cramond Island in the Firth of Forth Island Davaar near Campbeltown, off the Kintyre peninsula Eilean Arnoil in the Outer Hebrides, joined to the Isle of Lewis Eilean Donan in the western Highlands of Scotland Eilean Fladday and Eilean Tigh off the Isle of Raasay Eilean Shona in Loch Moidart, Lochaber, Highland Eilean Tioram, in Loch Moidart Erraid off the Isle of Mull Hestan Island near Rough Island in Auchencairn Bay Islands of Fleet: Ardwall Isle and Barlocco Isle in Galloway Isle Ristol, the innermost of the Summer Isles Kili Holm in Orkney, joined to Egilsay Oronsay in the Inner Hebrides, joined to Colonsay Oronsay in Loch Bracadale, joined to Skye Orosay in the Outer Hebrides, joined to Barra Rough Island opposite Rockcliffe, Dumfries & Galloway Vallay (Bhàlaigh), joined to North Uist in the Outer Hebrides Wales Burry Holms off the Gower Cribinau off Anglesey Gateholm off the south west coast of Pembrokeshire Ynys Llanddwyn off Anglesey Mumbles Lighthouse located in Mumbles, near Swansea St Catherine's Island in Pembrokeshire Sully Island in the Vale of Glamorgan Worm's Head at the end of the Gower Ynys Cantwr off Ramsey Island, Pembrokeshire Ynys Feurig off Anglesey Ynys Gifftan in Gwynedd Ynys Gwelltog off Ramsey Island, Pembrokeshire Ynys Lochtyn on the coast of Cardigan Bay 43 (unbridged) tidal islands can be walked to from the UK mainland. North America Canada Bird Islet in Neck Point Park, Nanaimo, British Columbia, Canada Finisterre Island off of Bowen Island, British Columbia, Canada Francis Peninsula off of Sunshine Coast (British Columbia), British Columbia, Canada Micou's Island in St. Margarets Bay, Nova Scotia, Canada Minister's Island in New Brunswick, Canada Ross Island and Cheney Island in Grand Manan, New Brunswick, Canada Wedge Island, Nova Scotia, Canada Whyte Islet in West Vancouver, British Columbia, Canada United States Bar Island in Maine Battery Point Light in California Bumpkin Island in Massachusetts Camano Island in Puget Sound of Washington state, since earth filled Charles Island, in Connecticut Douglas Island in Alaska High Island, New York Long Point Island, Harpswell, Maine Tskawahyah Island of Cape Alava, Washington Oceania Australia The Point Walter Sandbar in Perth, Western Australia has slowly formed into a tidal island and is only connected to the mainland in extreme low tides. Penguin Island (Western Australia) in the Shoalwater Islands Marine Park Former tidal island Bennelong Island in Sydney, Australia was developed into Bennelong Point and is now the location of the Sydney Opera House. 
New Zealand Matakana Island in Tauranga Harbour Opahekeheke Island in the Kaipara Harbour Puddingstone Island in Otago Harbour Rabbit Island, Bells Island, and Bests Island in Tasman Bay The Hauraki Gulf islands of Motutapu Island and Rangitoto Island are connected at low tide The Okatakata Islands in Rangaunu Harbour
Physical sciences
Oceanic and coastal landforms
Earth science
641982
https://en.wikipedia.org/wiki/Mushroom%20poisoning
Mushroom poisoning
Mushroom poisoning is poisoning resulting from the ingestion of mushrooms that contain toxic substances. Symptoms can vary from slight gastrointestinal discomfort to death in about 10 days. Mushroom toxins are secondary metabolites produced by the fungus. Mushroom poisoning is usually the result of ingestion of wild mushrooms after misidentification of a toxic mushroom as an edible species. The most common reason for this misidentification is a close resemblance in terms of color and general morphology of the toxic mushrooms species with edible species. To prevent mushroom poisoning, mushroom gatherers familiarize themselves with the mushrooms they intend to collect, as well as with any similar-looking toxic species. The safety of eating wild mushrooms may depend on methods of preparation for cooking. Some toxins, such as amatoxins, are thermostable and mushrooms containing such toxins will not be rendered safe to eat by cooking. Signs and symptoms Poisonous mushrooms contain a variety of different toxins that can differ markedly in toxicity. Symptoms of mushroom poisoning may vary from gastric upset to organ failure resulting in death. Serious symptoms do not always occur immediately after eating, often not until the toxin attacks the kidney or liver, sometimes days or weeks later. The most common consequence of mushroom poisoning is simply gastrointestinal upset. Most "poisonous" mushrooms contain gastrointestinal irritants that cause vomiting and diarrhea (sometimes requiring hospitalization), but usually no long-term damage. However, there are a number of recognized mushroom toxins with specific, and sometimes deadly, effects: The period between ingestion and the onset of symptoms varies dramatically between toxins, some taking days to show symptoms identifiable as mushroom poisoning. α-Amanitin: For 6–12 hours, there are no symptoms. This is followed by a period of gastrointestinal upset (vomiting and profuse, watery diarrhea). This stage is caused primarily by the phallotoxins and typically lasts 24 hours. At the end of this second stage is when severe liver damage begins. The damage may continue for another 2–3 days. Kidney damage can also occur. Some patients will require a liver transplant. Amatoxins are found in some mushrooms in the genus Amanita, but are also found in some species of Galerina and Lepiota. Overall, mortality is between 10 and 15 percent. Recently, Silybum marianum or blessed milk thistle has been shown to protect the liver from amanita toxins and promote regrowth of damaged cells. Orellanine: This toxin generally causes no symptoms for 3–20 days after ingestion. Typically around day 11, the process of kidney failure begins, and is usually symptomatic by day 20. These symptoms can include pain in the area of the kidneys, thirst, vomiting, headache, and fatigue. A few species in the very large genus Cortinarius contain this toxin. People having eaten mushrooms containing orellanine may experience early symptoms as well, because the mushrooms often contain other toxins in addition to orellanine. A related toxin that causes similar symptoms but within 3–6 days has been isolated from Amanita smithiana and some other related toxic Amanitas. Muscarine: Muscarine stimulates the muscarinic receptors of the nerves and muscles. Symptoms include sweating, salivation, tears, blurred vision, palpitations, and, in high doses, respiratory failure. Muscarine is found in mushrooms of the genus Omphalotus, notably the jack o' Lantern mushrooms. It is also found in A. 
muscaria, although it is now known that the main effect of this mushroom is caused by ibotenic acid. Muscarine can also be found in some Inocybe species and Clitocybe species, in particular Clitocybe dealbata, and some red-pored Boletes. Gyromitrin: Stomach acids convert gyromitrin to monomethylhydrazine (MMH). It affects multiple body systems. It blocks the important neurotransmitter GABA, leading to stupor, delirium, muscle cramps, loss of coordination, tremors, and/or seizures. It causes severe gastrointestinal irritation, leading to vomiting and diarrhea. In some cases, liver failure has been reported. It can also cause red blood cells to break down, leading to jaundice, kidney failure, and signs of anemia. It is found in mushrooms of the genus Gyromitra. A gyromitrin-like compound has also been identified in mushrooms of the genus Verpa. Coprine: Coprine is metabolized to a chemical that resembles disulfiram. It inhibits aldehyde dehydrogenase (ALDH), which, in general, causes no harm, unless the person has alcohol in their bloodstream while ALDH is inhibited. This can happen if alcohol is ingested shortly before or up to a few days after eating the mushrooms. In that case, the alcohol cannot be completely metabolized, and the person will experience flushed skin, vomiting, headache, dizziness, weakness, apprehension, confusion, palpitations, and sometimes trouble to breathe. Coprine is found mainly in mushrooms of the genus Coprinus, although similar effects have been noted after ingestion of Clitocybe clavipes. Ibotenic acid: Decarboxylates into muscimol upon ingestion. The effects of muscimol vary, but nausea and vomiting are common. Confusion, euphoria, or sleepiness are possible. Loss of muscular coordination, sweating, and chills are likely. Some people experience visual distortions, a feeling of strength, or delusions. Symptoms normally appear after 30 minutes to 2 hours and last for several hours. A. muscaria, the "Alice in Wonderland" mushroom, is known for the hallucinatory experiences caused by muscimol, but A. pantherina and A. gemmata also contain the same compound. While normally self-limiting, fatalities have been associated with A. pantherina, and consumption of a large number of any of these mushrooms is likely to be dangerous. Arabitol: A sugar alcohol, similar to mannitol, which causes no harm in most people but causes gastrointestinal irritation in some. It is found in small amounts in oyster mushrooms, and considerable amounts in Suillus species and Hygrophoropsis aurantiaca (the "false chanterelle"). Causes New species of fungi are continuing to be discovered, with an estimated number of 800 new species registered annually. This, added to the fact that many investigations have recently reclassified some species of mushrooms from edible to poisonous has made older classifications insufficient at describing what now is known about the different species of fungi that are harmful to humans. It is now thought that of the approximately 100,000 known fungi species found worldwide, about 100 of them are poisonous to humans. However, by far the majority of mushroom poisonings are not fatal, and the majority of fatal poisonings are attributable to the Amanita phalloides mushroom. A majority of these cases are due to mistaken identity. This is a common occurrence with A. phalloides in particular, due to its resemblance to the Asian paddy-straw mushroom, Volvariella volvacea. Both are light-colored and covered with a universal veil when young. 
Amanitas can be mistaken for other species, as well, in particular when immature. On at least one occasion they have been mistaken for Coprinus comatus. In this case, the victim had some limited experience in identifying mushrooms, but did not take the time to correctly identify these particular mushrooms until after he began to experience symptoms of mushroom poisoning. The author of Mushrooms Demystified, David Arora cautions puffball-hunters to beware of Amanita "eggs", which are Amanitas still entirely encased in their universal veil. Amanitas at this stage are difficult to distinguish from puffballs. Foragers are encouraged to always cut the fruiting bodies of suspected puffballs in half, as this will reveal the outline of a developing Amanita should it be present within the structure. A majority of mushroom poisonings, in general, are the result of small children, especially toddlers in the "grazing" stage, ingesting mushrooms found on the lawn. While this can happen with any mushroom, Chlorophyllum molybdites is often implicated due to its preference for growing in lawns. C. molybdites causes severe gastrointestinal upset but is not considered deadly poisonous. A few poisonings are the result of misidentification while attempting to collect hallucinogenic mushrooms for recreational use. In 1981, one fatality and two hospitalizations occurred following consumption of Galerina marginata, mistaken for a Psilocybe species. Galerina and Psilocybe species are both small, brown, and sticky, and can be found growing together. However, Galerina contains amatoxins, the same poison found in the deadly Amanita species. Another case reports kidney failure following ingestion of Cortinarius orellanus, a mushroom containing orellanine. It is natural that accidental ingestion of hallucinogenic species also occurs, but is rarely harmful when ingested in small quantities. Cases of serious toxicity have been reported in small children. Amanita pantherina, while containing the same hallucinogens as Amanita muscaria (e.g., ibotenic acid and muscimol), has been more commonly associated with severe gastrointestinal upset than its better-known counterpart. Although usually not fatal, Omphalotus spp., "Jack-o-lantern mushrooms", are another cause of sometimes significant toxicity. They are sometimes mistaken for chanterelles. Both are bright-orange and fruit at the same time of year, although Omphalotus grows on wood and has true gills rather than the veins of a Cantharellus. They contain toxins known as illudins, which causes gastrointestinal symptoms. Bioluminescent species are generally inedible and often mildly toxic. Clitocybe dealbata, which is occasionally mistaken for an oyster mushroom or other edible species contains muscarine. Toxicities can also occur with collection of morels. Even true morels, if eaten raw, will cause gastrointestinal upset. Typically, morels are thoroughly cooked before eating. Verpa bohemica, although referred to as "thimble morels" or "early morels" by some, have caused toxic effects in some individuals. Gyromitra spp., "false morels", are deadly poisonous if eaten raw. They contain a toxin called gyromitrin, which can cause neurotoxicity, gastrointestinal toxicity, and destruction of the blood cells. The Finns consume Gyromitra esculenta after parboiling, but this may not render the mushroom entirely safe, resulting in its being called the "fugu of the Finnish cuisine". 
A more unusual toxin is coprine, a disulfiram-like compound that is harmless unless ingested within a few days of ingesting alcohol. It inhibits aldehyde dehydrogenase, an enzyme required for breaking down alcohol. Thus, the symptoms of toxicity are similar to being hung over—flushing, headache, nausea, palpitations, and, in severe cases, trouble breathing. Coprinus species, including Coprinopsis atramentaria, contain coprine. Coprinus comatus does not, but it is best to avoid mixing alcohol with other members of this genus. Recently, poisonings have also been associated with Amanita smithiana. These poisonings may be due to orellanine, but the onset of symptoms occurs in 4 to 11 hours, which is much quicker than the 3 to 20 days normally associated with orellanine. Paxillus involutus is also inedible when raw, but is eaten in Europe after pickling or parboiling. However, after the death of the German mycologist Dr. Julius Schäffer, it was discovered that the mushroom contains a toxin that can stimulate the immune system to attack its red blood cells. This reaction is rare but can occur even after safely eating the mushroom for many years. Similarly, Tricholoma equestre was widely considered edible and good, until it was connected with rare cases of rhabdomyolysis. In the fall of 2004, thirteen deaths were associated with consumption of Pleurocybella porrigens or "angel's wings". In general, these mushrooms are considered edible. All the victims died of an acute brain disorder, and all had pre-existing kidney disease. The exact cause of the toxicity was not known at this time and the deaths cannot be definitively attributed to mushroom consumption. However, mushroom poisoning is not always due to mistaken identity. For example, the highly toxic ergot Claviceps purpurea, which grows on rye, is sometimes ground up with rye, unnoticed, and later consumed. This can cause devastating, even fatal, effects, called ergotism. Cases of idiosyncratic or unusual reactions to fungi can also occur. Some are probably due to allergy, others to some other kind of sensitivity. It is not uncommon for a person to experience gastrointestinal upset associated with one particular mushroom species or genus. Some mushrooms might concentrate toxins from their growth substrate, such as Chicken of the Woods growing on yew trees. Poisonous mushrooms Of the most lethal mushrooms, five—the death cap (A. phalloides), the three destroying angels (A. virosa, A. bisporigera, and A. ocreata), and the fool's mushroom (A. verna)—belong to the genus Amanita, and two more—the deadly webcap (C. rubellus), and the fool's webcap (C. orellanus)—are from the genus Cortinarius. Several species of Galerina, Lepiota, and Conocybe also contain lethal amounts of amatoxins. Deadly species are listed in the List of deadly fungi. The following species may cause great discomfort, sometimes requiring hospitalization, but are not considered deadly. Amanita muscaria (fly agaric) – Contains the psychoactive muscimol and the neurotoxin ibotenic acid. Ibotenic acid decarboxylates into muscimol upon curing of the mushroom, rendering it relatively non-toxic, though death via respiratory depression is possible. Muscimol intoxication is often considered unpleasant and undesirable, however, and as such has seen little recreational use compared to the unrelated psilocybin mushroom, though it has been used as an entheogen by the native people of Siberia. Amanita pantherina (panther mushroom) – contains similar toxins as A. 
muscaria, but is associated with more fatalities than A. muscaria. Chlorophyllum molybdites (greengills) – causes intense gastrointestinal upset. Entoloma (pinkgills) – some species are highly poisonous, such as livid entoloma (Entoloma sinuatum), Entoloma rhodopolium, and Entoloma nidorosum. Symptoms of intense gastrointestinal upset appear after 20 minutes to 4 hours, caused by an unidentified gastrointestinal irritant. Many Inocybe species such as Inocybe fastigiata and Inocybe geophylla contain muscarine. Inosperma erubescens has caused death. Some white Clitocybe species, including C. rivulosa and C. dealbata, contain muscarine. Tricholoma pardinum, Tricholoma tigrinum (tiger tricholoma) – gastrointestinal upset due to an unidentified toxin, begins in 15 minutes to 2 hours and lasts 4 to 6 days. Tricholoma equestre (man-on-horseback) – until recently thought edible and good, can lead to rhabdomyolysis after repeated consumption. Hypholoma fasciculare/Naematoloma fasciculare (sulfur tuft) – usually causes gastrointestinal upset, but the toxins fasciculol E and F could lead to paralysis and death. Paxillus involutus (brown roll-rim) – once thought edible, but now found to destroy red blood cells with regular or long-term consumption. Rubroboletus satanas (Devil's bolete), Suillellus luridus, Rubroboletus legaliae, Chalciporus piperatus, Neoboletus luridiformis, Rubroboletus pulcherrimus – gastrointestinal irritation. Of these, only R. pulcherrimus has been implicated in a death. Many books list N. luridiformis as edible, but Arora lists it as "to be avoided". Hebeloma crustuliniforme (known as poison pie or fairy cakes) – causes gastrointestinal symptoms such as nausea and vomiting. Russula emetica (the sickener) – as its name implies, causes rapid vomiting. Other Russulas with a peppery taste (Russula silvicola, Russula mairei) will likely do the same. Agaricus hondensis, Agaricus californicus, Agaricus praeclaresquamosus, Agaricus xanthodermus – cause vomiting and diarrhea in most people, although some people seem to be immune. Lactifluus piperatus, Lactarius torminosus, Lactarius rufus – these and other peppery-tasting milk-caps are pickled and eaten in Scandinavia, but are indigestible or poisonous unless correctly prepared. Lactarius vinaceorufescens, Lactarius uvidus – reported to be poisonous. Arora reports that all yellow- or purple-staining Lactarius are "best avoided". Ramaria gelatinosa – causes indigestion in many people, although some seem immune. Gomphus floccosus (the scaly chanterelle) – causes gastric upset in many people, although some eat it without problems. G. floccosus is sometimes confused with the chanterelle. Evolution Many different species of mushrooms are poisonous and contain differing toxins that cause different types of harm. The most common toxins causing severe poisoning are the amatoxins, found in the mushroom species responsible for the most fatalities every year. Amanita phalloides, the death cap, is noted for its substantial amatoxin content of roughly 10 mg per mushroom, which is approximately a lethal dose. Amatoxin blocks the replication of DNA, which leads to cell death. This particularly affects cells that replicate frequently, such as those of the kidneys and liver, and eventually the central nervous system. It can also cause the loss of muscle contraction and liver failure. Despite the severe and dangerous symptoms, amatoxin poisoning is treatable given quick, professional care. 
Mushrooms have also been found to have evolved toxicity independently from each other. Researchers have found that different mushroom species share the same type of amatoxin, called amanitin. They specifically looked at three of the deadliest genera: Amanita, Galerina, and Lepiota. Genome sequencing, a scientific process that determines the DNA sequence of an organism's genome, showed that these mushrooms obtained the relevant genetic information via horizontal gene transfer. Once assimilated, this genetic information can then be passed down to offspring. The researchers also concluded that there is "an unknown ancestral fungal donor" that allowed for the horizontal gene transfer. Mushroom toxins have appeared and disappeared many times throughout their evolutionary history. Many scientists believe that the toxins evolved in mushrooms are used to deter predation, either from fungivores or mammals. If mushrooms are consumed, this can negatively affect their ability to disperse spores, survive, and reproduce. Snails and insects are fungivores, and many have learned or evolved to avoid eating poisonous mushrooms. However, it is believed that mammals pose a higher threat to mushrooms than fungivores, as larger body sizes mean they are more capable of eating an entire fungus in one sitting. Some phenotypes, or observable characteristics, may co-occur with toxicity, and therefore act as a warning signal. The first potential warning sign is aposematism, which is an adaptation that warns off predators based on a physical trait of an organism. In this case, the researchers were interested in observing whether the color of a mushroom deters predators. This would suggest that toxic mushrooms are of different colors than non-poisonous ones. The visual cue of some colors should be enough for predators to know not to consume the mushroom. The second possible warning sign is olfactory aposematism, a similar concept, but instead of focusing on color, the odor of the mushroom would be what deters predation. This would again indicate that poisonous mushrooms would emit a different odor than non-poisonous ones. Alternatively, avoidance may arise from the ability of organisms to learn from other organisms. This would suggest that avoidance of toxic mushrooms is a learned behavior. Organisms may avoid toxic mushrooms if they have observed other organisms of the same species consume the fungus. Learned behavior is when an organism learns how to behave based on previous experiences. Some researchers believe that if an organism got sick or observed another organism get sick from consuming a poisonous mushroom, then it would know not to continue consuming it for fear of getting sick again. An analysis of 245 North American mushroom species and 265 from Europe revealed 21.2% of the North American species and 12.1% of the European ones to be poisonous. After collecting this information, and using a neural network to classify all of the mushrooms based on color and odor, the researchers concluded that there was no correlation between cap color and mushrooms containing toxins. The cap is the top, rounded part of a mushroom and comes in different colors. This suggests that cap color does not act as a warning sign to deter predators, providing no evidence that poisonous mushrooms signal their toxicity through visual traits such as color. The three deadly mushroom genera listed above, Amanita, Galerina, and Lepiota, are all of different colors, consisting of reds, yellows, browns, and whites. 
A possible theory as to why color is not a factor in determining whether a mushroom is poisonous is the fact that many of its predators are nocturnal and have poor vision. Therefore, viewing the different colors is difficult, and could result in inaccurate consumption. The study, however, did suggest that poisonous mushrooms do emit a smell that is unpleasant and therefore discourages consumption. Despite this result, there is no definitive evidence to suggest if the odor is a result of the production of the toxin or if it is intended as a warning signal. Additionally, many of the odors are not picked up by humans. This could suggest that there is another characteristic difference between poisonous and non-poisonous mushrooms to avoid predation from larger mammals or that there is another purpose for some mushrooms being poisonous that is not dependent on predators. Prognosis and treatment Some mushrooms contain less toxic compounds and, therefore, are not severely poisonous. Poisonings by these mushrooms may respond well to treatment. However, certain types of mushrooms contain very potent toxins and are very poisonous; so even if symptoms are treated promptly, mortality is high. With some toxins, death can occur in a week or a few days. Although a liver or kidney transplant may save some patients with complete organ failure, in many cases there are no organs available. Patients hospitalized and given aggressive support therapy almost immediately after ingestion of amanitin-containing mushrooms have a mortality rate of only 10%, whereas those admitted 60 or more hours after ingestion have a 50–90% mortality rate. In the United States, mushroom poisoning kills an average of about 3 people a year. According to National Poison Data System (NPDS) annual reports published by America's Poison Centers, the average number of deaths occurring over a ten-year period (2012–2020) sits right at 3 a year. In 2012, 4 out of the 7 total deaths that occurred that year, were attributed to a single event where a "housekeeper at a Board and Care Home for elderly dementia patients collected and cooked wild (Amanita) mushrooms into a sauce that she consumed with six residents of the home.". Over 1,300 emergency room visits in the United States were attributed to poisonous mushroom ingestion in 2016, with about 9% of patients experiencing a serious adverse outcome. Society and culture Folklore Many old wives' tales concern the defining features of poisonous mushrooms. However, there are no general identifiers for poisonous mushrooms, so such beliefs are unreliable. Guidelines to identify particular mushrooms exist, and will serve only if one knows which mushrooms are toxic. Examples of erroneous folklore "rules" include: "Poisonous mushrooms are brightly colored." – Indeed, fly agaric, usually bright-red to orange or yellow, is narcotic and hallucinogenic, although no human deaths have been reported. The deadly destroying angel, in contrast, is an unremarkable white. The deadly Galerinas are brown. Some choice edible species (chanterelles, Amanita caesarea, Laetiporus sulphureus, etc.) are brightly colored, whereas most poisonous species are brown or white. "Insects/animals will avoid toxic mushrooms." – Fungi that are harmless to invertebrates can still be toxic to humans; the death cap, for instance, is often infested by insect larvae. "Poisonous mushrooms blacken silver." – None of the known mushroom toxins react with silver. "Poisonous mushrooms taste bad." 
– People who have eaten the deadly Amanitas and survived have reported that the mushrooms tasted quite good. "All mushrooms are safe if cooked/parboiled/dried/pickled/etc." – While it is true that some otherwise-inedible species can be rendered safe by special preparation, many toxic species cannot be made toxin-free. Many fungal toxins are not particularly sensitive to heat and so are not broken down during cooking; in particular, α-Amanitin, the poison produced by the death cap (Amanita phalloides) and others of the genus, is not denatured by heat. "Poisonous mushrooms will turn rice red when boiled." – A number of Laotian refugees were hospitalized after eating mushrooms (probably toxic Russula species) deemed safe by this folklore rule and this misconception cost at least one person her life. "Poisonous mushrooms have a pointed cap. Edible ones have a flat, rounded cap." – The shape of the mushroom cap does not correlate with presence or absence of mushroom toxins, so this is not a reliable method to distinguish between edible and poisonous species. Death cap, for instance, has a rounded cap when mature. "Boletes are, in general, safe to eat." – It is true that, unlike a number of Amanita species in particular, in most parts of the world, there are no known deadly varieties of the genus Boletus, which reduces the risks associated with misidentification. However, mushrooms like the Devil's bolete are poisonous both raw and cooked and can lead to strong gastrointestinal symptoms, and other species like the lurid bolete require thorough cooking to break down toxins. As with another mushroom genera, proper caution is, therefore, advised in determining the correct species. Notable cases Siddhartha Gautama (known as The Buddha), by some accounts, may have died of mushroom poisoning around ~479 BCE, though this claim has not been universally accepted. Roman Emperor Claudius is said to have been murdered by being fed the death cap mushroom. However, this story first appeared some two centuries after the events, and it is debatable whether Claudius was murdered at all. The best-selling author Nicholas Evans (The Horse Whisperer) was poisoned (but survived) after eating Cortinarius rubellus. The parents of the physicist Daniel Gabriel Fahrenheit, who created the Fahrenheit temperature scale, died in Danzig on 14 August 1701 from accidentally eating poisonous mushrooms. The composer Johann Schobert died in Paris, along with his wife, all but one of his children, their maidservant, and four acquaintances after insisting that certain poisonous mushrooms they had gathered were edible despite the express warning of cooks at two separate restaurants to which he had taken the mushrooms. July 2023 Leongatha mushroom poisoning − Four people in Leongatha, Australia were taken to hospital after consuming beef Wellington suspected to have contained death cap mushrooms. Three of the four guests subsequently died and one survived, later receiving a liver transplant. The woman who cooked the meal, Erin Patterson, was charged with murder in November 2023. Patterson has pleaded not guilty and the Supreme court is expected to hear her case on 28 April, 2025. In August 2023, Professor Vitaly Melnikov, 77, who had headed the Moscow Department of Rocket and Space Systems at RSC Energia (Russia's leading spacecraft manufacturer), became suddenly seriously ill and subsequently died after eating inedible mushrooms.
Biology and health sciences
Miscellaneous
null
641995
https://en.wikipedia.org/wiki/Asymptotic%20analysis
Asymptotic analysis
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior. As an illustration, suppose that we are interested in the properties of a function as becomes very large. If , then as becomes very large, the term becomes insignificant compared to . The function is said to be "asymptotically equivalent to , as ". This is often written symbolically as , which is read as " is asymptotic to ". An example of an important asymptotic result is the prime number theorem. Let denote the prime-counting function (which is not directly related to the constant pi), i.e. is the number of prime numbers that are less than or equal to . Then the theorem states that Asymptotic analysis is commonly used in computer science as part of the analysis of algorithms and is often expressed there in terms of big O notation. Definition Formally, given functions and , we define a binary relation if and only if The symbol is the tilde. The relation is an equivalence relation on the set of functions of ; the functions and are said to be asymptotically equivalent. The domain of and can be any set for which the limit is defined: e.g. real numbers, complex numbers, positive integers. The same notation is also used for other ways of passing to a limit: e.g. , , . The way of passing to the limit is often not stated explicitly, if it is clear from the context. Although the above definition is common in the literature, it is problematic if is zero infinitely often as goes to the limiting value. For that reason, some authors use an alternative definition. The alternative definition, in little-o notation, is that if and only if This definition is equivalent to the prior definition if is not zero in some neighbourhood of the limiting value. Properties If and , then, under some mild conditions, the following hold: , for every real if Such properties allow asymptotically equivalent functions to be freely exchanged in many algebraic expressions. Examples of asymptotic formulas Factorial —this is Stirling's approximation Partition function For a positive integer n, the partition function, p(n), gives the number of ways of writing the integer n as a sum of positive integers, where the order of addends is not considered. Airy function The Airy function, Ai(x), is a solution of the differential equation ; it has many applications in physics. Hankel functions Asymptotic expansion An asymptotic expansion of a function is in practice an expression of that function in terms of a series, the partial sums of which do not necessarily converge, but such that taking any initial partial sum provides an asymptotic formula for . The idea is that successive terms provide an increasingly accurate description of the order of growth of . In symbols, it means we have but also and for each fixed k. In view of the definition of the symbol, the last equation means in the little o notation, i.e., is much smaller than The relation takes its full meaning if for all k, which means the form an asymptotic scale. In that case, some authors may abusively write to denote the statement One should however be careful that this is not a standard use of the symbol, and that it does not correspond to the definition given in . In the present situation, this relation actually follows from combining steps k and k−1; by subtracting from one gets i.e. 
In case the asymptotic expansion does not converge, for any particular value of the argument there will be a particular partial sum which provides the best approximation and adding additional terms will decrease the accuracy. This optimal partial sum will usually have more terms as the argument approaches the limit value. Examples of asymptotic expansions Gamma function Exponential integral Error function where is the double factorial. Worked example Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. For example, we might start with the ordinary series The expression on the left is valid on the entire complex plane , while the right hand side converges only for . Multiplying by and integrating both sides yields The integral on the left hand side can be expressed in terms of the exponential integral. The integral on the right hand side, after the substitution , may be recognized as the gamma function. Evaluating both, one obtains the asymptotic expansion Here, the right hand side is clearly not convergent for any non-zero value of t. However, by keeping t small, and truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value of . Substituting and noting that results in the asymptotic expansion given earlier in this article. Asymptotic distribution In mathematical statistics, an asymptotic distribution is a hypothetical distribution that is in a sense the "limiting" distribution of a sequence of distributions. A distribution is an ordered set of random variables for , for some positive integer . An asymptotic distribution allows to range without bound, that is, is infinite. A special case of an asymptotic distribution is when the late entries go to zero—that is, the go to 0 as goes to infinity. Some instances of "asymptotic distribution" refer only to this special case. This is based on the notion of an asymptotic function which cleanly approaches a constant value (the asymptote) as the independent variable goes to infinity; "clean" in this sense meaning that for any desired closeness epsilon there is some value of the independent variable after which the function never differs from the constant by more than epsilon. An asymptote is a straight line that a curve approaches but never meets or crosses. Informally, one may speak of the curve meeting the asymptote "at infinity" although this is not a precise definition. In the equation y becomes arbitrarily small in magnitude as x increases. Applications Asymptotic analysis is used in several mathematical sciences. In statistics, asymptotic theory provides limiting approximations of the probability distribution of sample statistics, such as the likelihood ratio statistic and the expected value of the deviance. Asymptotic theory does not provide a method of evaluating the finite-sample distributions of sample statistics, however. Non-asymptotic bounds are provided by methods of approximation theory. Examples of applications are the following. In applied mathematics, asymptotic analysis is used to build numerical methods to approximate equation solutions. In mathematical statistics and probability theory, asymptotics are used in analysis of long-run or large-sample behaviour of random variables and estimators. In computer science in the analysis of algorithms, considering the performance of algorithms. The behavior of physical systems, an example being statistical mechanics. 
In accident analysis when identifying the causation of crash through count modeling with large number of crash counts in a given time and space. Asymptotic analysis is a key tool for exploring the ordinary and partial differential equations which arise in the mathematical modelling of real-world phenomena. An illustrative example is the derivation of the boundary layer equations from the full Navier-Stokes equations governing fluid flow. In many cases, the asymptotic expansion is in power of a small parameter, : in the boundary layer case, this is the nondimensional ratio of the boundary layer thickness to a typical length scale of the problem. Indeed, applications of asymptotic analysis in mathematical modelling often center around a nondimensional parameter which has been shown, or assumed, to be small through a consideration of the scales of the problem at hand. Asymptotic expansions typically arise in the approximation of certain integrals (Laplace's method, saddle-point method, method of steepest descent) or in the approximation of probability distributions (Edgeworth series). The Feynman graphs in quantum field theory are another example of asymptotic expansions which often do not converge. Asymptotic versus Numerical Analysis De Bruijn illustrates the use of asymptotics in the following dialog between Dr. N.A., a Numerical Analyst, and Dr. A.A., an Asymptotic Analyst: N.A.: I want to evaluate my function for large values of , with a relative error of at most 1%. A.A.: . N.A.: I am sorry, I don't understand. A.A.: N.A.: But my value of is only 100. A.A.: Why did you not say so? My evaluations give N.A.: This is no news to me. I know already that . A.A.: I can gain a little on some of my estimates. Now I find that N.A.: I asked for 1%, not for 20%. A.A.: It is almost the best thing I possibly can get. Why don't you take larger values of ? N.A.: !!! I think it's better to ask my electronic computing machine. Machine: f(100) = 0.01137 42259 34008 67153 A.A.: Haven't I told you so? My estimate of 20% was not far off from the 14% of the real error. N.A.: !!! . . . ! Some days later, Miss N.A. wants to know the value of f(1000), but her machine would take a month of computation to give the answer. She returns to her Asymptotic Colleague, and gets a fully satisfactory reply.
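The practical content of asymptotic equivalence can be checked numerically. The following minimal Python sketch (not part of the original article) uses Stirling's approximation, one of the asymptotic formulas listed above: the relative error tends to zero as n grows, which is exactly what the relation asserts, even though the absolute error grows without bound.

```python
import math

def stirling(n: int) -> float:
    """Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20, 50):
    exact = math.factorial(n)
    rel_err = abs(exact - stirling(n)) / exact
    print(f"n = {n:2d}   relative error = {rel_err:.4%}")
# The relative error tends to 0 (the ratio of the two sides tends to 1),
# which is what the asymptotic relation asserts; the absolute error,
# by contrast, grows without bound as n increases.
```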
Mathematics
Mathematical analysis
null
642002
https://en.wikipedia.org/wiki/Common%20hill%20myna
Common hill myna
The common hill myna (Gracula religiosa), sometimes spelled "mynah" and formerly simply known as the hill myna or myna bird, is the myna most commonly sighted in aviculture, where it is often simply referred to by the latter two names. It is a member of the starling family (Sturnidae), resident in hill regions of South Asia and Southeast Asia. The Sri Lanka hill myna, a former subspecies of G. religiosa, is now generally accepted as a separate species G. ptilogenys. The Enggano hill myna (G. enganensis) and Nias hill myna (G. robusta) are also widely accepted as specifically distinct, and many authors favor treating the southern hill myna (G. indica) from the Nilgiris and elsewhere in the Western Ghats of India as a separate species. The common hill myna is a popular talking bird. Its specific name religiosa may allude to the practice of teaching mynas to repeat prayers. Taxonomy The common hill myna was formally described in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae under the current binomial name Gracula religiosa. The type location is the Indonesian island of Java. The genus name is from Latin graculus, an unknown bird sometimes identified as the western jackdaw. The specific epithet religiosa is from Latin religiosus meaning "sacred". Seven subspecies are recognised: G. r. peninsularis Whistler & Kinnear, 1933 – central east India G. r. intermedia Hay, 1845 – north India to south China, Indochina and Thailand G. r. andamanensis (Beavan, 1867) – Coco, Andaman and Nicobar Islands G. r. religiosa Linnaeus, 1758 – Malay Peninsula, Sumatra, Java, Borneo and nearby islands G. r. miotera Oberholser, 1917 – Simeulue (west of north Sumatra) (sometimes synonymised with the nominate G. r. religiosa) G. r. batuensis Finsch, 1899 – Batu Islands and Mentawai Islands (off west Sumatra) G. r. palawanensis (Sharpe, 1890) – Palawan (southwest Philippines) The southern hill myna (Gracula indica), the Nias hill myna (Gracula robusta), the Enggano hill myna (Gracula enganensis) and the Tenggara hill myna (Gracula venerata) have all been classified as subspecies. Description This is a stocky jet-black myna, with bright orange-yellow patches of naked skin and fleshy wattles on the side of its head and nape. At about 29 cm length, it is somewhat larger than the common myna (Acridotheres tristis). It is overall green-glossed black plumage, purple-tinged on the head and neck. Its large, white wing patches are obvious in flight, but mostly covered when the bird is sitting. The bill and strong legs are bright yellow, and there are yellow wattles on the nape and under the eye. These differ conspicuously in shape from the naked eye-patch of the common myna and bank myna (A. ginginianus), and more subtly vary between the different hill mynas from South Asia: in the common hill myna, they extend from the eye to the nape, where they join, while the Sri Lanka hill myna has a single wattle across the nape and extending a bit towards the eyes. In the southern hill myna, the wattles are separate and curve towards the top of the head. The Nias and Enggano hill mynas differ in details of the facial wattles, and size, particularly that of the bill. Sexes are similar; juveniles have a duller bill. The subspecies differ in size, in the pattern of wattles on the head and in the glossiness of the plumage. A 2020 study found that the subspecies G. 
religiosa miotera likely represents a distinct species and was likely driven to extinction in the wild in the late 2010s due unsustainable collecting for the wildlife trade. The paper recommends rescuing the last genetically pure captive individuals for the purpose of captive breeding. The International Ornithological Congress tentatively recognises it as a subspecies. Vocalisations The common hill myna is often detected by its loud, shrill, descending whistles followed by other calls. It is most vocal at dawn and dusk, when it is found in small groups in forest clearings high in the canopy. Both sexes can produce an extraordinarily wide range of loud calls – whistles, wails, screeches, and gurgles, sometimes melodious and often very human-like in quality. Each individual has a repertoire of three to 13 such call types, which may be shared with some near neighbours of the same sex, being learned when young. Dialects change rapidly with distance, such that birds living more than 15 km apart have no call-types in common with one another. Unlike some other birds, such as the greater racket-tailed drongo (Dicrurus paradiseus), the common hill myna does not imitate other birds in the wild, although it is a widely held misconception that they do. On the other hand, in captivity, they are among the most renowned mimics, the only bird, perhaps, on par with the grey parrot (Psittacus erithacus). They can learn to reproduce many everyday sounds, particularly the human voice, and even whistled tunes, with astonishing accuracy and clarity. Distribution and ecology This myna is a resident breeder from Kumaon division in India (80° E longitude) east through Nepal, Sikkim, Bhutan and Arunachal Pradesh, the lower Himalayas, terai and foothills up to 2,000 m ASL. Its range continues east through Southeast Asia northeastwards to southern China, and via Thailand southeastwards across northern Indonesia to Palawan in the Philippines. It is virtually extinct in Bangladesh due to habitat destruction and overexploitation for the pet trade. A feral population on Christmas Island has likewise disappeared. Introduced populations exist in Saint Helena, Puerto Rico and perhaps in the mainland United States and possibly elsewhere; feral birds require at least a warm subtropical climate to persist. This myna is almost entirely arboreal, moving in large, noisy groups of half a dozen or so, in tree-tops at the edge of the forest. It hops sideways along the branch, unlike the characteristic jaunty walk of other mynas. Like most starlings, the hill myna is fairly omnivorous, eating fruit, nectar and insects. They build a nest in a hole in a tree. The usual clutch is two or three eggs. There is no sexual dimorphism in these birds, which results in a limited possibility of choosing the sex to work with for mating. Pet trade and conservation The hill mynas are popular cage birds, renowned for their ability to imitate speech. The widely distributed common hill myna is the one most frequently seen in aviculture. Demand outstrips captive breeding capacity, so they are rarely found in pet stores and usually purchased directly from breeders or importers who can certify the birds are traded legally. This species is widely distributed and locally common, and if adult stocks are safeguarded, it is able to multiply quickly. On a worldwide scale, the IUCN thus considers the common hill myna a Species of Least Concern. But in the 1990s, nearly 20,000 wild-caught birds, mostly adults and juveniles, were brought into trade each year. 
In the central part of its range, G. r. intermedia populations have declined markedly, especially in Thailand, which supplied much of the thriving Western market. Its neighbor countries, from where exports were often limited due to political or military reasons, nevertheless supplied a burgeoning domestic demand, and demand in the entire region continues to be very high. In 1992, Thailand had the common hill myna put on CITES Appendix III, to safeguard its stocks against collapsing. In 1997, at the request of the Netherlands and the Philippines, the species was uplisted to CITES Appendix II. The Andaman and Nicobar Islands subspecies G. r. andamanensis and (if valid) G. r. halibrecta, described as "exceedingly common" in 1874, qualified as Near Threatened in 1991. The former is not at all common anymore in the Nicobar Islands and the latter—if distinct—has a very limited range. Elsewhere, such as on the Philippines and in Laos, the decline has been more localized. It is also becoming increasingly rare in the regions of northeastern India due to capture of fledged birds for the illegal pet trade. In the Garo Hills region, however, the locals make artificial nests of a split-bamboo framework covered with grass, and put them up in accessible positions in tall trees in a forest clearing or at the edge of a small village to entice the mynas to breed there. The villagers are thus able to extract the young at the proper time for easy hand-rearing, making common hill myna farming a profitable, small-scale cottage industry. It helps to preserve the environment, because the breeding birds are not removed from the population, while habitat destruction is curtailed because the mynas will desert areas of extensive logging and prefer more natural forest to plantations. As the mynas can be somewhat of a pest of fruit trees when too numerous, an additional benefit to the locals is the inexpensive means of controlling the myna population: failing stocks can be bolstered by putting out more nests than can be harvested, while the maximum proportion of nestlings are taken when the population becomes too large.
Biology and health sciences
Passerida
Animals
642101
https://en.wikipedia.org/wiki/Cardioid
Cardioid
In geometry, a cardioid () is a plane curve traced by a point on the perimeter of a circle that is rolling around a fixed circle of the same radius. It can also be defined as an epicycloid having a single cusp. It is also a type of sinusoidal spiral, and an inverse curve of the parabola with the focus as the center of inversion. A cardioid can also be defined as the set of points of reflections of a fixed point on a circle through all tangents to the circle. The name was coined by Giovanni Salvemini in 1741 but the cardioid had been the subject of study decades beforehand. Although named for its heart-like form, it is shaped more like the outline of the cross-section of a round apple without the stalk. A cardioid microphone exhibits an acoustic pickup pattern that, when graphed in two dimensions, resembles a cardioid (any 2d plane containing the 3d straight line of the microphone body). In three dimensions, the cardioid is shaped like an apple centred around the microphone which is the "stalk" of the apple. Equations Let be the common radius of the two generating circles with midpoints , the rolling angle and the origin the starting point (see picture). One gets the parametric representation: and herefrom the representation in polar coordinates: Introducing the substitutions and one gets after removing the square root the implicit representation in Cartesian coordinates: Proof for the parametric representation A proof can be established using complex numbers and their common description as the complex plane. The rolling movement of the black circle on the blue one can be split into two rotations. In the complex plane a rotation around point (the origin) by an angle can be performed by multiplying a point (complex number) by . Hence the rotation around point is, the rotation around point is: . A point of the cardioid is generated by rotating the origin around point and subsequently rotating around by the same angle : From here one gets the parametric representation above: (The trigonometric identities and were used.) Metric properties For the cardioid as defined above the following formulas hold: area , arc length and radius of curvature The proofs of these statements use in both cases the polar representation of the cardioid. For suitable formulas see polar coordinate system (arc length) and polar coordinate system (area) Properties Chords through the cusp C1 Chords through the cusp of the cardioid have the same length . C2 The midpoints of the chords through the cusp lie on the perimeter of the fixed generator circle (see picture). Proof of C1 The points are on a chord through the cusp (=origin). Hence Proof for C2 For the proof the representation in the complex plane (see above) is used. For the points and the midpoint of the chord is which lies on the perimeter of the circle with midpoint and radius (see picture). Cardioid as inverse curve of a parabola A cardioid is the inverse curve of a parabola with its focus at the center of inversion (see graph) For the example shown in the graph the generator circles have radius . Hence the cardioid has the polar representation and its inverse curve which is a parabola (s. parabola in polar coordinates) with the equation in Cartesian coordinates. Remark: Not every inverse curve of a parabola is a cardioid. For example, if a parabola is inverted across a circle whose center lies at the vertex of the parabola, then the result is a cissoid of Diocles. 
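Because the displayed formulas did not survive in the text above, the following minimal Python sketch assumes the standard polar form of a cardioid with its cusp at the origin, r(φ) = 2a(1 − cos φ), where a is the common radius of the two generating circles. Under that assumption it numerically checks the metric properties (area 6πa², arc length 16a) and the chord property C1 (every chord through the cusp has length 4a).

```python
import numpy as np

a = 1.0  # common radius of the two generating circles (assumed value)

def r(phi):
    """Polar form of a cardioid with its cusp at the origin: r = 2a(1 - cos(phi))."""
    return 2 * a * (1 - np.cos(phi))

phi = np.linspace(0.0, 2 * np.pi, 200_001)
mid = 0.5 * (phi[:-1] + phi[1:])
dphi = np.diff(phi)

# Area enclosed by the curve: (1/2) * integral of r^2 dphi  ->  6*pi*a^2
area = 0.5 * np.sum(r(mid) ** 2 * dphi)

# Arc length: integral of sqrt(r^2 + (dr/dphi)^2) dphi  ->  16a
dr = 2 * a * np.sin(mid)
length = np.sum(np.sqrt(r(mid) ** 2 + dr ** 2) * dphi)

print(f"area       {area:.6f}  (expected {6 * np.pi * a**2:.6f})")
print(f"arc length {length:.6f}  (expected {16 * a:.6f})")

# Property C1: the points at angles phi and phi + pi lie on a chord through
# the cusp, and the chord length r(phi) + r(phi + pi) is always 4a.
sample = np.linspace(0.0, np.pi, 7)
print(np.allclose(r(sample) + r(sample + np.pi), 4 * a))   # True
```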
Cardioid as envelope of a pencil of circles In the previous section if one inverts additionally the tangents of the parabola one gets a pencil of circles through the center of inversion (origin). A detailed consideration shows: The midpoints of the circles lie on the perimeter of the fixed generator circle. (The generator circle is the inverse curve of the parabola's directrix.) This property gives rise to the following simple method to draw a cardioid: Choose a circle and a point on its perimeter, draw circles containing with centers on , and draw the envelope of these circles. Cardioid as envelope of a pencil of lines A similar and simple method to draw a cardioid uses a pencil of lines. It is due to L. Cremona: Draw a circle, divide its perimeter into equal spaced parts with points (s. picture) and number them consecutively. Draw the chords: . (That is, the second point is moved by double velocity.) The envelope of these chords is a cardioid. Proof The following consideration uses trigonometric formulae for , , , , and . In order to keep the calculations simple, the proof is given for the cardioid with polar representation (§ Cardioids in different positions). Equation of the tangent of the cardioid with polar representation From the parametric representation one gets the normal vector . The equation of the tangent is: With help of trigonometric formulae and subsequent division by , the equation of the tangent can be rewritten as: Equation of the chord of the circle with midpoint and radius For the equation of the secant line passing the two points one gets: With help of trigonometric formulae and the subsequent division by the equation of the secant line can be rewritten by: Conclusion Despite the two angles have different meanings (s. picture) one gets for the same line. Hence any secant line of the circle, defined above, is a tangent of the cardioid, too: The cardioid is the envelope of the chords of a circle. Remark: The proof can be performed with help of the envelope conditions (see previous section) of an implicit pencil of curves: is the pencil of secant lines of a circle (s. above) and For fixed parameter t both the equations represent lines. Their intersection point is which is a point of the cardioid with polar equation Cardioid as caustic of a circle The considerations made in the previous section give a proof that the caustic of a circle with light source on the perimeter of the circle is a cardioid. If in the plane there is a light source at a point on the perimeter of a circle which is reflecting any ray, then the reflected rays within the circle are tangents of a cardioid. Remark: For such considerations usually multiple reflections at the circle are neglected. Cardioid as pedal curve of a circle The Cremona generation of a cardioid should not be confused with the following generation: Let be a circle and a point on the perimeter of this circle. The following is true: The foots of perpendiculars from point on the tangents of circle are points of a cardioid. Hence a cardioid is a special pedal curve of a circle. Proof In a Cartesian coordinate system circle may have midpoint and radius . The tangent at circle point has the equation The foot of the perpendicular from point on the tangent is point with the still unknown distance to the origin . Inserting the point into the equation of the tangent yields which is the polar equation of a cardioid. Remark: If point is not on the perimeter of the circle , one gets a limaçon of Pascal. 
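The pedal-curve statement can also be verified directly. In the sketch below (a minimal Python illustration, not from the article), the circle is assumed to have midpoint (2a, 0) and radius 2a, so that it passes through the origin; the feet of the perpendiculars dropped from the origin onto its tangents are then checked against the polar equation r = 2a(1 + cos φ). The specific constants are assumptions, since the original values did not survive in the text above.

```python
import numpy as np

a = 1.0
center = np.array([2 * a, 0.0])    # circle k with midpoint (2a, 0) ...
R = 2 * a                          # ... and radius 2a, so k passes through the origin

# Sample tangency points P on k; an odd sample count avoids t = pi exactly,
# where the foot of the perpendicular degenerates to the cusp at the origin.
t = np.linspace(0.0, 2 * np.pi, 11, endpoint=False)
normals = np.stack([np.cos(t), np.sin(t)], axis=1)   # unit normals of k at the points P
P = center + R * normals

# The tangent at P is the line through P orthogonal to the normal n,
# so the foot of the perpendicular from the origin onto it is F = (P . n) n.
feet = np.einsum("ij,ij->i", P, normals)[:, None] * normals

# Every foot point satisfies the cardioid's polar equation r = 2a(1 + cos(phi)).
r = np.linalg.norm(feet, axis=1)
phi = np.arctan2(feet[:, 1], feet[:, 0])
print(np.allclose(r, 2 * a * (1 + np.cos(phi))))     # True
```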
The evolute of a cardioid The evolute of a curve is the locus of centers of curvature. In detail: For a curve with radius of curvature the evolute has the representation with the suitably oriented unit normal. For a cardioid one gets: The evolute of a cardioid is another cardioid, one third as large, and facing the opposite direction (s. picture). Proof For the cardioid with parametric representation the unit normal is and the radius of curvature Hence the parametric equations of the evolute are These equations describe a cardioid a third as large, rotated 180 degrees and shifted along the x-axis by . (Trigonometric formulae were used: ) Orthogonal trajectories An orthogonal trajectory of a pencil of curves is a curve which intersects any curve of the pencil orthogonally. For cardioids the following is true: (The second pencil can be considered as reflections at the y-axis of the first one. See diagram.) Proof For a curve given in polar coordinates by a function the following connection to Cartesian coordinates hold: and for the derivatives Dividing the second equation by the first yields the Cartesian slope of the tangent line to the curve at the point : For the cardioids with the equations and respectively one gets: and (The slope of any curve depends on only, and not on the parameters or !) Hence That means: Any curve of the first pencil intersects any curve of the second pencil orthogonally. In different positions Choosing other positions of the cardioid within the coordinate system results in different equations. The picture shows the 4 most common positions of a cardioid and their polar equations. In complex analysis In complex analysis, the image of any circle through the origin under the map is a cardioid. One application of this result is that the boundary of the central period-1 component of the Mandelbrot set is a cardioid given by the equation The Mandelbrot set contains an infinite number of slightly distorted copies of itself and the central bulb of any of these smaller copies is an approximate cardioid. Caustics Certain caustics can take the shape of cardioids. The catacaustic of a circle with respect to a point on the circumference is a cardioid. Also, the catacaustic of a cone with respect to rays parallel to a generating line is a surface whose cross section is a cardioid. This can be seen, as in the photograph to the right, in a conical cup partially filled with liquid when a light is shining from a distance and at an angle equal to the angle of the cone. The shape of the curve at the bottom of a cylindrical cup is half of a nephroid, which looks quite similar.
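As a concrete illustration of the statement about the Mandelbrot set, the sketch below (Python, not part of the article) uses the standard parametrisation of the period-1 component's boundary, c = μ/2 − μ²/4 with μ = e^(iθ); the equation itself did not survive in the text above, so it is restated here. For |μ| < 1 the corresponding parameters lie inside the component, and the orbit of 0 under z ↦ z² + c converges to the attracting fixed point μ/2.

```python
import cmath

def main_cardioid(theta: float, s: float = 1.0) -> complex:
    """c = mu/2 - mu^2/4 with mu = s * exp(i*theta).
    s = 1 traces the cardioid bounding the Mandelbrot set's period-1 component;
    s < 1 gives parameters in its interior."""
    mu = s * cmath.exp(1j * theta)
    return mu / 2 - mu * mu / 4

# Two well-known boundary points: the cusp at c = 1/4 and the leftmost point
# c = -3/4, where the period-2 disk is attached.
print(main_cardioid(0.0), main_cardioid(cmath.pi))

# For interior parameters (|mu| < 1) the critical orbit 0, c, c^2 + c, ...
# converges to the attracting fixed point mu/2, so these c lie in the Mandelbrot set.
for k in range(6):
    theta = 2 * cmath.pi * k / 6
    mu = 0.9 * cmath.exp(1j * theta)
    c = mu / 2 - mu * mu / 4
    z = 0j
    for _ in range(2000):
        z = z * z + c
    print(f"theta = {theta:4.2f}   |z - mu/2| = {abs(z - mu / 2):.1e}")
```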
Mathematics
Two-dimensional space
null
4313282
https://en.wikipedia.org/wiki/Soil%20morphology
Soil morphology
Soil morphology is the branch of soil science dedicated to the technical description of soil, particularly physical properties including texture, color, structure, and consistence. Morphological evaluations of soil are typically performed in the field on a soil profile containing multiple horizons. Along with soil formation and soil classification, soil morphology is considered part of pedology, one of the central disciplines of soil science. Background Since the origin of agriculture, humans have understood that soils contain different properties which affect their ability to grow crops. However, soil science did not become its own scientific discipline until the 19th century, and even then early soil scientists were broadly grouped as either "agro-chemists" or "agro-geologists" due to the enduring strong ties of soil to agriculture. These agro-geologists examined soils in natural settings and were the first to scientifically study soil morphology. A team of Russian early soil scientists led by V.V. Dokuchaev observed soil profiles with similar horizons in areas with similar climate and vegetation, despite being hundreds of kilometers apart. Dokuchaev's work, along with later contributions from K.D. Glinka, C.F. Marbut, and Hans Jenny, established soils as independent, natural bodies with unique properties caused by their equally unique combinations of climate, biological activity, relief, parent material, and time. Soil properties had previously been inferred from geological or environmental conditions alone, but with this new understanding, soil morphological properties were now used to evaluate the integrated influence of these factors. Soil morphology became the basis for understanding observations, experiments, behavior, and practical uses of different soils. To standardize morphological descriptions, official guidelines and handbooks for describing soil were first published in the 1930s by Charles Kellogg and the United States Department of Agriculture-Soil Conservation Service for the United States and by G.R. Clarke for the United Kingdom. Many other countries and national soil survey organizations have since developed their own guidelines. Properties and procedure Observations of soil morphology are typically performed in the field on soil profiles exposed by excavating a pit or extracting a core with a push tube (handheld or hydraulic) or auger. A soil profile is one face of a pedon, or an imaginary three-dimensional unit of soil that would display the full range of properties characteristic of a particular soil. Pedons generally occupy between 1 and 10 m2 of surface land area and are the fundamental unit of field-based soil study. Many soil scientists in the United States document soil morphological descriptions using the standard Pedon Description field sheet published by the USDA-NRCS. In addition to location, landscape, vegetation, topographic, and other site information, soil morphology descriptions generally include the following properties: Horizonation Soil profiles contain multiple layers, known as horizons, that are generally parallel to the soil surface. These horizons are distinguishable from adjacent layers by their changes in morphological properties as the soil naturally forms. 
The same soil horizons may be named and labeled differently in various soil classification systems around the world, though most systems contain the following: Numerical prefix: indicates a lithologic discontinuity or change in parent material Capital letter: represents the master horizon, such as O, A, E, B, C, R, and others. Multiple capital letters may be used to describe transition horizons, which are layers with properties of multiple master horizons (such as AB or A/B horizons). Lowercase letter: horizon suffix or subordinate distinction, which add details of soil formation. Multiple suffixes may be used in combination, and some master horizons (including O, B, and L) must be described with a suffix. Numerical suffix: indicates subdivisions within a larger horizon. If there are layers distinct enough to be separate horizons, but similar enough to receive the same master and suffix letters, sequential numbers are added to the end of the designation to distinguish the horizons (such as A, Bt1, Bt2, Bt3, C). In addition to the horizon name, the distinctness and topography of each horizon's lower boundary are described. Boundary distinctness is determined by how accurately the border between horizons can be identified and may be very abrupt, abrupt, clear, gradual, or diffuse. Boundary topography refers to the horizontal variation of the border, which is often not parallel to the soil surface and may even be discontinuous. Topography categories include smooth, wavy, irregular, and broken. Color Soil color is quantitatively described using the Munsell color system, which was developed in the early 20th century by Albert Munsell. Munsell was a painter and the system covers the entire range of colors, though the specially adapted Munsell soil color books commonly used in field description only include the most relevant colors for soil. The Munsell color system includes the following three components: Hue: indicates the dominant spectral (i.e., rainbow) color, which in soil is generally yellow and/or red. Each page of the Munsell soil color book displays a different hue. Examples include 10YR, 5YR, and 2.5Y. Value: indicates lightness or darkness. Value increases from the bottom of each page to the top, with lower numbers representing darker color. Color with a value of 0 would be black. Chroma: indicates intensity or brightness. Chroma increases from left to right on each page, with higher numbers representing more vivid or saturated color. Color with a chroma of 0 would be neutral gray. Colors in soil can be quite diverse and result from organic matter content, mineralogy, and the presence and oxidation states of iron and manganese oxides. Organic-rich soils tend to be dark brown or even black due to organic matter accumulating on the mineral particles. Well-drained and highly weathered soils may be bright red or brown from oxidized iron, while reduced iron can impart gray or blue colors and indicate poor drainage. When soil is saturated for prolonged periods, oxygen availability is limited and iron may become a biological electron acceptor. Reduced iron is more soluble than oxidized iron and is easily leached from particle coatings, which exposes bare, light-colored silicate minerals and results in iron depletions. When iron reduction and/or depletion makes gray the dominant matrix color, the soil is said to be gleyed. Soil color is also moisture dependent, specifically the color value. 
It is important to note the moisture status as "moist" when adding water does not change the soil color, or as "dry" when the soil is air dry. The standard moisture status for describing soil in the field varies regionally; humid areas generally use the moist state while arid ones use the dry state. In detailed descriptions, both the moist and dry colors should be recorded. Soil texture Soil texture is the analysis and classification of the particle size distribution in soil. The relative amounts of sand, silt, and clay particles determine a soil's texture, which affects the appearance, feel and chemical properties of the soil. Field methods To estimate by hand in the field, soil scientists take a handful of sifted soil and moisten it with water until it holds together. The soil is then rolled into a ball nearing 1-2 inches in diameter and squeezed between the thumb and side of the index finger. Ribbons should be made as long as possible until it naturally breaks under its own weight. Longer ribbons indicate a higher clay percentage. The relative smoothness or grittiness indicates the sand percentage, and with practice, this technique can provide accurate textural class determinations. Lab methods An experienced soil scientist can determine soil texture in the field with decent accuracy, as described above. However, not all soils lend themselves to accurate field determinations of soil texture due to the presence of other particles that interfere with measuring the concentration of sand, silt and clay. The mineral texture can be obfuscated by high soil organic matter, iron oxides, amorphous or short-range-order aluminosilicates, and carbonates. In order to precisely determine the amount of clay, sand and silt in a soil, it must be taken to a laboratory for analysis. A strategy known as particle size analysis (PSA) is performed, beginning with the pretreatment of the soil in order to remove all other particles such as organic matter that may interfere with the classification. Pretreatment must leave the soil as strictly sand, silt and clay particles. Pretreatment may consist of processes such as the sieving of the soil to remove larger particles, thus allowing the soil to be dispersed properly. Hydrometer tests may then be used to calculate the amounts of sand, silt and clay present. This consists of mixing the pretreated soil with water and then allowing the mixture to settle, making note of the hydrometer reading. Sand particles are the largest, and thus will settle the quickest, followed by the silt particles, and lastly the clay particles. The sections are then dried and weighed. The three sections should add up to 100% in order for the test to be considered successful. Laser diffraction analysis can also be used as alternative to the sieving and hydrometer methods. From here, the soil can be classified using a soil texture triangle, which labels the type of soil based on the percentages of each particle in the sample. Structure Soil particles naturally aggregate together into larger units or shapes referred to as "peds". Peds have planes of weakness between them are generally identified by probing exposed soil profiles with a knife to pry out and gently break apart volumes of soil. Morphological descriptions of soil structure contain assessments of shape, size, and grade. Structure shapes include granular, platy, blocky, prismatic, columnar, and others, including the "structureless" shapes of massive and single-grained. 
Size is classified as one of six categories ranging from "very fine" to "extremely coarse", with different size limits for the various shapes and measurements taken on the smallest ped dimension. Grade indicates the distinctness of peds, or how easily distinguishable they are from each other, and is described with the classes "weak", "moderate", and "strong". Structure is often best evaluated while the soil is relatively dry, as peds may swell with moisture, press together and reduce the definition between each ped. Porosity Porosity of topsoil is a measure of the pore space in soil, which typically decreases as grain size increases. This is due to soil aggregate formation in finer textured surface soils when subject to soil biological processes. Aggregation involves particulate adhesion and higher resistance to compaction. Porosity of a soil is a function of the soil's bulk density, which is based on the composition of the soil. Sandy soils typically have higher bulk densities and lower porosity than silty or clayey soils. This is because finer grained particles have a larger amount of pore space than coarser grained particles. Ideal bulk densities that allow root growth, and the higher values that restrict it, have been tabulated for the three main texture classifications. The porosity of a soil is an important factor that determines the amount of water a soil can hold, how much air it can hold, and subsequently how well plant roots can grow within the soil. Soil porosity is complex. Traditional models regard porosity as continuous. This fails to account for anomalous features and produces only approximate results. Furthermore, it cannot help model the influence of environmental factors which affect pore geometry. A number of more complex models have been proposed, including fractals, bubble theory, cracking theory, Boolean grain process, packed sphere, and numerous other models. Micromorphology Soil micromorphology refers to the description, measurement, and interpretation of soil features that are too small to be observed by the unassisted eye. While micromorphological descriptions may begin in the field with the use of a 10x hand lens, much more can be described using thin sections made of the soil with the aid of a petrographic polarizing light microscope. The soil can be impregnated with an epoxy resin, but more commonly with a polyester resin (Crystic 17449), and sliced and ground to 0.03 millimeter thickness and examined by passing light through the thin soil plasma. Micromorphology in archaeology Soil micromorphology has been a recognized technique in soil science for some 50 years, and experience from pedogenic and paleosol studies first permitted its use in the investigation of archaeologically buried soils. More recently, the science has expanded to encompass the characterization of all archaeological soils and sediments and has been successful in providing unique cultural and paleoenvironmental information from a whole range of archaeological sites. Soil formation Form Soils are formed from their respective parent material, which may or may not match the composition of the bedrock that they lie on top of. Through biological and chemical processes, as well as natural processes such as wind and water erosion, parent material can be broken down. The chemical and physical properties of this parent material are reflected in the qualities of the resulting soil. Climate, topography, and biological organisms all have an impact on the formation of soils in various geographic locations.
Topography A steep landform experiences more runoff than a flat landform. Increased runoff can inhibit soil formation, as the upper layers continue to be stripped off because they are not developed enough to support root growth. Root growth can help prevent erosion, as the roots act to keep the soil in place. This phenomenon leads to soils on slopes being thinner and less developed than soils found on plains or plateaus. Climate Varying levels of precipitation and wind have impacts on the formation of soils. Increased precipitation can lead to increased levels of runoff as previously described, but regular amounts of precipitation can encourage plant root growth, which works to stop runoff. The growth of vegetation in a certain area can also work to increase the depth and nutrient quality of a topsoil, as decomposition of organic matter works to strengthen organic soil horizons. Biological processes Varying levels of microbial activity can have a range of impacts on soil formation. Most often, biological processes work to disrupt existing soil formation, which leads to chemical translocation. The movement of these chemicals can make nutrients available, which can increase plant root growth.
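The relationship between bulk density and porosity described above can be made concrete. The following minimal Python sketch (not part of the article) uses the standard relation porosity = 1 − ρ_bulk/ρ_particle with an assumed particle density of 2.65 g/cm³, typical for mineral soils; the example bulk densities are illustrative values only, not data from the tabulated reference values mentioned above.

```python
def porosity(bulk_density: float, particle_density: float = 2.65) -> float:
    """Total porosity (fraction of pore space) from bulk density, using the
    standard relation porosity = 1 - (bulk density / particle density).
    Densities are in g/cm^3; 2.65 g/cm^3 is a typical mineral particle density."""
    return 1.0 - bulk_density / particle_density

# Illustrative (not measured) bulk densities for three broad texture classes.
examples = {"sand": 1.60, "silt loam": 1.40, "clay": 1.10}
for texture, rho_b in examples.items():
    print(f"{texture:9s}  bulk density {rho_b:.2f} g/cm^3  ->  porosity {porosity(rho_b):.0%}")
# Consistent with the text: the sandier, denser soil has the lowest porosity.
```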
Physical sciences
Soil science
Earth science
4313746
https://en.wikipedia.org/wiki/Born%20rule
Born rule
The Born rule is a postulate of quantum mechanics that gives the probability that a measurement of a quantum system will yield a given result. In one commonly used application, it states that the probability density for finding a particle at a given position is proportional to the square of the amplitude of the system's wavefunction at that position. It was formulated and published by German physicist Max Born in July, 1926. Details The Born rule states that an observable, measured in a system with normalized wave function (see Bra–ket notation), corresponds to a self-adjoint operator whose spectrum is discrete if: the measured result will be one of the eigenvalues of , and the probability of measuring a given eigenvalue will equal , where is the projection onto the eigenspace of corresponding to . (In the case where the eigenspace of corresponding to is one-dimensional and spanned by the normalized eigenvector , is equal to , so the probability is equal to . Since the complex number is known as the probability amplitude that the state vector assigns to the eigenvector , it is common to describe the Born rule as saying that probability is equal to the amplitude-squared (really the amplitude times its own complex conjugate). Equivalently, the probability can be written as .) In the case where the spectrum of is not wholly discrete, the spectral theorem proves the existence of a certain projection-valued measure (PVM) , the spectral measure of . In this case: the probability that the result of the measurement lies in a measurable set is given by . For example, a single structureless particle can be described by a wave function that depends upon position coordinates and a time coordinate . The Born rule implies that the probability density function for the result of a measurement of the particle's position at time is: The Born rule can also be employed to calculate probabilities (for measurements with discrete sets of outcomes) or probability densities (for continuous-valued measurements) for other observables, like momentum, energy, and angular momentum. In some applications, this treatment of the Born rule is generalized using positive-operator-valued measures (POVM). A POVM is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalization of von Neumann measurements and, correspondingly, quantum measurements described by POVMs are a generalization of quantum measurements described by self-adjoint observables. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see purification of quantum state); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics and can also be used in quantum field theory. They are extensively used in the field of quantum information. In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices on a Hilbert space that sum to the identity matrix,: The POVM element is associated with the measurement outcome , such that the probability of obtaining it when making a measurement on the quantum state is given by: where is the trace operator. This is the POVM version of the Born rule. 
When the quantum state being measured is a pure state this formula reduces to: The Born rule, together with the unitarity of the time evolution operator (or, equivalently, the Hamiltonian being Hermitian), implies the unitarity of the theory: a wave function that is time-evolved by a unitary operator will remain properly normalized. (In the more general case where one considers the time evolution of a density matrix, proper normalization is ensured by requiring that the time evolution is a trace-preserving, completely positive map.) History The Born rule was formulated by Born in a 1926 paper. In this paper, Born solves the Schrödinger equation for a scattering problem and, inspired by Albert Einstein and Einstein's probabilistic rule for the photoelectric effect, concludes, in a footnote, that the Born rule gives the only possible interpretation of the solution. (The main body of the article says that the amplitude "gives the probability" [bestimmt die Wahrscheinlichkeit], while the footnote added in proof says that the probability is proportional to the square of its magnitude.) In 1954, together with Walther Bothe, Born was awarded the Nobel Prize in Physics for this and other work. John von Neumann discussed the application of spectral theory to Born's rule in his 1932 book. Derivation from more basic principles Gleason's theorem shows that the Born rule can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, prompted by a question posed by George W. Mackey. This theorem was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Several other researchers have also tried to derive the Born rule from more basic principles. A number of derivations have been proposed in the context of the many-worlds interpretation. These include the decision-theory approach pioneered by David Deutsch and later developed by Hilary Greaves and David Wallace; and an "envariance" approach by Wojciech H. Zurek. These proofs have, however, been criticized as circular. In 2018, an approach based on self-locating uncertainty was suggested by Charles Sebens and Sean M. Carroll; this has also been criticized. Simon Saunders, in 2021, produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule. In 2019, Lluís Masanes, Thomas Galley, and Markus Müller proposed a derivation based on postulates including the possibility of state estimation. It has also been claimed that pilot-wave theory can be used to statistically derive the Born rule, though this remains controversial. Within the QBist interpretation of quantum theory, the Born rule is seen as an extension of the normative principle of coherence, which ensures self-consistency of probability assessments across a whole set of such assessments. It can be shown that an agent who thinks they are gambling on the outcomes of measurements on a sufficiently quantum-like system but refuses to use the Born rule when placing their bets is vulnerable to a Dutch book.
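A minimal numerical sketch of the rule for a discrete spectrum, written in Python with NumPy (not part of the article): the probability of obtaining eigenvalue λ_i is |⟨v_i|ψ⟩|², and the same numbers are recovered from the POVM form Tr(ρF_i) when the POVM elements are the projectors onto the eigenvectors. The observable and state below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative observable: the Pauli-X operator on a single qubit.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Illustrative normalized state |psi>.
psi = np.array([0.6, 0.8], dtype=complex)

# Discrete-spectrum Born rule: P(lambda_i) = |<v_i|psi>|^2.
eigvals, eigvecs = np.linalg.eigh(X)
for lam, v in zip(eigvals, eigvecs.T):
    print(f"P(outcome {lam:+.0f}) = {abs(np.vdot(v, psi)) ** 2:.2f}")
# The probabilities sum to 1 because |psi> is normalized.

# POVM form: p(i) = Tr(rho F_i), with rho the density matrix of the same pure
# state and F_i the projectors onto the eigenvectors (a projective measurement
# is a special case of a POVM).
rho = np.outer(psi, psi.conj())
F = [np.outer(v, v.conj()) for v in eigvecs.T]
print("POVM elements sum to the identity:", np.allclose(sum(F), np.eye(2)))
print("Tr(rho F_i):", [round(float(np.trace(rho @ Fi).real), 2) for Fi in F])
```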
Physical sciences
Quantum mechanics
Physics
4315137
https://en.wikipedia.org/wiki/Kiwa%20hirsuta
Kiwa hirsuta
Kiwa hirsuta is a crustacean discovered in 2005 in the South Pacific Ocean. This decapod, which is approximately long, is notable for the quantity of silky blond setae (resembling fur) covering its pereiopods (thoracic legs, including claws). Its discoverers dubbed it the "yeti lobster" or "yeti crab". Identification K. hirsuta was discovered in March 2005 by a group organized by Robert Vrijenhoek of the Monterey Bay Aquarium Research Institute in Monterey, California, Michel Segonzac of the Ifremer and a Census of Marine Life scientist using the submarine DSV Alvin, operating from RV Atlantis. The discovery was announced on 7 March 2006. It was found along the Pacific-Antarctic Ridge, south of Easter Island at a depth of , living on hydrothermal vents. Based on both morphology and molecular data, the organism was deemed to form a new biological family (Kiwaidae); a second species, Kiwa puravida, was discovered in 2006 and described in 2011. Yeti crabs live around deep-sea hydrothermal vents, whose warm water makes up the environment in which the crabs live. The crabs are thought to use their hairy arms, and the bacteria growing on them, to collect and detoxify minerals released from the vents. Characteristics The animal has strongly reduced eyes that lack pigment, and is thought to be blind. The "hairy" pincers contain filamentous bacteria, which the creature may use to detoxify poisonous minerals from the water emitted by the hydrothermal vents where it lives. This process is known as chemosynthesis. Lipid and isotope analyses provide evidence that epibiotic bacteria are the crab's main food source, and K. puravida has highly modified setae (hairs) on its 3rd maxilliped (a mouth appendage) which it uses to harvest these bacteria. Yeti crabs receive most of their essential nutrients from chemosynthetic episymbiotic bacteria which grow on hairlike setae. These chemosynthetic episymbiotic bacteria can be found growing on numerous areas of the crabs' ventral surface as well as their appendages. The ε- and γ-proteobacteria that this methane-seep species farms are closely related to hydrothermal-vent decapod epibionts. Alternatively, it may be a carnivore, although it is generally thought to feed on bacteria. Although it is often referred to as the "furry lobster" outside the scientific literature, Kiwa hirsuta is a squat lobster, more closely related to crabs and hermit crabs than true lobsters. The term "furry lobster" is more commonly used for the family Synaxidae. The "yeti crab" was placed in the newly described family Kiwaidae, which is closely associated with two classes of bacteria, the Epsilonproteobacteria and Gammaproteobacteria. Etymology Macpherson et al. named the genus Kiwa after "the god(dess) of the shellfish in the Polynesian mythology." The specific epithet hirsuta is Latin for "hairy." Reproduction and life cycle Kiwa hirsuta exhibits a unique reproductive strategy. Unlike many other crustaceans, the females of this species carry their eggs in a specialized brooding structure on their abdomen. The eggs are attached to setae, and the female cares for them until they hatch into larvae. This method of parental care is distinctive among deep-sea organisms. Genomic studies Genomic studies of Kiwa hirsuta have provided insights into its evolutionary history and adaptation to the extreme environment of hydrothermal vents.
The analysis of its genome may offer clues about the genetic basis of its unique characteristics, such as the adaptation to low-light conditions and the utilization of chemosynthetic bacteria for nutrition. Population dynamics and conservation Studies on the population dynamics of Kiwa hirsuta are ongoing to understand factors such as population size, growth rates, and potential threats to its habitat. Conservation efforts are also being explored to mitigate the impact of deep-sea mining and other human activities on the hydrothermal vent ecosystems where these crabs reside. Behavioral observations Observations of Kiwa hirsuta in its natural habitat have provided valuable information about its behavior. For example, researchers have documented interactions between individuals, including potential mating behaviors and social dynamics within populations living around hydrothermal vents.
Biology and health sciences
Crabs and hermit crabs
Animals
4315947
https://en.wikipedia.org/wiki/Honduran%20white%20bat
Honduran white bat
The Honduran white bat (Ectophylla alba), also called the Caribbean white tent-making bat, is a species of bat in the family Phyllostomatidae. It is the only member of the genus Ectophylla. The genus and the species were both scientifically described for the first time in 1892. It has distinctive, entirely white fur, which is only found in six of the roughly 1,300 known species of bat. It constructs "tents" out of understory plant leaves by strategically cutting the leaf ribs with its teeth; it roosts in these tents during the day. It is a specialist frugivore, consuming almost exclusively the fruits of one species of fig. Females can likely become pregnant twice per year, giving birth to one offspring at a time. It is found in Honduras, Nicaragua, Costa Rica and western Panama at elevations from sea level to . Due to habitat loss, it is evaluated as near-threatened by the IUCN. Its bright yellow ears, nose-leaf, and lips are a result of carotenoid deposition; the mechanism of this deposition is being researched as a way to understand and combat macular degeneration in humans. Taxonomy and phylogeny The Honduran white bat was described as a new species, Ectophylla alba, in 1892 by American zoologist Harrison Allen. The holotype that Allen used to describe the new genus and species was collected by Charles Haskins Townsend near the Coco River in Honduras in 1887. It belongs to the leaf-nosed bat family, Phyllostomidae. Within Phyllostomidae, it is in the subfamily Stenodermatinae. MacConnell's bat was once included in the genus Ectophylla, but it is now monotypic within Mesophylla. Despite no longer being classified in the same genus, MacConnell's bat and the Honduran white bat are sister taxa—they are each other's closest relative. The Honduran white bat is the only member of Ectophylla, meaning it is a monotypic genus. The genus name "Ectophylla" is from Ancient Greek "ektós" meaning "out" and "phúllon" meaning "leaf", referring to its nose-leaf. Its species name "alba" comes from Latin "albus" meaning "white". Description Like both its common name and specific epithet suggest, the Honduran white bat has bright white fur. The tips of individual hairs are gray, with the grayish coloration more pronounced towards the bat's posterior. This species, along with four Diclidurus species and the ghost bat (Macroderma gigas), is among the only currently known species of bat—more than 1,300 species have been described—where the pelage is all white. Its large nose-leaf easily distinguishes it from the northern ghost bat (Diclidurus albus), however, which is the only white bat with which it is sympatric (having an overlapping geographic range). Its wing membranes are black. Its ears, tragi (the cartilaginous projections in front of the ear openings), nose-leaf, and lips are a bright, yellowish orange. Its yellow-orange pigmentation is due to large concentrations of carotenoids, particularly xanthophyll. It is the first mammal known to have enough carotenoids in its skin to generate conspicuous color. A 2019 study found that while the brightness of the yellow pigment of the ears did not vary significantly between adults and juveniles, the yellow chroma (colorfulness relative to brightness) of the ears did differ with age. Adult bats had higher yellow chroma in their ears than did juveniles. The yellow of the nose-leaf, however, had more variation. Adult males' nose-leaves are a brighter yellow than those of adult females; juveniles of each sex did not differ in nose-leaf brightness. 
Adult males also had significantly brighter nose-leaves than juvenile males. Similarly to the ears, the yellow chroma of the nose-leaf was greater in adults than in juveniles, though not different between the sexes. The authors suggested that the color difference of male and female nose-leaves is indicative of sexual dichromatism, meaning that females may select for males with brighter nose-leaves. This conclusion was supported by the trend that males with brighter yellow nose-leaves tended to have better body conditions. Females could thus use nose-leaf color as an honest signal of male fitness when selecting a mate. Another 2019 study found that the distinctive yellow pigment may have been selected for as a result of the bat's tent-roosting. Reconstructions of ancestral states showed that the yellow coloration coevolved with tent-roosting. As sunlight passes through the green leaves of the tents, it results in a yellowish light; any bats with yellowish coloration would have had more effective camouflage, and thus be more likely to survive, reproduce, and pass on these genes to their offspring. It is a small species, with a head and body length of , a forearm length of , and an ear length of . Individuals weigh only . The bat's nose-leaf is erect, its tail is absent, and its ears large and rounded. The inner margin of the tragus is convex, while the outer margin is coarsely serrated with four or five small lobes. The nose-leaf also has a serrated margin. It has eight to ten small "warts" under its mouth. Its dental formula is , for a total of 28 teeth. Its skull is similar in appearance to other species in its subfamily, with the exception of its very deep basioccipital pits. The bat overall resembles a small, white Platyrrhinus. Biology and ecology Tent-making The Honduran white bat is one of approximately 22 known species of bats that roost within leaf "tents". The Honduran white bat cuts the side veins extending out from the midrib of the large leaves of the Heliconia plant causing them to fold down to form a tent. Tents are likely constructed by multiple individuals; females have been observed constructing tents, but it is likely that males do so as well. New tents are constructed throughout the year, as modifying the leaves into tents causes the leaves to die. Once modified into a tent, a leaf lives approximately 7.5 weeks, compared to 61 weeks in an unmodified leaf. Several species of Heliconia are used as roosts, including H. imbricata, H. latispatha, H. pogonantha, H. tortuosa, and H. sarapiquensis. Rarely, it has been documented using Calathea and Ischnosiphon inflatus plants as roosts. In selecting leaves to turn into tents, it appears that the age and size of the leaf is more important than the species of plant. Preferred leaves are long and less than 30 days old. Younger leaves may be preferred because they are easier to bite through and shape than older leaves. It also prefers leaves that are less than above the forest floor. Preferred leaves are in areas of low understory vegetation density, but high canopy vegetation density. Heliconia density is lower surrounding chosen leaves than would be expected if the bats selected leaves randomly. Features such as canopy density may help the tent maintain a consistent microclimate. Tents are usually , with little fluctuation. High canopy density could also protect its tent from disturbance from wind and rain. 
Because tent construction takes up to several weeks' worth of time from several individuals, choosing more sheltered tents could prolong the life of a tent and protect the bats' investment. Low understory vegetation density is thought to be beneficial by providing an uncluttered airspace for the bats as they exit and enter their tents. It clings to the roof of its tent in small colonies of 1-15 individuals. The tent protects it from rain and predators. Rather than roosting in a single tent consistently, the Honduran white bat has a network of tents scattered across the forest; it alternates among these tents for roosting. Single tents have been consistently occupied for up to 45 days. Although their tents are typically low to the ground, sunlight filters through the leaf which gives their white fur a greenish cast. This almost completely conceals them if they remain still. Alternately, it has been proposed that its white fur gives it the appearance of a wasp nest, which would be avoided by predators. It likely has several predators, including capuchin monkeys, Central American squirrel monkeys, and snakes. Diet and foraging The Honduran white bat is frugivorous. Along with the little white-shouldered bat, the Honduran white bat is one of the two smallest species of frugivorous bat in the world. It specializes on a species of fig, Ficus colubrinae. However, other species of figs are occasionally consumed, such as Ficus schippii. The Honduran white bat prefers F. colubrinae trees that are "high-quality," or produce many fruits at once. It also chooses fig trees that are the closest to its day roosts. F. colubrinae trees have asynchronous fruit production, so its fruits are available as a food source year-round. Because it is highly specialized on the one species of fig, it has larger foraging movements than observed in frugivorous bats that are less specialized. Individuals have an average home range of . It is unclear how it manages to survive on such a narrow diet, as it is predicted it should have to consume supplemental food sources. Reproduction Little is known about the Honduran white bat's reproductive behaviors. It has been proposed that individuals give birth in April and September, and that estrus occurs post-parturition. Pregnant females have been documented in February, March, June, July, and August in Costa Rica, with lactating females documented in March and April. Females have synchronized births, with all births in a colony occurring within the same week. Litter size is one offspring, called a pup. During lactation, mothers will return to their roosts up to six times a night to feed their pups. Pups fledge, or become capable of flight, at 3–4 weeks old. Range and habitat The Honduran white bat is found in several countries in Central America, including Costa Rica, Honduras, Nicaragua, and Panama. Unusually, it is one of four species of leaf-nosed bat endemic to Central America; most are found in South America. Its range encompasses a range of elevations from above sea level. It prefers wet evergreen forests and secondary forests, which can accommodate its specific roosting and dietary requirements. Conservation Despite being a conspicuously colored bat, over sixty years passed between the discovery of the first Honduran white bat in 1898 and the next discovery in 1963. It is currently evaluated as near-threatened by the IUCN. It meets the criteria for this designation because its population is in a "significant decline." 
The decline does not exceed a 30% population loss over the past three generations (approximately 18 years in this species), the threshold that would qualify it for the vulnerable designation; however, it is on the verge of meeting that threshold. Reasons for its population decline include conversion of its habitat to farmland as well as an expanding human population. It is particularly susceptible to habitat loss because it is highly specialized on a single species of fig for its food source. Human health applications In 2016, it was discovered that the Honduran white bat uses carotenoids to produce the yellow-orange coloration of its ears, nose-leaf, and lips. It was the first mammalian species to be documented with high enough concentrations of carotenoids to produce visible skin coloration. It derives the pigments from its diet, particularly the fruits of the Ficus colubrinae tree. Lutein, the carotenoid responsible for its yellow pigmentation, is present in its skin in its esterified form, while it occurs in its free form in the liver. This suggests that Honduran white bats possess a physiological mechanism to convert free lutein to esterified lutein, which humans are unable to do. Lutein plays an important role in the eyes by preventing damage to the retina; it is hypothesized that if the free lutein in human eyes were esterified, it would be more effective at preventing damage and preserving vision. Understanding the process by which Honduran white bats convert free lutein to esterified lutein could improve understanding of how the stability and bioavailability of carotenoids benefit human health. In particular, the species may have research applications for understanding and treating macular degeneration in humans.
Biology and health sciences
Bats
Animals
9716092
https://en.wikipedia.org/wiki/Codeine
Codeine
Codeine is an opiate and prodrug of morphine mainly used to treat pain, coughing, and diarrhea. It is also commonly used as a recreational drug. It is found naturally in the sap of the opium poppy, Papaver somniferum. It is typically used to treat mild to moderate degrees of pain. Greater benefit may occur when combined with paracetamol (acetaminophen) or a nonsteroidal anti-inflammatory drug (NSAID) such as aspirin or ibuprofen. Evidence does not support its use for acute cough suppression in children. In Europe, it is not recommended as a cough medicine for those under 12 years of age. It is generally taken by mouth. It typically starts working after half an hour, with maximum effect at two hours. Its effects last for about four to six hours. Codeine exhibits abuse potential similar to other opioid medications, including a risk of addiction and overdose. Common side effects include vomiting, constipation, itchiness, lightheadedness, and drowsiness. Serious side effects may include breathing difficulties and addiction. Whether its use in pregnancy is safe is unclear. Care should be used during breastfeeding, as it may result in opiate toxicity in the baby. Its use as of 2016 is not recommended in children. Codeine works after being broken down by the liver into morphine; how quickly this occurs depends on a person's genetics. Codeine was discovered in 1832 by Pierre Jean Robiquet. In 2013, about 361,000 kg (795,000 lb) of codeine were produced, while 249,000 kg (549,000 lb) were used, which made it the most commonly taken opiate. It is on the World Health Organization's List of Essential Medicines. Codeine occurs naturally and makes up about 2% of opium. Medical uses Pain Codeine is used to treat mild to moderate pain. It is commonly used to treat post-surgical dental pain. Weak evidence indicates that it is useful in cancer pain, but it may have increased adverse effects, especially constipation, compared to other opioids. The American Academy of Pediatrics does not recommend its use in children due to side effects. The Food and Drug Administration (FDA) lists age under 12 years old as a contraindication to use. Cough Codeine is used to relieve coughing. Evidence does not support its use for acute cough suppression in children. In Europe, it is not recommended as a cough medicine for those under 12 years of age. Some tentative evidence shows it can reduce a chronic cough in adults. Diarrhea It is used to treat diarrhea and diarrhea-predominant irritable bowel syndrome, although loperamide (which is available without a prescription for milder diarrhea), diphenoxylate, paregoric, or even laudanum are more frequently used to treat severe diarrhea. Formulations Codeine is marketed as both a single-ingredient drug and in combination preparations with paracetamol (as co-codamol: e.g., brands Paracod, Panadeine, and the Tylenol-with-codeine series, including Tylenol Nos. 1, 2, 3, and 4); with aspirin (as co-codaprin); or with ibuprofen (as Nurofen Plus). These combinations provide greater pain relief than either agent alone (drug synergy). Codeine is also commonly marketed in products combining it with other painkillers or muscle relaxants, as well as with phenacetin (Emprazil with Codeine No. 1, 2, 3, 4, and 5), naproxen, indomethacin, diclofenac, and others, and in more complex mixtures such as aspirin + paracetamol + codeine ± caffeine ± antihistamines and other agents. 
Codeine-only products can be obtained with a prescription as a time-release tablet. Codeine is also marketed in cough syrups with zero to a half-dozen other active ingredients, and a linctus (e.g., Paveral) for all of the uses for which codeine is indicated. Injectable codeine is available for subcutaneous or intramuscular injection only; intravenous injection is contraindicated, as this can result in nonimmune mast-cell degranulation and resulting anaphylactoid reaction. Codeine suppositories are also marketed in some countries. Side effects Common adverse effects associated with the use of codeine include drowsiness and constipation. Less common are itching, nausea, vomiting, dry mouth, miosis, orthostatic hypotension, urinary retention, euphoria, and dysphoria. Rare adverse effects include anaphylaxis, seizure, acute pancreatitis, and respiratory depression. As with all opiates, long-term effects can vary, but can include diminished libido, apathy, and memory loss. Some people may have allergic reactions to codeine, such as the swelling of the skin and rashes. Tolerance to many of the effects of codeine, including its therapeutic effects, develops with prolonged use. This occurs at different rates for different effects, with tolerance to the constipation-inducing effects developing particularly slowly for instance. As with other opioids, a potentially serious adverse drug reaction is respiratory depression. This depression is dose-related and is a mechanism for the potentially fatal consequences of overdose. As codeine is metabolized to morphine, morphine can be passed through breast milk in potentially lethal amounts, fatally depressing the respiration of a breastfed baby. In August 2012, the United States Food and Drug Administration issued a warning about deaths in pediatric patients less than 6 years old after ingesting "normal" doses of paracetamol with codeine after tonsillectomy; this warning was upgraded to a black box warning in February 2013. Some patients are very effective converters of codeine to its active form, morphine, resulting in lethal blood levels. The FDA is presently recommending very cautious use of codeine in young tonsillectomy patients; the drug should be used in the lowest amount that can control the pain, "as needed" and not "around the clock", and immediate medical attention is needed if the user responds negatively. Withdrawal and dependence As with other opiates, chronic use of codeine can cause physical dependence which can lead to severe withdrawal symptoms if a person suddenly stops the medication. Withdrawal symptoms include drug craving, runny nose, yawning, sweating, insomnia, weakness, stomach cramps, nausea, vomiting, diarrhea, muscle spasms, chills, irritability, and pain. These side effects also occur in acetaminophen/aspirin combinations, though to a lesser extent. To minimize withdrawal symptoms, long-term users should gradually reduce their codeine medication under the supervision of a healthcare professional. Also, no evidence indicates that CYP2D6 inhibition is useful in treating codeine dependence, though the metabolism of codeine to morphine (and hence further metabolism to glucuronide morphine conjugates) does have an effect on the abuse potential of codeine. However, CYP2D6 has been implicated in the toxicity and death of neonates when codeine is administered to lactating mothers, particularly those with increased enzyme activity ("ultra-rapid" metabolizers). 
In 2019, Ireland was said to be on the verge of a codeine addiction epidemic, according to a paper in the Irish Medical Journal. Under Irish law, codeine can be bought over the counter under the supervision of a pharmacist, but there is no mechanism to detect patients travelling to different pharmacies to purchase codeine. Pharmacology Pharmacodynamics Codeine is a nonsynthetic opioid. It is a selective agonist of the μ-opioid receptor (MOR). Codeine itself has relatively weak affinity for the MOR. Instead of acting directly on the MOR, codeine functions as a prodrug of its major active metabolites morphine and codeine-6-glucuronide, which are far more potent MOR agonists in comparison. Codeine has been found as an endogenous compound, along with morphine, in the brains of nonhuman primates with depolarized neurons, indicating that codeine may function as a neurotransmitter or neuromodulator in the central nervous system. Like morphine, codeine causes TLR4 signaling, which produces allodynia and hyperalgesia. It does not need to be converted to morphine to increase pain sensitivity. Mechanism of action Codeine is an opiate and an agonist of the mu opioid receptor (MOR). It acts on the central nervous system to have an analgesic effect. It is metabolised in the liver to produce morphine, which is ten times more potent at the mu receptor. Opioid receptors are G protein-coupled receptors that positively and negatively regulate synaptic transmission through downstream signalling. Binding of codeine or morphine to the mu-opioid receptor results in hyperpolarization of the neuron, leading to the inhibition of the release of nociceptive neurotransmitters, causing an analgesic effect and increased pain tolerance due to reduced neuronal excitability. Pharmacokinetics The conversion of codeine to morphine occurs in the liver and is catalyzed by the cytochrome P450 enzyme CYP2D6. CYP3A4 produces norcodeine, and UGT2B7 conjugates codeine, norcodeine, and morphine to the corresponding 3- and 6-glucuronides. Srinivasan, Wielbo, and Tebbett speculate that codeine-6-glucuronide is responsible for a large percentage of the analgesia of codeine, and thus even poor metabolizers of codeine, who convert little of the drug to morphine, should experience some analgesia. Many of the adverse effects will still be experienced in poor metabolizers. Conversely, between 0.5% and 2% of the population are "ultrarapid metabolizers"; multiple copies of the gene for CYP2D6 produce high levels of the enzyme, and these individuals metabolize drugs through that pathway more quickly than others. Some medications are CYP2D6 inhibitors and reduce or even completely block the conversion of codeine to morphine. The best-known of these are two of the selective serotonin reuptake inhibitors, paroxetine (Paxil) and fluoxetine (Prozac), as well as the antihistamine diphenhydramine (Benadryl) and the antidepressant bupropion (Wellbutrin, also known as Zyban). Other drugs, such as rifampicin and dexamethasone, induce CYP450 isozymes and thus increase the conversion rate. CYP2D6 converts codeine into morphine, which then undergoes glucuronidation. Life-threatening intoxication, including respiratory depression requiring intubation, can develop over a matter of days in patients who have multiple functional alleles of CYP2D6, resulting in ultrarapid metabolism of opioids such as codeine into morphine. 
Studies on codeine's analgesic effect are consistent with the idea that metabolism by CYP2D6 to morphine is important, but some studies show no major differences between those who are poor metabolizers and extensive metabolizers. Evidence supporting the hypothesis that ultrarapid metabolizers may get greater analgesia from codeine due to increased morphine formation is limited to case reports. Due to the increased metabolism of codeine to morphine, ultrarapid metabolizers (those possessing more than two functional copies of the CYP2D6 allele) are at increased risk of adverse drug effects related to morphine toxicity. Guidelines released by the Clinical Pharmacogenetics Implementation Consortium (CPIC) advise against administering codeine to ultrarapid metabolizers, where this genetic information is available. The CPIC also suggests that codeine use be avoided in poor metabolizers, due to its lack of efficacy in this group. Codeine and its salts are readily absorbed from the gastrointestinal tract, and ingestion of codeine phosphate produces peak plasma concentrations in about one hour. Plasma half-life is between 3 and 4 hours, and the oral/intramuscular analgesic potency ratio is approximately 1:1.5. The most common conversion ratio, given on equianalgesia charts used in the United States, Canada, the UK, the Republic of Ireland, the European Union, Russia, and elsewhere, is 130 mg IM to 200 mg PO, both of which are equivalent to 10 mg of morphine sulphate IV and 60 mg of morphine sulphate PO. The salt-to-freebase ratios of the salts of both drugs in use are roughly equivalent and do not generally make a clinical difference. Codeine is metabolised by O- and N-demethylation in the liver to morphine and norcodeine. Hydrocodone is also a metabolite of codeine in humans. Codeine and its metabolites are mostly removed from the body by the kidneys, primarily as conjugates with glucuronic acid. The active metabolites of codeine, notably morphine, exert their effects by binding to and activating the μ-opioid receptor. In people who extensively metabolize codeine, a 30 mg dose could yield up to 4 mg of morphine. Chemistry While codeine can be extracted directly from opium, its original source, most codeine is synthesized from the much more abundant morphine through O-methylation, a process first completed in the late 20th century by Robert C. Corcoran and Junning Ma. Relation to other opioids Codeine has been used in the past as the starting material and prototype of a large class of mainly mild to moderately strong opioids, such as hydrocodone (1920 in Germany), oxycodone (1916 in Germany), dihydrocodeine (1908 in Germany), and its derivatives such as nicocodeine (1956 in Austria). However, these opioids are no longer synthesized from codeine and are usually synthesized from other opium alkaloids, specifically thebaine. Other series of codeine derivatives include isocodeine and its derivatives, which were developed in Germany starting around 1920. In general, the various classes of morphine derivatives such as ketones, semisynthetics like dihydromorphine, halogeno-morphides, esters, ethers, and others have codeine, dihydrocodeine, and isocodeine analogues. The codeine ester acetylcodeine is a common active impurity in street heroin, as some codeine tends to dissolve with the morphine when it is extracted from opium in underground heroin and morphine base labs. As an analgesic, codeine compares weakly to other opiates. 
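The equianalgesia figures quoted above (130 mg of codeine IM or 200 mg PO being charted as roughly equivalent to 10 mg of intravenous or 60 mg of oral morphine sulphate) reduce to simple ratios. The short Python sketch below is purely illustrative: it hard-codes only the chart values stated in this article, the function name is hypothetical, and it is not a clinical dosing tool.

```python
# Illustrative only: equianalgesic chart values quoted in the text above.
# Real dose conversions depend on patient factors and clinical judgement.

EQUIANALGESIC_MG = {
    ("codeine", "IM"): 130.0,
    ("codeine", "PO"): 200.0,
    ("morphine sulphate", "IV"): 10.0,
    ("morphine sulphate", "PO"): 60.0,
}

def convert_dose(dose_mg, source, target):
    """Scale a dose from one (drug, route) pair to another, assuming the
    chart ratios above hold and scale linearly."""
    return dose_mg * EQUIANALGESIC_MG[target] / EQUIANALGESIC_MG[source]

# Example: by these chart ratios, 30 mg of oral codeine corresponds to
# 30 * 60 / 200 = 9 mg of oral morphine sulphate.
print(convert_dose(30, ("codeine", "PO"), ("morphine sulphate", "PO")))
```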
Related to codeine in other ways are codoxime, thebacon, codeine-N-oxide (genocodeine), related to the nitrogen morphine derivatives as is codeine methobromide, and heterocodeine, which is a drug six times stronger than morphine and 72 times stronger than codeine due to a small re-arrangement of the molecule, namely moving the methyl group from the 3 to the 6 position on the morphine carbon skeleton. Drugs bearing resemblance to codeine in effects due to close structural relationship are variations on the methyl groups at the 3 position, including ethylmorphine, also known as codethyline (Dionine), and benzylmorphine (Peronine). While having no narcotic effects of its own, the important opioid precursor thebaine differs from codeine only slightly in structure. Pseudocodeine and some other similar alkaloids not currently used in medicine are found in trace amounts in opium as well. History Codeine, or 3-methylmorphine, is an alkaloid found in the opium poppy, Papaver somniferum var. album, a plant in the family Papaveraceae. Opium poppy has been cultivated and utilized throughout human history for a variety of medicinal (analgesic, anti-tussive and anti-diarrheal) and hypnotic properties linked to the diversity of its active components, which include morphine, codeine and papaverine. Codeine is found in concentrations of 1% to 3% in opium prepared by the latex method from unripe pods of Papaver somniferum. The name codeine is derived from the Ancient Greek kōdeia ("poppy head"). The relative proportion of codeine to morphine, the most common opium alkaloid at 4% to 23%, tends to be somewhat higher in the poppy straw method of preparing opium alkaloids. Until the beginning of the 19th century, raw opium was used in diverse preparations known as laudanum (see Thomas de Quincey's Confessions of an English Opium-Eater, 1821) and paregoric elixirs, several of which had been popular in England since the beginning of the 18th century; the original preparation seems to have been elaborated in Leiden, the Netherlands around 1715 by the chemist Jakob Le Mort; in 1721 the London Pharmacopoeia mentioned an Elixir Asthmaticum, replaced by the term Elixir Paregoricum ("pain soother") in 1746. The progressive isolation of opium's several active components opened the path to improved selectivity and safety of the opiates-based pharmacopeia. Morphine had already been isolated in Germany by Friedrich Sertürner in 1804. Codeine was first isolated in 1832 in France by Pierre Jean Robiquet, already famous for the discovery of alizarin, the most widespread red dye, while working on refined morphine extraction processes. Robiquet is also credited with discovering caffeine independently of Pelletier, Caventou, and Runge. Thomas Anderson determined the correct composition in 1853, but a chemical structure was proposed only in 1925 by J. M. Gulland and Robert Robinson. The first crystal structure would have to wait until 1954. Codeine and morphine, as well as opium, were used in an attempt to treat diabetes in the 1880s and thereafter, as recently as the 1950s. Numerous codeine salts have been prepared since the drug was discovered. The most commonly used are the hydrochloride (freebase conversion ratio 0.805, i.e. 10 mg of the hydrochloride salt is equivalent in effect to 8.05 mg of the freebase form), phosphate (0.736), sulphate (0.859), and citrate (0.842). 
Others include a salicylate NSAID, codeine salicylate (0.686), a bromide (codeine methylbromide, 0.759), and at least five codeine-based barbiturates, the phenylethylbarbiturate (0.56), cyclohexenylethylbarbiturate (0.559), cyclopentenylallylbarbiturate (0.561), (0.561), and diethylbarbiturate (0.619). The latter was introduced as Codeonal in 1912, indicated for pain with nervousness. Codeine methylbromide is also considered a separate drug for various purposes. Society and culture Codeine is the most widely used opiate in the world, and is one of the most commonly used drugs overall according to numerous reports by organizations including the World Health Organization and its League of Nations predecessor agency. Names It is often sold as a salt in the form of either codeine sulfate or codeine phosphate in the United States, United Kingdom, and Australia. Codeine hydrochloride is more common worldwide and the citrate, hydroiodide, hydrobromide, tartrate, and other salts are also seen. The chemical name for codeine is morphinan-6-ol, 7,8-didehydro-4,5-epoxy-3-methoxy-17-methyl-, (5α,6α)- Recreational use A heroin (diamorphine) or other opiate/opioid addict may use codeine to ward off the effects of withdrawal during periods where their preferred drug is unavailable or unaffordable. Codeine is also available in conjunction with the anti-nausea medication promethazine in the form of a syrup. Brand named as Phenergan with Codeine or in generic form as promethazine with Codeine, it began to be mixed with soft drinks in the 1990s as a recreational drug, called 'syrup', 'lean', or 'purple drank'. Rapper Pimp C, from the group UGK, died from an overdose of this combination. Codeine is used in illegal drug laboratories to make morphine. Detection Codeine and its major metabolites may be quantitated in blood, plasma, or urine to monitor therapy, confirm a diagnosis of poisoning, or assist in a medico-legal death investigation. Drug abuse screening programs generally test urine, hair, sweat or saliva. Many commercial opiate screening tests directed at morphine cross-react appreciably with codeine and its metabolites, but chromatographic techniques can easily distinguish codeine from other opiates and opioids. It is important to note that codeine usage results in significant amounts of morphine as an excretion product. Furthermore, heroin contains codeine (or acetyl codeine) as an impurity and its use will result in the excretion of small amounts of codeine. Poppy seed foods represent yet another source of low levels of codeine in one's biofluids. Blood or plasma codeine concentrations are typically in the 50–300 μg/L range in persons taking the drug therapeutically, 700–7,000 μg/L in chronic users, and 1,000–10,000 μg/L in cases of acute fatal over dosage. Codeine is produced in the human body along the same biosynthetic pathway as morphine. Urinary concentrations of endogenous codeine and morphine have been found to significantly increase in individuals taking L-DOPA for the treatment of Parkinson's disease. Legal status Around the world, codeine is, contingent on its concentration, a Schedule II and III drug under the Single Convention on Narcotic Drugs. In Australia, Canada, New Zealand, Sweden, the United Kingdom, the United States and many other countries, codeine is regulated under various narcotic control laws. 
In some countries, it is available without a medical prescription in combination preparations from licensed pharmacists in doses up to 20 mg, or 30 mg when sold combined with 500 mg paracetamol. As of 2015, of the European Union member states, 11 countries (Bulgaria, Cyprus, Denmark, Estonia, Ireland, Latvia, Lithuania, Malta, Poland, Romania, and Slovenia) allow the sale of OTC codeine solid dosage forms. Australia In Australia, since 1 February 2018, preparations containing codeine are not available without a prescription. Preparations containing pure codeine (e.g., codeine phosphate tablets or codeine phosphate linctus) are available on prescription and are considered S8 (Schedule 8, or "Controlled Drug Possession without authority illegal"). Schedule 8 preparations are subject to the strictest regulation of all medications available to consumers. Prior to 1 February 2018, codeine was available over the counter (OTC). Canada In Canada, codeine is regulated under the Narcotic Control Regulations (NCR), which falls under the Controlled Drugs and Substances Act (CDSA). Regulations state that pharmacists may, without a prescription, sell low-dose codeine products (containing up to 8 mg of codeine per tablet or up to 20 mg per 30 ml in liquid preparation) if the preparation contains at least two additional medicinal ingredients other than a narcotic (S.36.1 NCR). In Canada, tablets containing 8 mg of codeine combined with 15 mg of caffeine and 300 mg of acetaminophen are sold as T1s (Tylenol Number 1) without a prescription. A similar tablet called "A.C. & C." (which stands for Acetylsalicylic acid with Caffeine and Codeine) containing 325–375 mg of acetylsalicylic acid (Aspirin) instead of acetaminophen is also available without a prescription. Codeine combined with an antihistamine, and often caffeine, is sold under various trade names and is available without a prescription. These products are kept behind the counter and must be dispensed by a pharmacist, who may limit quantities. Names of many codeine and dihydrocodeine products in Canada tend to follow the narcotic content number system (Tylenol With Codeine No. 1, 2, 3, 4, etc.) mentioned below in the section on the United States; it came to be in its current form with the Pure Food & Drug Act of 1906. Under the Controlled Drugs and Substances Act (S.C. 1996, c. 19), effective 28 July 2020, codeine is classified under Schedule 1, giving offences involving it a higher priority under the law. Codeine became a prescription-only medication in the province of Manitoba on 1 February 2016. The number of low-dose codeine tablets sold in Manitoba decreased by 94 percent, from 52.5 million tablets sold in the year prior to the policy change to 3.3 million in the year after. A pharmacist may issue a prescription, and all purchases are logged to a central database to prevent overprescribing. Saskatchewan's pharmacy college is considering enacting a similar ban to Manitoba's. On 9 May 2019, the Canadian Pharmacists Association wrote to Health Canada proposing regulations amending the NCR, the BOTSR, and the FDR (Part G), which included requiring that all products containing codeine be available by prescription only. New safety measures were issued by Health Canada on 28 July 2016: "codeine should no longer be used (contraindicated) in patients under 18 years of age to treat pain after surgery to remove tonsils or adenoids, as these patients are more susceptible to the risk of serious breathing problems. 
Codeine (prescription and non-prescription) is already not recommended for children under the age of 12, for any use." Denmark In Denmark, codeine is sold over the counter in dosages up to 9.6 mg (with aspirin, brand name Kodimagnyl); anything stronger requires a prescription. Estonia Until 2023, codeine was sold over the counter in Estonia in dosages up to 8 mg (with paracetamol, brand name Co-Codamol). Ethiopia Approximately 30% of the Ethiopian population carry an extra copy of the CYP2D6 gene and are classified as ultrarapid metabolizers of codeine. These individuals metabolize codeine to morphine at a dangerously fast rate, leading to adverse events and potentially death. As a consequence, the Ethiopian Food, Medicine and Health Care Administration and Control Authority has entirely banned the use of codeine as unsafe for the general population. France In France, most preparations containing codeine did not require a doctor's prescription until 2017. Products containing codeine include Néocodion (codeine and camphor), Tussipax (ethylmorphine and codeine), Paderyl (codeine alone), Codoliprane (codeine with paracetamol), Prontalgine and Migralgine (codeine, paracetamol and caffeine). The 2017 law change made a prescription mandatory for all codeine products, along with those containing ethylmorphine and dextromethorphan. Greece Codeine is classed as an illegal drug in Greece, and individuals possessing it could conceivably be arrested, even if they were legitimately prescribed it in another country. It is sold only with a doctor's prescription (Lonarid-N, Lonalgal). Hong Kong In Hong Kong, codeine is regulated under the Laws of Hong Kong, Dangerous Drugs Ordinance, Chapter 134, Schedule 1. It can be used legally only by health professionals and for university research purposes. The substance can be given by pharmacists under a prescription. Anyone who supplies the substance without a prescription can be fined $10,000 (HKD). The maximum penalty for trafficking or manufacturing the substance is a $5,000,000 (HKD) fine and life imprisonment. Possession of the substance for consumption without a license from the Department of Health is illegal, with a $1,000,000 (HKD) fine and/or 7 years of jail time. However, codeine is available without prescription from licensed pharmacists in doses up to 0.1% (i.e. 5 mg/5 ml). India Codeine preparations require a prescription in India. A preparation of paracetamol and codeine is available in India. Codeine, as codeine phosphate, is also present in various cough syrups, often combined with chlorpheniramine maleate. Pure codeine is also available as codeine sulphate tablets. Codeine-containing cough medicine has been banned in India with effect from 14 March 2016. The Ministry of Health and Family Welfare has found no proof of its efficacy in cough control. Ireland In Ireland, new regulations came into effect on 1 August 2010 concerning codeine, due to worries about the overuse of the drug. Codeine remains available over the counter without a prescription up to a limit of 12.8 mg per pill, but codeine products must be out of the view of the public to facilitate the legislative requirement that these products "are not accessible to the public for self-selection". In practice, this means customers must ask the pharmacist for the codeine-containing product by name; the pharmacist then judges whether it is suitable for the patient to be using codeine and ensures that patients are fully advised of the correct use of these products. 
Products containing more than 12.8 mg codeine are available on prescription only. Italy Codeine tablets or preparations require a prescription in Italy. Preparations of paracetamol and codeine are available in Italy as Co-Efferalgan and Tachidol. Japan Codeine is available over the counter at pharmacies, allowing up to 50 mg of codeine phosphate per day for adults. Latvia In Latvia, codeine is sold over the counter in dosages up to 8 mg (with paracetamol, brand name Co-Codamol). Nigeria As of 2018, Nigeria planned to ban the manufacture and import of cough syrups that include codeine as an ingredient, due to concerns about their use as intoxicants. South Africa Codeine is available over the counter in South Africa. Certain pharmacies require people to write down their name and address to ensure they are not buying too much over a short period, although many do not require this at all. According to Lochan Naidoo, the former president of the National Narcotics Control Board, making the drugs more difficult to obtain could lead to even worse problems, as people in withdrawal would turn to illicit drugs to get their fix. Although codeine is freely available, South Africa has a fairly low annual prevalence rate of opiate use at 0.3%, compared to 0.57% in the United States, where all opiates are strictly regulated. United Arab Emirates The UAE takes an exceptionally strict line on medicines, with many common drugs, notably anything containing codeine, being banned unless one has a notarized and authenticated doctor's prescription. Visitors breaking the rules, even inadvertently, have been deported or imprisoned. The US Embassy to the UAE maintains an unofficial list of what may not be imported. United Kingdom In the United Kingdom, the sale and possession of codeine are restricted separately under law. Neat codeine and higher-strength codeine formulations are generally prescription-only medicines (POM), meaning that the sale of such products is restricted under the Medicines Act 1968. Lower-strength products containing combinations of up to 12.8 mg of codeine per dosage unit, combined with paracetamol, ibuprofen or aspirin, are available over the counter at pharmacies. Codeine linctus of 15 mg per 5 ml is also available at some pharmacies, although a purchaser would have to request it specifically from the pharmacist. Under the Misuse of Drugs Act 1971, codeine is a Class B controlled substance, or a Class A drug when prepared for injection. The possession of controlled substances without a prescription is a criminal offence. However, certain preparations of codeine are exempt from this restriction under Schedule 5 of the Misuse of Drugs Regulations 2001. It is thus legal to possess codeine without a prescription, provided that it is compounded with at least one other active or inactive ingredient and that the dosage of each tablet, capsule, etc. does not exceed 100 mg or 2.5% concentration in the case of liquid preparations. The exemptions do not apply to any preparation of codeine designed for injection. United States In the United States, codeine is regulated by the Controlled Substances Act. Federal law dictates that codeine be a Schedule II controlled substance when used in products for pain relief that contain codeine alone or more than 80 mg per dosage unit. Codeine without aspirin or acetaminophen (Tylenol) is very rarely available or prescribed, in order to discourage abuse. 
Tablets of codeine in combination with aspirin or acetaminophen (paracetamol) and intended for pain relief are listed as Schedule III. Cough syrups are classed as Schedule III, IV, or V, depending on formulation. For example, the acetaminophen/codeine antitussive liquid is a Schedule IV controlled substance. Some states have chosen to reclassify codeine preparations under a more restrictive schedule to lower the instances of abuse. Minnesota, for instance, has chosen to reclassify some Schedule V codeine preparations (e.g. Cheratussin) as Schedule III controlled substances. Schedule V controlled substances Substances in this schedule have a low potential for abuse relative to substances listed in Schedule IV and consist primarily of preparations containing limited quantities of certain narcotics. Examples of Schedule V substances include cough preparations containing not more than 200 milligrams of codeine per 100 milliliters or per 100 grams (Robitussin AC, Phenergan with Codeine).
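The two federal quantity thresholds cited in this section (more than 80 mg of codeine per dosage unit for pain products falling under Schedule II, and no more than 200 mg per 100 mL or 100 g for Schedule V cough preparations) can be checked with straightforward arithmetic. The Python sketch below is a hypothetical illustration that encodes only those two figures as stated here; actual scheduling depends on formulation, combination ingredients, and state law, none of which it models.

```python
# Hypothetical illustration of the two numeric thresholds quoted above.
# Not legal guidance; real scheduling involves many additional criteria.

SCHEDULE_II_MG_PER_DOSAGE_UNIT = 80.0    # pain products above this are Schedule II
SCHEDULE_V_MG_PER_100_ML = 200.0         # cough preparations at or below this may be Schedule V

def exceeds_schedule_ii_threshold(mg_per_dosage_unit):
    """True if a pain-relief dosage unit exceeds the 80 mg figure."""
    return mg_per_dosage_unit > SCHEDULE_II_MG_PER_DOSAGE_UNIT

def within_schedule_v_limit(mg_codeine, volume_ml):
    """True if a cough preparation stays at or under 200 mg per 100 mL."""
    return mg_codeine / volume_ml * 100.0 <= SCHEDULE_V_MG_PER_100_ML

# A syrup with 10 mg of codeine per 5 mL works out to 200 mg per 100 mL,
# which sits exactly at the Schedule V limit quoted above.
print(within_schedule_v_limit(10, 5))   # True
```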
Biology and health sciences
Pain treatments
Health
9722260
https://en.wikipedia.org/wiki/Chemical%20substance
Chemical substance
A chemical substance is a unique form of matter with constant chemical composition and characteristic properties. Chemical substances may take the form of a single element or chemical compounds. If two or more chemical substances can be combined without reacting, they may form a chemical mixture. If a mixture is separated to isolate one chemical substance to a desired degree, the resulting substance is said to be chemically pure. Chemical substances can exist in several different physical states or phases (e.g. solids, liquids, gases, or plasma) without changing their chemical composition. Substances transition between these phases of matter in response to changes in temperature or pressure. Some chemical substances can be combined or converted into new substances by means of chemical reactions. Chemicals that do not possess this ability are said to be inert. Pure water is an example of a chemical substance, with a constant composition of two hydrogen atoms bonded to a single oxygen atom (i.e. H2O). The atomic ratio of hydrogen to oxygen is always 2:1 in every molecule of water. Pure water will tend to boil near , an example of one of the characteristic properties that define it. Other notable chemical substances include diamond (a form of the element carbon), table salt (NaCl; an ionic compound), and refined sugar (C12H22O11; an organic compound). Definitions In addition to the generic definition offered above, there are several niche fields where the term "chemical substance" may take alternate usages that are widely accepted, some of which are outlined in the sections below. Inorganic chemistry Chemical Abstracts Service (CAS) lists several alloys of uncertain composition within their chemical substance index. While an alloy could be more closely defined as a mixture, referencing them in the chemical substances index allows CAS to offer specific guidance on standard naming of alloy compositions. Non-stoichiometric compounds are another special case from inorganic chemistry, which violate the requirement for constant composition. For these substances, it may be difficult to draw the line between a mixture and a compound, as in the case of palladium hydride. Broader definitions of chemicals or chemical substances can be found, for example: "the term 'chemical substance' means any organic or inorganic substance of a particular molecular identity, including – (i) any combination of such substances occurring in whole or in part as a result of a chemical reaction or occurring in nature". Geology In the field of geology, inorganic solid substances of uniform composition are known as minerals. When two or more minerals are combined to form mixtures (or aggregates), they are defined as rocks. Many minerals, however, mutually dissolve into solid solutions, such that a single rock is a uniform substance despite being a mixture in stoichiometric terms. Feldspars are a common example: anorthoclase is an alkali aluminum silicate, where the alkali metal is interchangeably either sodium or potassium. Law In law, "chemical substances" may include both pure substances and mixtures with a defined composition or manufacturing process. For example, the EU regulation REACH defines "monoconstituent substances", "multiconstituent substances" and "substances of unknown or variable composition". The latter two consist of multiple chemical substances; however, their identity can be established either by direct chemical analysis or reference to a single manufacturing process. 
For example, charcoal is an extremely complex, partially polymeric mixture that can be defined by its manufacturing process. Therefore, although the exact chemical identity is unknown, identification can be made with sufficient accuracy. The CAS index also includes mixtures. Polymer chemistry Polymers almost always appear as mixtures of molecules of multiple molar masses, each of which could be considered a separate chemical substance. However, the polymer may be defined by a known precursor or reaction(s) and the molar mass distribution. For example, polyethylene is a mixture of very long chains of -CH2- repeating units, and is generally sold in several molar mass distributions (LDPE, MDPE, HDPE and UHMWPE). History The concept of a "chemical substance" became firmly established in the late eighteenth century after work by the chemist Joseph Proust on the composition of some pure chemical compounds such as basic copper carbonate. He deduced that, "All samples of a compound have the same composition; that is, all samples have the same proportions, by mass, of the elements present in the compound." This is now known as the law of constant composition. Later, with the advancement of methods for chemical synthesis (particularly in organic chemistry), the discovery of many more chemical elements, and the development of new analytical techniques for the isolation and purification of elements and compounds, modern chemistry became established and the concept came to be defined as it is found in most chemistry textbooks. However, there are some controversies regarding this definition, mainly because the large number of chemical substances reported in the chemistry literature need to be indexed. Isomerism caused much consternation to early researchers, since isomers have exactly the same composition, but differ in configuration (arrangement) of the atoms. For example, there was much speculation about the chemical identity of benzene, until the correct structure was described by Friedrich August Kekulé. Likewise, the idea of stereoisomerism – that atoms have rigid three-dimensional structure and can thus form isomers that differ only in their three-dimensional arrangement – was another crucial step in understanding the concept of distinct chemical substances. For example, tartaric acid has three distinct isomers, a pair of diastereomers with one diastereomer forming two enantiomers. Chemical elements An element is a chemical substance made up of a particular kind of atom and hence cannot be broken down or transformed by a chemical reaction into a different element, though it can be transmuted into another element through a nuclear reaction. This is because all of the atoms in a sample of an element have the same number of protons, though they may be different isotopes, with differing numbers of neutrons. As of 2019, there are 118 known elements, about 80 of which are stable – that is, they do not change by radioactive decay into other elements. Some elements can occur as more than a single chemical substance (allotropes). For instance, oxygen exists as both diatomic oxygen (O2) and ozone (O3). The majority of elements are classified as metals. These are elements with a characteristic lustre, such as iron, copper, and gold. Metals typically conduct electricity and heat well, and they are malleable and ductile. Around 14 to 21 elements, such as carbon, nitrogen, and oxygen, are classified as non-metals. 
Non-metals lack the metallic properties described above; they also have a high electronegativity and a tendency to form negative ions. Certain elements, such as silicon, sometimes resemble metals and sometimes resemble non-metals, and are known as metalloids. Chemical compounds A chemical compound is a chemical substance that is composed of a particular set of atoms or ions. Two or more elements combined into one substance through a chemical reaction form a chemical compound. All compounds are substances, but not all substances are compounds. A chemical compound can be either atoms bonded together in molecules or crystals in which atoms, molecules or ions form a crystalline lattice. Compounds based primarily on carbon and hydrogen atoms are called organic compounds, and all others are called inorganic compounds. Compounds containing bonds between carbon and a metal are called organometallic compounds. Compounds in which components share electrons are known as covalent compounds. Compounds consisting of oppositely charged ions are known as ionic compounds, or salts. Coordination complexes are compounds where a dative bond keeps the substance together without a covalent or ionic bond. Coordination complexes are distinct substances, with properties different from those of a simple mixture. Typically these have a metal ion, such as a copper ion, at the center, while a nonmetal atom, such as the nitrogen of an ammonia molecule or the oxygen of a water molecule, forms a dative bond to the metal center, e.g. tetraamminecopper(II) sulfate [Cu(NH3)4]SO4·H2O. The metal is known as a "metal center" and the substance that coordinates to the center is called a "ligand". However, the center does not need to be a metal, as exemplified by boron trifluoride etherate BF3OEt2, where the highly Lewis-acidic but non-metallic boron center takes the role of the "metal". If the ligand bonds to the metal center with multiple atoms, the complex is called a chelate. In organic chemistry, there can be more than one chemical compound with the same composition and molecular weight. Generally, these are called isomers. Isomers usually have substantially different chemical properties, and often may be isolated without spontaneously interconverting. A common example is glucose vs. fructose. The former is an aldehyde, while the latter is a ketone. Their interconversion requires either enzymatic or acid-base catalysis. However, tautomers are an exception: the isomerization occurs spontaneously under ordinary conditions, such that a pure substance cannot be isolated into its tautomers, even if these can be identified spectroscopically or even isolated in special conditions. A common example is glucose, which has open-chain and ring forms. One cannot manufacture pure open-chain glucose because glucose spontaneously cyclizes to the hemiacetal form. Substances versus mixtures All matter consists of various elements and chemical compounds, but these are often intimately mixed together. Mixtures contain more than one chemical substance, and they do not have a fixed composition. Butter, soil and wood are common examples of mixtures. Sometimes, mixtures can be separated into their component substances by mechanical processes, such as chromatography, distillation, or evaporation. Grey iron metal and yellow sulfur are both chemical elements, and they can be mixed together in any ratio to form a yellow-grey mixture. 
No chemical process occurs, and the material can be identified as a mixture by the fact that the sulfur and the iron can be separated by a mechanical process, such as using a magnet to attract the iron away from the sulfur. In contrast, if iron and sulfur are heated together in a certain ratio (1 atom of iron for each atom of sulfur, or by weight, 56 grams (1 mol) of iron to 32 grams (1 mol) of sulfur), a chemical reaction takes place and a new substance is formed, the compound iron(II) sulfide, with chemical formula FeS. The resulting compound has all the properties of a chemical substance and is not a mixture. Iron(II) sulfide has its own distinct properties such as melting point and solubility, and the two elements cannot be separated using normal mechanical processes; a magnet will be unable to recover the iron, since there is no metallic iron present in the compound. Chemicals versus chemical substances While the term chemical substance is a precise technical term that is synonymous with chemical for chemists, the word chemical is used in general usage to refer to both (pure) chemical substances and mixtures (often called compounds), and especially when produced or purified in a laboratory or an industrial process. In other words, the chemical substances of which fruits and vegetables, for example, are naturally composed even when growing wild are not called "chemicals" in general usage. In countries that require a list of ingredients in products, the "chemicals" listed are industrially produced "chemical substances". The word "chemical" is also often used to refer to addictive, narcotic, or mind-altering drugs. Within the chemical industry, manufactured "chemicals" are chemical substances, which can be classified by production volume into bulk chemicals, fine chemicals and chemicals found in research only: Bulk chemicals are produced in very large quantities, usually with highly optimized continuous processes and to a relatively low price. Fine chemicals are produced at a high cost in small quantities for special low-volume applications such as biocides, pharmaceuticals and speciality chemicals for technical applications. Research chemicals are produced individually for research, such as when searching for synthetic routes or screening substances for pharmaceutical activity. In effect, their price per gram is very high, although they are not sold. The cause of the difference in production volume is the complexity of the molecular structure of the chemical. Bulk chemicals are usually much less complex. While fine chemicals may be more complex, many of them are simple enough to be sold as "building blocks" in the synthesis of more complex molecules targeted for single use, as named above. The production of a chemical includes not only its synthesis but also its purification to eliminate by-products and impurities involved in the synthesis. The last step in production should be the analysis of batch lots of chemicals in order to identify and quantify the percentages of impurities for the buyer of the chemicals. The required purity and analysis depends on the application, but higher tolerance of impurities is usually expected in the production of bulk chemicals. Thus, the user of the chemical in the US might choose between the bulk or "technical grade" with higher amounts of impurities or a much purer "pharmaceutical grade" (labeled "USP", United States Pharmacopeia). 
"Chemicals" in the commercial and legal sense may also include mixtures of highly variable composition, as they are products made to a technical specification instead of particular chemical substances. For example, gasoline is not a single chemical compound or even a particular mixture: different gasolines can have very different chemical compositions, as "gasoline" is primarily defined through source, properties and octane rating. Naming and indexing Every chemical substance has one or more systematic names, usually named according to the IUPAC rules for naming. An alternative system is used by the Chemical Abstracts Service (CAS). Many compounds are also known by their more common, simpler names, many of which predate the systematic name. For example, the long-known sugar glucose is now systematically named 6-(hydroxymethyl)oxane-2,3,4,5-tetrol. Natural products and pharmaceuticals are also given simpler names, for example the mild pain-killer Naproxen is the more common name for the chemical compound (S)-6-methoxy-α-methyl-2-naphthaleneacetic acid. Chemists frequently refer to chemical compounds using chemical formulae or molecular structure of the compound. There has been a phenomenal growth in the number of chemical compounds being synthesized (or isolated), and then reported in the scientific literature by professional chemists around the world. An enormous number of chemical compounds are possible through the chemical combination of the known chemical elements. As of Feb 2021, about "177 million organic and inorganic substances" (including 68 million defined-sequence biopolymers) are in the scientific literature and registered in public databases. The names of many of these compounds are often nontrivial and hence not very easy to remember or cite accurately. Also, it is difficult to keep track of them in the literature. Several international organizations like IUPAC and CAS have initiated steps to make such tasks easier. CAS provides the abstracting services of the chemical literature, and provides a numerical identifier, known as CAS registry number to each chemical substance that has been reported in the chemical literature (such as chemistry journals and patents). This information is compiled as a database and is popularly known as the Chemical substances index. Other computer-friendly systems that have been developed for substance information are: SMILES and the International Chemical Identifier or InChI. Isolation, purification, characterization, and identification Often a pure substance needs to be isolated from a mixture, for example from a natural source (where a sample often contains numerous chemical substances) or after a chemical reaction (which often gives mixtures of chemical substances). Measurement
Physical sciences
Chemistry: General
null
2301573
https://en.wikipedia.org/wiki/Daylight
Daylight
Daylight is the combination of all direct and indirect sunlight during the daytime. This includes direct sunlight, diffuse sky radiation, and (often) both of these reflected by Earth and terrestrial objects, like landforms and buildings. Sunlight scattered or reflected by astronomical objects is generally not considered daylight. Therefore, daylight excludes moonlight, despite it being reflected indirect sunlight. Definition Daylight is present at a particular location, to some degree, whenever the Sun is above the local horizon. This is true for slightly more than 50% of the Earth at any given time, since the Earth's atmosphere refracts some sunlight even when the Sun is below the horizon. Outdoor illuminance varies from 120,000 lux for direct sunlight at noon, which may cause eye pain, to less than 5 lux for thick storm clouds with the Sun at the horizon (even <1 lux for the most extreme case), which may make shadows from distant street lights visible. It may be darker under unusual circumstances like a solar eclipse or very high levels of atmospheric particulates, which include smoke (see New England's Dark Day), dust, and volcanic ash. Intensity in different conditions Nighttime illuminance levels are orders of magnitude lower by comparison. For a table of approximate daylight intensity in the Solar System, see sunlight.
Physical sciences
Celestial mechanics
Astronomy
2304367
https://en.wikipedia.org/wiki/Videotelephony
Videotelephony
Videotelephony (also known as videoconferencing or video calling) is the use of audio and video for simultaneous two-way communication. Today, videotelephony is widespread. There are many terms to refer to videotelephony. Videophones are standalone devices for video calling (compare Telephone). In the present day, devices like smartphones and computers are capable of video calling, reducing the demand for separate videophones. Videoconferencing implies group communication. Videoconferencing is used in telepresence, whose goal is to create the illusion that remote participants are in the same room. The concept of videotelephony was conceived in the late 19th century, and versions were available to the public starting in the 1930s. Early demonstrations were installed at booths in post offices and shown at various world expositions. In 1970, AT&T launched the first commercial personal videotelephone system. In addition to videophones, there existed image phones which exchanged still images between units every few seconds over conventional telephone lines. The development of advanced video codecs, more powerful CPUs, and high-bandwidth Internet service in the late 1990s allowed digital videophones to provide high-quality low-cost color service between users almost any place in the world. Applications of videotelephony include sign language transmission for deaf and speech-impaired people, distance education, telemedicine, and overcoming mobility issues. News media organizations have used videotelephony for broadcasting. History Origin The concept of videotelephony was first conceived in the late 1870s, both in the United States and in Europe, although the basic sciences to permit its very earliest trials would take nearly a half century to be discovered. The prerequisite knowledge arose from intensive research and experimentation in several telecommunication fields, notably electrical telegraphy, telephony, radio, and television. Early systems Simple analog videophone communication could be established as early as the invention of the television. Such an antecedent usually consisted of two closed-circuit television systems connected via coax cable or radio. An example of that was the German Reich Postzentralamt (post office) videotelephone network serving Berlin and several German cities via coaxial cables between 1936 and 1940. The development of videotelephony as a subscription service started in the latter half of the 1920s in the United Kingdom and the United States, spurred notably by John Logie Baird and AT&T's Bell Labs. This occurred in part, at least with AT&T, to serve as an adjunct supplementing the use of the telephone. A number of organizations believed that videotelephony would be superior to plain voice communications. Attempts at using normal telephony networks to transmit slow-scan video, such as the first systems developed by AT&T Corporation, first researched in the 1950s, failed mostly due to the poor picture quality and the lack of efficient video compression techniques. During the first crewed space flights, NASA used two radio-frequency (UHF or VHF) video links, one in each direction. TV channels routinely use this type of videotelephony when reporting from distant locations. The news media were to become regular users of mobile links to satellites using specially equipped trucks, and much later via special satellite videophones in a briefcase. 
This technique was very expensive, though, and was not adopted for applications such as telemedicine, distance education, and business meetings. Decades of research and development culminated in the 1970 commercial launch of AT&T's Picturephone service, available in select cities. However, the system was a commercial failure, chiefly due to consumer apathy, high subscription costs, and lack of network effect—with only a few hundred Picturephones in the world, users had extremely few contacts they could actually call, and interoperability with other videophone systems would not exist for decades. Digital In the 1980s, digital telephony transmission networks became possible, such as with ISDN networks. During this time, there was also research into other forms of digital video and audio communication. Many of these technologies, such as the Media space, are not as widely used today as videoconferencing but were still an important area of research. The first dedicated systems started to appear as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an initial public offering in November, 1984. In 1984, Concept Communication in the United States created a circuit board for standard personal computers that doubled the video frame rate of typical digital videotelephone systems from 15 to 30 frames per second, and reduced the cost from $100,000 to $12,000. The company also secured a patent for a codec for full-motion videoconferencing, first demonstrated at AT&T Bell Labs in 1986. Very expensive videoconferencing systems continued to rapidly evolve throughout the 1980s and 1990s. Proprietary equipment, software, and network requirements gave way to standards-based technologies that were available for anyone to purchase at a reasonable cost. While videoconferencing technology was initially used primarily within internal corporate communication networks, one of the first community service uses of the technology started in 1992 through a unique partnership with PictureTel and IBM, which at the time were promoting a jointly developed desktop based videoconferencing product known as the PCS/1. Over the next 15 years, Project DIANE (Diversified Information and Assistance Network) grew to use a variety of videoconferencing platforms to create a multi-state cooperative public service and distance education network consisting of several hundred schools, libraries, science museums, zoos and parks, and many other community-oriented organizations. Transition to internet and mobile devices Advances in video compression allowed digital video streams to be transmitted over the Internet, which was previously difficult due to the impractically high bandwidth requirements of uncompressed video. The DCT algorithm was the basis for the first practical video coding standard that was useful for online videoconferencing, H.261, standardised by the ITU-T in 1988, and subsequent H.26x video coding standards. In 1992 CU-SeeMe was developed at Cornell by Tim Dorcey et al. In 1995 the first public videoconference between North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town. At the 1998 Winter Olympics opening ceremony in Nagano, Japan, Seiji Ozawa conducted the Ode to Joy from Beethoven's Ninth Symphony simultaneously across five continents in near-real-time. 
Kyocera conducted a two-year development campaign from 1997 to 1999 that resulted in the release of the VP-210 Visual Phone, the first mobile colour videophone that also doubled as a camera phone for still photos. The camera phone was the same size as similar contemporary mobile phones, but sported a large camera lens and a 5 cm (2 inch) colour TFT display capable of displaying 65,000 colors, and was able to process two video frames per second. Videotelephony was popularized in the 2000s via free Internet services such as Skype and iChat, web plugins supporting H.26x video standards, and online telecommunication programs that promoted low cost, albeit lower quality, videoconferencing to virtually every location with an Internet connection. Videotelephony became even more widespread through the deployment of video-enabled mobile phones such as 2010's iPhone 4, plus videoconferencing and computer webcams which use Internet telephony. In the upper echelons of government, business, and commerce, telepresence technology, an advanced form of videoconferencing, has helped reduce the need to travel. Additional history In May 2005, the first high definition videoconferencing systems, produced by Lifesize, were displayed at the Interop trade show in Las Vegas, Nevada, able to provide video at 30 frames per second with a 1280 by 720 display resolution. Polycom introduced its first high definition videoconferencing system to the market in 2006. As of the 2010s, high-definition resolution for videoconferencing became a popular feature, with most major suppliers in the videoconferencing market offering it. Technological developments by videoconferencing developers in the 2010s have extended the capabilities of videoconferencing systems beyond the boardroom for use with hand-held mobile devices that combine the use of video, audio and on-screen drawing capabilities broadcasting in real time over secure networks, independent of location. Mobile collaboration systems now give people in previously unreachable locations, such as workers on an offshore oil rig, the ability to view and discuss issues with colleagues thousands of miles away. Traditional videoconferencing system manufacturers have begun providing mobile applications as well, such as those that allow for live and still image streaming. The highest ever video call (other than those from aircraft and spacecraft) took place on May 19, 2013, when British adventurer Daniel Hughes used a smartphone with a BGAN satellite modem to make a videocall to the BBC from the summit of Mount Everest. The COVID-19 pandemic resulted in a significant increase in the use of videoconferencing. Bernstein Research found that Zoom added more subscribers during the first two months of 2020 alone than in the entire year 2019. GoToMeeting had a 20 percent increase in usage, according to LogMeIn. UK-based StarLeaf reported a 600 percent increase in national call volumes. Videoconferencing became so widespread during the pandemic that the term Zoom fatigue came to prominence, referring to the taxing nature of spending long periods of time on videocalls. This fatigue refers to the psychological and physiological effects on participants involved in videoconferencing. One experimental study from 2021 revealed a link between camera use in videoconferencing and the occurrence of fatigue in an individual.
Furthermore, a 2022 article in the journal "Computers in Human Behavior" highlighted a study linking negative attitudes with the use of "self-view" when videoconferencing. On 21 September 2021, Facebook launched two new versions of its Portal video-calling devices, the Portal Go and Portal Plus. The new video calling devices include the first portable variety of the hardware and a number of updates. Major categories Videotelephony can be categorized by its functionality and intended purpose, and also by its method of transmission. Videophones were the earliest form of videotelephony, dating back to initial tests in 1927 by AT&T. During the late 1930s, the post offices of several European governments established public videophone services for person-to-person communications using dual cable circuit telephone transmission technology. In the present day, standalone videophones and UMTS video-enabled mobile phones are usually used on a person-to-person basis. Videoconferencing saw its earliest use with AT&T's Picturephone service in the early 1970s. Transmissions were analog over short distances, but converted to digital forms for longer calls, again using telephone transmission technology. Popular corporate video-conferencing systems in the present day have migrated almost exclusively to digital ISDN and IP transmission modes due to the need to convey the very large amounts of data generated by their cameras and microphones. These systems are often intended for use in conference mode, that is by many people in several different locations, all of whom can be viewed by every participant at each location. Telepresence systems are a newer, more advanced subset of videoconferencing systems, meant to allow higher degrees of video and audio fidelity. Such high-end systems are typically deployed in corporate settings. Mobile collaboration systems are another recent development, combining the use of video, audio, and on-screen drawing capabilities using the newest generation of hand-held electronic devices broadcasting over secure networks, enabling multi-party conferencing in real time, independent of location. Proximity chat is another alternative mode, focused on the flexibility of small group conversations. A more recent technology encompassing these functions is TV cams. TV cams enable people to make video calls using video calling services, like Skype on their TV, without using a PC connection. TV cams are specially designed video cameras that feed images in real time to another TV camera or other compatible computing devices like smartphones, tablets and computers. Webcams are popular, relatively low-cost devices that can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing. Each of the systems has its own advantages and disadvantages, including video quality, capital cost, degrees of sophistication, transmission capacity requirements, and cost of use. By cost and quality of service From the least to the most expensive systems: Web camera videophone and videoconferencing systems, either stand-alone or built-in, that serve as complements to personal computers, connected to other participants by computer and VoIP networks—lowest direct cost, assuming the users already possess computers at their respective locations. Quality of service can range from low to very high, including high definition video available on the latest model webcams.
A related and similar device is a TV camera which is usually small, sits on top of a TV, and can connect to it via its HDMI port, similar to how a webcam attaches to a computer via a USB port. Videophones—low to midrange cost. The earliest standalone models operated over either plain old telephone service (POTS) lines on the PSTN telephone networks or more expensive ISDN lines, while newer models have largely migrated to Internet Protocol line service for higher image resolutions and sound quality. Quality of service for standalone videophones can vary from low to high. Huddle room or all-in-one systems—low to midrange cost, a newer endpoint category based on standard videoconferencing systems, but defined by the camera, microphone(s), speakers, and codec contained in a single piece of hardware. Typically used in small to medium spaces where beamforming microphone arrays located in the system are sufficient, in lieu of table or ceiling microphones in closer proximity to the in-room participants. Quality of service is comparable to standard videoconferencing systems, varying from moderate to high. Some manufacturers' huddle room systems do not include the codec within the soundbar-shaped unit, rather only camera, microphone, and speakers. These systems are usually still classified as huddle room systems, but, like webcams, rely on a USB connection to an external device, usually a PC, to process the video codec responsibilities. Despite the name, videoconferencing systems for huddle rooms remove the need for participants to huddle close together to be seen by the camera. All-in-one systems for these types of rooms range from wide angles such as a 110° horizontal field of view (FOV) to as much as 360° FOV that allow a full view of the room. Videoconferencing systems—midrange cost, usually using multipoint control units or other bridging services to allow multiple parties on videoconference calls. Quality of service can vary from moderate to high. Telepresence systems—highest capabilities and highest cost. Full high-end systems can involve specially built teleconference rooms to allow expansive views with very high levels of audio and video fidelity, to permit an 'immersive' videoconference. When the proper type and capacity transmission lines are provided between facilities, the quality of service reaches state-of-the-art levels. Security concerns Computer security experts have shown that poorly configured or inadequately supervised videoconferencing systems can permit an easy virtual entry by computer hackers and criminals into company premises and corporate boardrooms. Adoption For over a century, futurists have envisioned a future where telephone conversations will take place as actual face-to-face encounters with video as well as audio. Sometimes it is simply not possible or practical to have face-to-face meetings with two or more people. Sometimes a telephone conversation or conference call is adequate. Other times, e-mail exchanges are adequate. However, videoconferencing adds another option and can be considered when: a live conversation is needed; non-verbal (visual) information is an important component of the conversation; the parties of the conversation cannot physically come to the same location; or the expense or time of travel is a consideration. Bill Gates said in 2001 that he used videoconferencing "three or four times a year", because digital scheduling was difficult and "if the overhead is super high, then you might as well just have a face-to-face meeting".
Some observers argue that several outstanding issues have prevented videoconferencing from becoming a widely adopted form of communication, despite the ubiquity of videoconferencing-capable systems. Eye contact: Eye contact plays a large role in conversational turn-taking, perceived attention and intent, and other aspects of group communication. While traditional telephone conversations give no eye contact cues, many videoconferencing systems are arguably worse in that they provide an incorrect impression that the remote interlocutor is avoiding eye contact. Some telepresence systems have cameras located in the screens that reduce the amount of parallax observed by the users. This issue is also being addressed through research that generates a synthetic image with eye contact using stereo reconstruction. Telcordia Technologies, formerly Bell Communications Research, owns a patent for eye-to-eye videoconferencing using rear projection screens with the video camera behind it, evolved from a 1960s U.S. military system that provided videoconferencing services between the White House and various other government and military facilities. This technique eliminates the need for special cameras or image processing. Appearance consciousness: A second psychological problem with videoconferencing is being on camera, with the video stream possibly even being recorded. The burden of presenting an acceptable on-screen appearance is not present in audio-only communication. Early studies by Alphonse Chapanis found that the addition of video actually impaired communication, possibly because of the consciousness of being on camera. Signal latency: Transporting digital signals involves many processing steps, each of which takes time. In a telecommunicated conversation, an increased latency (time lag) larger than about 150–300 ms becomes noticeable and is soon observed as unnatural and distracting. Therefore, next to a stable large bandwidth, a small total round-trip time is another major technical requirement for the communication channel for interactive videoconferencing. Bandwidth and quality of service: In some countries, it is difficult or expensive to get a high-quality connection that is fast enough for good-quality videoconferencing. Technologies such as ADSL have limited upload speeds and cannot upload and download simultaneously at full speed. As Internet speeds increase, higher quality and high-definition videoconferencing will become more readily available. Complexity of systems: Most users are not technically experienced and want a simple interface. In hardware systems, an unplugged cord or an unresponsive remote control is seen as a failure, contributing to a perceived unreliability. Successful systems are backed by support teams who can provide fast assistance when required. Perceived lack of interoperability: Not all systems can readily interconnect; for example, ISDN and IP systems require a gateway. Popular software solutions cannot easily connect to hardware systems. Some systems use different standards, features, and qualities which can require additional configuration when connecting to dissimilar systems. Free software systems circumvent this limitation by making it relatively easy for a single user to communicate over multiple incompatible platforms.
Expense of commercial systems: Well-designed telepresence systems require specially designed rooms which can cost hundreds of thousands of dollars to fit out their rooms with codecs, integration equipment (such as Multipoint Control Units), high fidelity sound systems, and furniture. Monthly charges may also be required for bridging services and high-capacity broadband service. These are some of the reasons many organizations only use the systems internally, where there is less risk of loss of customers. An alternative for those lacking dedicated facilities is the rental of videoconferencing-equipped meeting rooms in cities around the world. Clients can book rooms and turn up for the meeting, with all technical aspects being prearranged and support being readily available if needed. The issue of eye contact may be solved with advancing technology, including smartphones which have the screen and camera in essentially the same place. In developed countries, the near-ubiquity of smartphones, tablet computers, and computers with built-in audio and webcams removes the need for expensive dedicated hardware. Technology Components and types The core technology used in a videotelephony system is digital compression of audio and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted through a digital network of some kind (usually ISDN or IP). The other components required for a videoconferencing system include: Video input: (PTZ / 360° / Fisheye) video camera, or webcam Video output: computer monitor, television, or projector Audio input: microphones, CD/DVD player, cassette player, or any other source of PreAmp audio outlet. Audio output: usually loudspeakers associated with the display device or telephone Data transfer: analog or digital telephone network, LAN, or Internet Computer: a data processing unit that ties together the other components, does the compressing and decompressing, and initiates and maintains the data linkage via the network. There are basically three kinds of videoconferencing and videophone systems: Dedicated systems have all required components packaged into a single piece of equipment, usually a console with a high quality remote controlled video camera. These cameras can be controlled at a distance to pan left and right, tilt up and down, and zoom. They became known as PTZ cameras. The console contains all electrical interfaces, the control computer, and the software or hardware-based codec. Omnidirectional microphones are connected to the console, as well as a TV monitor with loudspeakers and/or a video projector. There are several types of dedicated videoconferencing devices: Large group videoconferencing are built-in, large, expensive devices used for large rooms such as conference rooms and auditoriums. Small group videoconferencing are either non-portable or portable, smaller, less expensive devices used for small meeting rooms. Individual videoconferencing are usually portable devices, meant for single users, and have fixed cameras, microphones, and loudspeakers integrated into the console. Desktop systems are add-ons (hardware boards or software codec) to normal PCs and laptops, transforming them into videoconferencing devices. 
A range of different cameras and microphones can be used with the codec, which contains the necessary codec and transmission interfaces. WebRTC platforms use a web browser instead of dedicated native application software. Solutions such as Adobe Connect and Cisco WebEx can be accessed using a URL sent by the meeting organizer, and various degrees of security can be attached to the virtual room. Often the user must download and install a browser extension to enable access to the local camera and microphone and establish a connection to the meeting. But WebRTC does not require any special software, instead a WebRTC-compliant web browser itself provides the facilities for 1-to-1 and 1-to-many videoconferencing calls. Several enhancements to WebRTC are provided by independent vendors. Videoconferencing modes Videoconferencing systems use several methods to determine which video feed or feeds to display. Continuous Presence simply displays all participants at the same time, usually with the exception that the viewer either does not see their own feed, or sees their own feed in miniature. Voice-Activated Switch selectively chooses a feed to display at each endpoint, with the goal of showing the person who is currently speaking. This is done by choosing the feed (other than the viewer) which has the loudest audio input (perhaps with some filtering to avoid switching for very short-lived volume spikes). Often, if no remote parties are currently speaking, the feed with the last speaker remains on the screen. Echo cancellation Acoustic echo cancellation (AEC) is a processing algorithm that uses the knowledge of audio output to monitor audio input and filter from it noises that echo back after some time delay. If unattended, these echoes can be re-amplified several times, leading to problems including: The remote party hearing their own voice coming back at them (usually significantly delayed) Strong reverberation, which makes the voice channel useless Howling created by feedback Echo cancellation is a processor-intensive task that usually works over a narrow range of sound delays. Bandwidth requirements Videophones have historically employed a variety of transmission and reception bandwidths, which can be understood as data transmission speeds. The lower the transmission/reception bandwidth, the lower the data transfer rate, resulting in a progressively limited and poorer image quality (i.e. lower resolution and/or frame rate). Data transfer rates and live video image quality are related but are also subject to other factors such as data compression techniques. Some early videophones employed very low data transmission rates with a resulting poor video quality. Broadband bandwidth is often called high-speed, because it usually has a high rate of data transmission. In general, any connection of 256 kbit/s (0.256 Mbit/s) or greater is more concisely considered broadband Internet. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity at 1.5 to 2Mbit/s. The Federal Communications Commission (United States) definition of broadband is 25 Mbit/s. Currently, adequate video for some purposes becomes possible at data rates lower than the ITU-T broadband definition, with rates of 768 kbit/s and 384 kbit/s used for some videoconferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC compression protocols. 
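To put the data rates above in perspective, a back-of-the-envelope estimate can be written in a few lines of Python; this is a sketch only, assuming an uncompressed 24-bit colour depth for the 1280 by 720, 30 frame/s video mentioned earlier, and treating the compression ratio as a single fixed number, which real codecs do not use:

# Rough bitrate estimate for 1280x720 video at 30 frames per second,
# assuming 24 bits per pixel before compression.
width, height, fps, bits_per_pixel = 1280, 720, 30, 24

raw_bps = width * height * bits_per_pixel * fps  # uncompressed bits per second
print(f"uncompressed: {raw_bps / 1e6:.0f} Mbit/s")  # about 664 Mbit/s

# Applying nominal compression ratios like those discussed above:
for ratio in (100, 500):
    print(f"1:{ratio} compression -> {raw_bps / ratio / 1e6:.2f} Mbit/s")
# A 1:500 ratio lands near the 1.5-2 Mbit/s broadband figure cited above.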
The newer MPEG-4 video and audio compression format can deliver high-quality video at 2Mbit/s, which is at the low end of cable modem and ADSL broadband performance. Standards The International Telecommunication Union (ITU) has three umbrellas of standards for videoconferencing: ITU H.320 is known as the standard for public switched telephone networks (PSTN) or videoconferencing over integrated services digital networks. While still prevalent in Europe, ISDN was never widely adopted in the United States and Canada. ITU H.264 Scalable Video Coding (SVC) is a compression standard that enables videoconferencing systems to achieve highly error resilient Internet Protocol (IP) video transmissions over the public Internet without quality-of-service enhanced lines. This standard has enabled wide scale deployment of high definition desktop videoconferencing and made possible new architectures, which reduces latency between the transmitting sources and receivers, resulting in more fluid communication without pauses. In addition, an attractive factor for IP videoconferencing is that it is easier to set up for use along with web conferencing and data collaboration. These combined technologies enable users to have a richer multimedia environment for live meetings, collaboration and presentations. ITU-T V.80: videoconferencing is generally compatibilized with H.324 standard point-to-point videotelephony over regular (POTS) phone lines. The Unified Communications Interoperability Forum (UCIF), a non-profit alliance between communications vendors, launched in May 2010. The organization's vision is to maximize the interoperability of UC based on existing standards. Founding members of UCIF include HP, Microsoft, Polycom, Logitech/Lifesize, and Juniper Networks. Call setup Videoconferencing in the late 20th century was limited to the H.323 protocol (notably Cisco's SCCP implementation was an exception), but newer videophones often use SIP, which is often easier to set up in home networking environments. It is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP). H.323 is still used, but more commonly for business videoconferencing, while SIP is more commonly used in personal consumer videophones. A number of call-setup methods based on instant messaging protocols such as Skype also now provide video. Another protocol used by videophones is H.324, which mixes call setup and video compression. Videophones that work on regular phone lines typically use H.324, but the bandwidth is limited by the modem to around 33 kbit/s, limiting the video quality and frame rate. A slightly modified version of H.324 called 3G-324M defined by 3GPP is also used by some cellphones that allow video calls, typically for use only in UMTS networks. There is also H.320 standard, which specified technical requirements for narrow-band visual telephone systems and terminal equipment, typically for videoconferencing and videophone services. It applied mostly to dedicated circuit-based switched network (point-to-point) connections of moderate or high bandwidth, such as through the medium-bandwidth ISDN digital phone protocol or a fractionated high bandwidth T1 lines. Modern products based on H.320 standard usually support also H.323 standard. The IAX2 protocol also supports videophone calls natively, using the protocol's own capabilities to transport alternate media streams. 
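Because SIP is described above as a text-based protocol that borrows from HTTP and SMTP, a heavily abridged illustration of what a SIP INVITE request looks like may be helpful; the names, addresses, tags and Call-ID below are made-up placeholders in the style of the examples in RFC 3261, and a real client would use a SIP stack rather than hand-built strings:

# Simplified illustration of a SIP INVITE request; all identifiers are placeholders.
invite = "\r\n".join([
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP alice-pc.example.com;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "To: Bob <sip:bob@example.com>",
    "From: Alice <sip:alice@example.com>;tag=1928301774",
    "Call-ID: a84b4c76e66710@alice-pc.example.com",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@alice-pc.example.com>",
    "Content-Length: 0",  # a real INVITE would carry an SDP body describing the media
    "",
    "",
])
print(invite)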
A few hobbyists obtained the Nortel 1535 Color SIP Videophone cheaply in 2010 as surplus after Nortel's bankruptcy and deployed the sets on the Asterisk (PBX) platform. While additional software is required to patch together multiple video feeds for conference calls or convert between dissimilar video standards, SIP calls between two identical handsets within the same PBX were relatively straightforward. Conferencing layers The components within a videoconferencing system can be divided up into several different layers: User Interface, Conference Control, Control or Signaling Plane, and Media Plane. Videoconferencing User Interfaces (VUI) can be either graphical or voice-responsive. Many in the industry have encountered both types of interface, and normally a graphical interface is encountered on a computer. User interfaces for conferencing have a number of different uses; they can be used for scheduling, setup, and making a video call. Through the user interface, the administrator is able to control the other three layers of the system. Conference Control performs resource allocation, management, and routing. This layer along with the User Interface creates meetings (scheduled or unscheduled) or adds and removes participants from a conference. Control (Signaling) Plane contains the stacks that signal different endpoints to create a call and/or a conference. Signaling protocols include, but are not limited to, H.323 and the Session Initiation Protocol (SIP). These signals control incoming and outgoing connections as well as session parameters. The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-Time Transport Protocol (RTP), the User Datagram Protocol (UDP) and the Real-Time Transport Control Protocol (RTCP). RTP and UDP normally carry information such as the payload type (which identifies the codec), frame rate, video size, and many others. RTCP, on the other hand, acts as a quality-control protocol for detecting errors during streaming. Multipoint control Simultaneous videoconferencing among three or more remote points is possible in a hardware-based system by means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from several sources (in a similar way to the audio conference call). All parties call the MCU, or the MCU can also call the parties which are going to participate, in sequence. There are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are pure software and others that are a combination of hardware and software. An MCU is characterized according to the number of simultaneous calls it can handle, its ability to conduct transposing of data rates and protocols, and features such as Continuous Presence, in which multiple parties can be seen on-screen at once. MCUs can be stand-alone hardware devices, or they can be embedded into dedicated videoconferencing units. The MCU consists of two logical components: a single multipoint controller (MC), and multipoint processors (MP), sometimes referred to as the mixer. The MC controls the conferencing while it is active on the signaling plane, which is simply where the system manages conferencing creation, endpoint signaling and in-conferencing controls. This component negotiates parameters with every endpoint in the network and controls conferencing resources. While the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from each endpoint.
The MP generates output streams from each endpoint and redirects the information to other endpoints in the conference. Some systems are capable of multipoint conferencing with no MCU, stand-alone, embedded or otherwise. These use a standards-based H.323 technique known as decentralized multipoint, where each station in a multipoint call exchanges video and audio directly with the other stations with no central manager or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality because they do not have to be relayed through a central point. Also, users can make ad hoc multipoint calls without any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of some increased network bandwidth, because every station must transmit to every other station directly. Cloud storage Cloud-based videoconferencing can be used without the hardware generally required by other videoconferencing systems, and can be designed for use by SMEs, or larger international or multinational corporations like Facebook. Cloud-based systems can handle either 2D or 3D video broadcasting. Cloud-based systems can also implement mobile calls, VOIP, and other forms of video calling. They can also come with a video recording function to archive past meetings. Impact High speed Internet connectivity has become more widely available and affordable, as has good-quality video capture and display hardware. Consequently, personal videoconferencing systems based on webcams, personal computer systems, software compression, and the Internet have become progressively more affordable by the general public. The availability of freeware (often as part of chat programs) has made software based videoconferencing accessible to many. The widest deployment of videotelephony now occurs in mobile phones. Nearly all mobile phones supporting UMTS networks can work as videophones using their internal cameras and are able to make video calls wirelessly to other UMTS users anywhere. As of the second quarter of 2007, there are over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries. Mobile phones can also use broadband wireless Internet, whether through the cell phone network or over a local Wi-Fi connection, along with software-based videophone apps to make calls to any video-capable Internet user, whether mobile or fixed. Deaf, hard-of-hearing, and mute individuals have a particular role in the development of affordable high-quality videotelephony as a means of communicating with each other in sign language. Unlike Video Relay Service, which is intended to support communication between a caller using sign language and another party using spoken language, videoconferencing can be used directly between two deaf signers. Videophones are increasingly used in the provision of telemedicine to the elderly, disabled, and to those in remote locations, where the ease and convenience of quickly obtaining diagnostic and consultative medical services are readily apparent. In one single instance quoted in 2006: "A nurse-led clinic at Letham has received positive feedback on a trial of a video-link which allowed 60 pensioners to be assessed by medics without traveling to a doctor's office or medical clinic." 
A further improvement in telemedical services has been the development of new technology incorporated into special videophones to permit remote diagnostic services, such as blood sugar level, blood pressure, and vital signs monitoring. Such units are capable of relaying both regular audio-video plus medical data over either standard (POTS) telephone or newer broadband lines. Videotelephony has also been deployed in corporate teleconferencing, also available through the use of public access videoconferencing rooms. A higher level of videoconferencing that employs advanced telecommunication technologies and high-resolution displays is called telepresence. Today the principles, if not the precise mechanisms, of a videophone are employed by many users worldwide in the form of webcam videocalls using personal computers, with inexpensive webcams, microphones, and free video calling Web client programs. Thus an activity that was disappointing as a separate service has found a niche as a minor feature in software products intended for other purposes. According to Juniper Research, smartphone videophone users will reach 29 million by 2015 globally. A study conducted by Pew Research in 2010, revealed that 7% of Americans have made a mobile video call. Government and law In the United States, videoconferencing has allowed testimony to be used for an individual who is unable or prefers not to attend the physical legal settings or would be subjected to severe psychological stress in doing so, however, there is a controversy on the use of testimony by foreign or unavailable witnesses via video transmission, regarding the violation of the Confrontation Clause of the Sixth Amendment of the U.S. Constitution. In a military investigation in North Carolina, Afghan witnesses have testified via videoconferencing. In Hall County, Georgia, videoconferencing systems are used for initial court appearances. The systems link jails with courtrooms, reducing the expenses and security risks of transporting prisoners to the courtroom. The U.S. Social Security Administration (SSA), which oversees the world's largest administrative judicial system under its Office of Disability Adjudication and Review (ODAR), has made extensive use of videoconferencing to conduct hearings at remote locations. In Fiscal Year (FY) 2009, the U.S. Social Security Administration (SSA) conducted 86,320 videoconferenced hearings, a 55% increase over FY 2008. In August 2010, the SSA opened its fifth and largest videoconferencing-only National Hearing Center (NHC), in St. Louis, Missouri. This continues the SSA's effort to use video hearings as a means to clear its substantial hearing backlog. Since 2007, the SSA has also established NHCs in Albuquerque, New Mexico, Baltimore, Maryland, Falls Church, Virginia, and Chicago. Education Videoconferencing provides students with the chance to learn by participating in two-way communication forums. Because it is live, videotelephony allows teachers to access remote or otherwise isolated learners. Students from diverse communities and backgrounds can come together to learn about one another through practices known as telecollaboration (in foreign language education) and virtual exchange, although language barriers will continue to be present. Such students are able to explore, communicate, analyze, and share information and ideas with one another. Through videoconferencing, students can visit other parts of the world, including museums and other cultural and educational sites. 
Such virtual field trips can provide enriched learning opportunities to students, especially those who are geographically isolated or economically disadvantaged. Small schools can use these technologies to pool resources and provide courses, such as in foreign languages, which could not otherwise be offered. Some benefits that videoconferencing can provide to education include: faculty members keeping in touch with classes while attending conferences; faculty members attending conferences 'virtually'; guest lecturers brought into classes from other institutions; researchers collaborating with colleagues at other institutions on a regular basis without loss of time due to travel; schools with multiple campuses collaborating and sharing professors; schools from two separate nations engaging in cross-cultural exchanges; faculty members participating in thesis defenses at other institutions; administrators on tight schedules collaborating on budget preparation from different parts of campus; faculty committees auditioning scholarship candidates; researchers answering questions about grant proposals from agencies or review committees; alternative enrollment structures to purely in-person attendance; student interviews with employers in other cities, and teleseminars. Medicine and health Videoconferencing is a highly useful technology for real time telemedicine and telenursing applications, such as diagnosis, consulting, prevention, treatment, and transmission of medical images. With videoconferencing, patients may contact nurses and physicians in emergency or routine situations; physicians and other paramedical professionals can discuss cases across large distances. Rural areas can use this technology for diagnostic purposes, thus saving lives and making more efficient use of health care money. For example, a rural medical center in Ohio used videoconferencing to successfully cut the number of transfers of sick infants to a distant hospital. This had previously cost nearly $10,000 per transfer. Special peripherals such as microscopes fitted with digital cameras, videoendoscopes, medical ultrasound imaging devices, otoscopes, etc., can be used in conjunction with videoconferencing equipment to transmit data about a patient. Recent developments in mobile collaboration on hand-held mobile devices have also extended video-conferencing capabilities to locations previously unreachable, such as a remote community, long-term care facility, or a patient's home. Business Videoconferencing can enable individuals in distant locations to participate in meetings on short notice, with time and money savings. Technology such as VoIP can be used in conjunction with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, especially for businesses with widespread offices. The technology is also used for remote work. One research report based on a sampling of 1,800 corporate employees showed that, as of June 2010, 54% of the respondents with access to videoconferencing used it "all of the time" or "frequently". Intel Corporation has used videoconferencing to reduce both costs and environmental impacts of its business operations. Videoconferencing is also currently being introduced on online networking websites, in order to help businesses form profitable relationships quickly and efficiently without leaving their place of work. This has been leveraged by banks to connect busy banking professionals with customers in various locations using video banking technology.
Videoconferencing on hand-held mobile devices (mobile collaboration technology) is being used in industries such as manufacturing, energy, healthcare, insurance, government, and public safety. Live, visual interaction removes traditional restrictions of distance and time, often in locations previously unreachable, such as a manufacturing plant floor thousands of miles away. In the increasingly globalized film industry, videoconferencing has become useful as a method by which creative talent in many different locations can collaborate closely on the complex details of film production. For example, for the 2013 award-winning animated film Frozen, Burbank-based Walt Disney Animation Studios hired the New York City-based husband-and-wife songwriting team of Robert Lopez and Kristen Anderson-Lopez to write the songs, which required two-hour-long transcontinental videoconferences nearly every weekday for about 14 months. With the development of lower-cost endpoints, the integration of video cameras into personal computers and mobile devices, and software applications such as FaceTime, Skype, Teams, BlueJeans and Zoom, videoconferencing has changed from just a business-to-business offering to include business-to-consumer (and consumer-to-consumer) use. Although videoconferencing has frequently proven its value, research has shown that some non-managerial employees prefer not to use it due to several factors, including anxiety. Some such anxieties can often be avoided if managers use the technology as part of the normal course of business. Remote workers can also adopt certain behaviors and best practices to stay connected with their co-workers and company. Researchers also find that attendees of business and medical videoconferences must work harder to interpret information delivered during a conference than they would if they attended face-to-face. They recommend that those coordinating videoconferences make adjustments to their conferencing procedures and equipment. Press The concept of press videoconferencing was developed in October 2007 by the PanAfrican Press Association (APPA), a Paris France-based non-governmental organization, to allow African journalists to participate in international press conferences on developmental and good governance issues. Press videoconferencing permits international press conferences via videoconferencing over the Internet. Journalists can participate on an international press conference from any location, without leaving their offices or countries. They need only be seated by a computer connected to the Internet in order to ask their questions. In 2004, the International Monetary Fund introduced the Online Media Briefing Center, a password-protected site available only to professional journalists. The site enables the IMF to present press briefings globally and facilitates direct questions to briefers from the press. The site has been copied by other international organizations since its inception. More than 4,000 journalists worldwide are currently registered with the IMF. Sign language One of the first demonstrations of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the Picturephone) was introduced to the public at the 1964 New York World's Fair—two deaf users were able to communicate freely with each other between the fair and another city. 
Various universities and other organizations, including British Telecom's Martlesham facility, have also conducted extensive research on signing via video telephony. The use of sign language via videotelephony was hampered for many years due to the difficulty of its use over slow analog copper phone lines, coupled with the high cost of better quality ISDN (data) phone lines. Those factors largely disappeared with the introduction of more efficient and powerful video codecs and the advent of lower-cost high-speed ISDN data and IP (Internet) services in the 1990s. 21st-century improvements Significant improvements in video call quality of service for the deaf occurred in the United States in 2003 when Sorenson Media Inc. (formerly Sorenson Vision Inc.), a video compression software coding company, developed its VP-100 model stand-alone videophone specifically for the deaf community. It was designed to output its video to the user's television in order to lower the cost of acquisition and to offer remote control and a powerful video compression codec for unequaled video quality and ease of use with video relay services. Favorable reviews quickly led to its popular usage at educational facilities for the deaf, and from there to the greater deaf community. Coupled with similar high-quality videophones introduced by other electronics manufacturers, the availability of high-speed Internet, and sponsored video relay services authorized by the U.S. Federal Communications Commission in 2002, VRS services for the deaf underwent rapid growth in that country. Using such video equipment in the present day, the deaf, hard-of-hearing, and speech-impaired can communicate between themselves and with hearing individuals using sign language. The United States and several other countries compensate companies to provide video relay services (VRS). Telecommunication equipment can be used to talk to others via a sign language interpreter, who uses a conventional telephone at the same time to communicate with the deaf person's party. Video equipment is also used to do on-site sign language translation via Video Remote Interpreting (VRI). The relatively low cost and widespread availability of 3G mobile phone technology with video calling capabilities have given deaf and speech-impaired users a greater ability to communicate with the same ease as others. Some wireless operators have even started free sign language gateways. Sign language interpretation services via VRS or by VRI are useful in the present day where one of the parties is deaf, hard-of-hearing, or speech-impaired (mute). In such cases the interpretation flow is normally within the same principal language, such as French Sign Language (LSF) to spoken French, Spanish Sign Language (LSE) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct from each other), German Sign Language (DGS) to spoken German, and so on. Multilingual sign language interpreters, who can also translate as well across principal languages (such as a multilingual interpreter interpreting a call from a deaf person using ASL to reserve a hotel room at a hotel in the Dominican Republic whose staff speaks Spanish only, therefore the interpreter has to use ASL, spoken Spanish, and spoken English to facilitate the call for the deaf person), are also available, albeit less frequently. 
Such activities involve considerable mental processing efforts on the part of the translator, since sign languages are distinct natural languages with their own construction, semantics and syntax, different from the aural version of the same principal language. With video interpreting, sign language interpreters work remotely with live video and audio feeds, so that the interpreter can see the deaf or mute party, and converse with the hearing party, and vice versa. Much like telephone interpreting, video interpreting can be used for situations in which no on-site interpreters are available. However, video interpreting cannot be used for situations in which all parties are speaking via telephone alone. VRS and VRI interpretation requires all parties to have the necessary equipment. Some advanced equipment enables interpreters to control the video camera remotely, in order to zoom in and out or to point the camera toward the party that is signing. Comparison of Sign Language communication tools Descriptive names and terminology The name videophone never became as standardized as its earlier counterpart telephone, resulting in a variety of names and terms being used worldwide, and even within the same region or country. Videophones are also known as video phones, videotelephones (or video telephones) and often by an early trademarked name Picturephone, which was the world's first commercial videophone produced in volume. The compound name videophone slowly entered into general use after 1950, although video telephone likely entered the lexicon earlier after video was coined in 1935. Videophone calls (also: videocalls, video chat) as well as Skype and Skyping in verb form differ from videoconferencing in that they expect to serve individuals, not groups. However that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that can allow for multiple parties on a call. In general everyday usage the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link. Webcams are popular, relatively low-cost devices that can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing. A videoconference system is generally higher cost than a videophone and deploys greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple-party videoconferencing via web-based applications. A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the-art room designs, video cameras, displays, sound systems and processors, coupled with high-to-very-high capacity bandwidth transmissions. 
Typical uses of the various technologies described above include calling one-to-one or conferencing one-to-many or many-to-many for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic and rehabilitative purposes. New applications, such as personal videocalls to inmates incarcerated in penitentiaries and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or evolving on an ongoing basis. Other names for videophone that have been used in English are: Viewphone (the British Telecom equivalent to AT&T's Picturephone), and visiophone, a common French translation that has also crept into limited English usage, as well as over twenty less common names and expressions. Latin-based translations of videophone in other languages include vidéophone (French), Bildtelefon (German), videotelefono (Italian), both videófono and videoteléfono (Spanish), both beeldtelefoon and videofoon (Dutch), and videofonía (Catalan). A telepresence robot (also telerobotics) is a robotically controlled and motorized videoconferencing display to help give a better sense of remote physical presence for communication and collaboration in an office, home, school, etc. when one cannot be there in person. The robotic avatar device can move about and look around at the command of the remote person it represents. Popular culture In science fiction literature, names commonly associated with videophones include telephonoscope, telephote, viewphone, vidphone, vidfone, and visiphone. The first example was probably the cartoon "Edison's Telephonoscope" by George du Maurier in Punch in 1878. In "In the Year 2889", published in 1889, the French author Jules Verne predicts that "The transmission of speech is an old story; the transmission of images by means of sensitive mirrors connected by wires is a thing but of yesterday." In many science fiction movies and TV programs that are set in the future, videophones were used as a primary method of communication. One of the first movies where a videophone was used was Fritz Lang's Metropolis (1927). Other notable examples of videophones in popular culture include an iconic scene from the 1968 film 2001: A Space Odyssey set on Space Station V. The movie was released shortly before AT&T began its efforts to commercialize its Picturephone Mod II service in several cities and depicts a video call to Earth using an advanced AT&T videophone—which it predicts will cost $1.70 for a two-minute call in 2001 (a fraction of the company's real rates on Earth in 1968). Film director Stanley Kubrick strove for scientific accuracy, relying on interviews with scientists and engineers at Bell Labs in the United States. Dr. Larry Rabiner of Bell Labs, discussing videophone research in the documentary 2001: The Making of a Myth, stated that in the mid- to late 1960s videophones "...captured the imagination of the public and... of Mr. Kubrick and the people who reported to him". In one scene of the 2001 movie, a central character, Dr. Heywood Floyd, calls home to contact his family, a social feature noted in the Making of a Myth. Floyd talks with and views his daughter from a space station in orbit above the Earth, discussing what type of present he should bring home for her. Other earlier examples of videophones in popular culture included a videophone that was featured in the Warner Bros.
cartoon Plane Daffy, in which the female spy Hatta Mari used a videophone to communicate with Adolf Hitler (1944), as well as a device with the same functionality used by the comic strip character Dick Tracy, who often relied on his "2-way wrist TV" to communicate with police headquarters (1964–1977). By the early 2010s videotelephony and videophones had become commonplace and unremarkable in various forms of media, in part due to their real and ubiquitous presence in common electronic devices and laptop computers. Additionally, TV programming increasingly used videophones to interview subjects of interest and to present live coverage by news correspondents, via the Internet or by satellite links. In the mass market media, the popular U.S. TV talk show hostess Oprah Winfrey incorporated videotelephony into her TV program on a regular basis from May 21, 2009, with an initial episode called Where the Skype Are You?, as part of a marketing agreement with the Internet telecommunication company Skype.
Technology
Telecommunications
null
1075071
https://en.wikipedia.org/wiki/Transcriptome
Transcriptome
The transcriptome is the set of all RNA transcripts, including coding and non-coding, in an individual or a population of cells. The term can also sometimes be used to refer to all RNAs, or just mRNA, depending on the particular experiment. The term transcriptome is a portmanteau of the words transcript and genome; it is associated with the process of transcript production during the biological process of transcription. The early stages of transcriptome annotation began with cDNA libraries published in the 1980s. Subsequently, the advent of high-throughput technology led to faster and more efficient ways of obtaining data about the transcriptome. Two biological techniques are used to study the transcriptome: DNA microarrays, a hybridization-based technique, and RNA-seq, a sequence-based approach. RNA-seq is the preferred method and has been the dominant transcriptomics technique since the 2010s. Single-cell transcriptomics allows tracking of transcript changes over time within individual cells. Data obtained from the transcriptome is used in research to gain insight into processes such as cellular differentiation, carcinogenesis, transcription regulation and biomarker discovery, among others. Transcriptome-derived data also finds applications in establishing phylogenetic relationships during the process of evolution and in in vitro fertilization. The transcriptome is closely related to other -ome based biological fields of study; it is complementary to the proteome and the metabolome and encompasses the translatome, exome, meiome and thanatotranscriptome, which can be seen as -ome fields studying specific types of RNA transcripts. There are quantifiable and conserved relationships between the transcriptome and other -omes, and transcriptomics data can be used effectively to predict other molecular species, such as metabolites. There are numerous publicly available transcriptome databases. Etymology and history The word transcriptome is a portmanteau of the words transcript and genome. It appeared along with other neologisms formed using the suffixes -ome and -omics to denote all studies conducted on a genome-wide scale in the fields of life sciences and technology. As such, transcriptome and transcriptomics were among the first such words to emerge, along with genome and proteome. The first study to present a collection of a cDNA library, for silk moth mRNA, was published in 1979. The first seminal study to mention and investigate the transcriptome of an organism was published in 1997; it described 60,633 transcripts expressed in S. cerevisiae using serial analysis of gene expression (SAGE). With the rise of high-throughput technologies and bioinformatics, and the subsequent increase in computational power, it became increasingly efficient and easy to characterize and analyze enormous amounts of data. Attempts to characterize the transcriptome became more prominent with the advent of automated DNA sequencing during the 1980s. During the 1990s, expressed sequence tag sequencing was used to identify genes and their fragments. This was followed by techniques such as serial analysis of gene expression (SAGE), cap analysis of gene expression (CAGE), and massively parallel signature sequencing (MPSS). Transcription The transcriptome encompasses all the ribonucleic acid (RNA) transcripts present in a given organism or experimental sample. RNA is the main carrier of genetic information responsible for the process of converting DNA into an organism's phenotype. 
A gene can give rise to a single-stranded messenger RNA (mRNA) through a molecular process known as transcription; this mRNA is complementary to the strand of DNA it originated from. The enzyme RNA polymerase II attaches to the template DNA strand and catalyzes the addition of ribonucleotides to the 3' end of the growing sequence of the mRNA transcript. In order to initiate its function, RNA polymerase II needs to recognize a promoter sequence, located upstream (5') of the gene. In eukaryotes, this process is mediated by transcription factors, most notably Transcription factor II D (TFIID), which recognizes the TATA box and aids in the positioning of RNA polymerase at the appropriate start site. To finish the production of the RNA transcript, termination usually occurs several hundred nucleotides away from the termination sequence, and cleavage of the transcript takes place. This process occurs in the nucleus of a cell along with RNA processing, by which mRNA molecules are capped, spliced and polyadenylated to increase their stability before being subsequently taken to the cytoplasm. The mRNA gives rise to proteins through the process of translation, which takes place in ribosomes. Types of RNA transcripts Almost all functional transcripts are derived from known genes. The only exceptions are a small number of transcripts that might play a direct role in regulating gene expression near the promoters of known genes. (See Enhancer RNA.) Genes occupy most of prokaryotic genomes, so most of their genomes are transcribed. Many eukaryotic genomes are very large, and known genes may take up only a fraction of the genome. In mammals, for example, known genes only account for 40–50% of the genome. Nevertheless, identified transcripts often map to a much larger fraction of the genome, suggesting that the transcriptome contains spurious transcripts that do not come from genes. Some of these transcripts are known to be non-functional because they map to transcribed pseudogenes or to degenerate transposons and viruses. Others map to unidentified regions of the genome that may be junk DNA. Spurious transcription is very common in eukaryotes, especially those with large genomes that might contain a lot of junk DNA. Some scientists claim that if a transcript has not been assigned to a known gene then the default assumption must be that it is junk RNA until it has been shown to be functional. This would mean that much of the transcriptome in species with large genomes is probably junk RNA. (See Non-coding RNA.) The transcriptome includes the transcripts of protein-coding genes (mRNA plus introns) as well as the transcripts of non-coding genes (functional RNAs plus introns). Ribosomal RNA/rRNA: Usually the most abundant RNA in the transcriptome. Long non-coding RNA/lncRNA: Non-coding RNA transcripts that are more than 200 nucleotides long. Members of this group comprise the largest fraction of the non-coding transcriptome other than introns. It is not known how many of these transcripts are functional and how many are junk RNA. transfer RNA/tRNA micro RNA/miRNA: 19–24 nucleotides (nt) long. MicroRNAs up- or downregulate expression levels of mRNAs by the process of RNA interference at the post-transcriptional level. small interfering RNA/siRNA: 20–24 nt small nucleolar RNA/snoRNA Piwi-interacting RNA/piRNA: 24–31 nt. They interact with Piwi proteins of the Argonaute family and have a function in targeting and cleaving transposons. 
enhancer RNA/eRNA: Scope of study In the human genome, all genes get transcribed into RNA because that's how the molecular gene is defined. (See Gene.) The transcriptome consists of coding regions of mRNA plus non-coding UTRs, introns, non-coding RNAs, and spurious non-functional transcripts. Several factors render the content of the transcriptome difficult to establish. These include alternative splicing, RNA editing and alternative transcription among others. Additionally, transcriptome techniques are capable of capturing transcription occurring in a sample at a specific time point, although the content of the transcriptome can change during differentiation. The main aims of transcriptomics are the following: "catalogue all species of transcript, including mRNAs, non-coding RNAs and small RNAs; to determine the transcriptional structure of genes, in terms of their start sites, 5′ and 3′ ends, splicing patterns and other post-transcriptional modifications; and to quantify the changing expression levels of each transcript during development and under different conditions". The term can be applied to the total set of transcripts in a given organism, or to the specific subset of transcripts present in a particular cell type. Unlike the genome, which is roughly fixed for a given cell line (excluding mutations), the transcriptome can vary with external environmental conditions. Because it includes all mRNA transcripts in the cell, the transcriptome reflects the genes that are being actively expressed at any given time, with the exception of mRNA degradation phenomena such as transcriptional attenuation. The study of transcriptomics, (which includes expression profiling, splice variant analysis etc.), examines the expression level of RNAs in a given cell population, often focusing on mRNA, but sometimes including others such as tRNAs and sRNAs. Methods of construction Transcriptomics is the quantitative science that encompasses the assignment of a list of strings ("reads") to the object ("transcripts" in the genome). To calculate the expression strength, the density of reads corresponding to each object is counted. Initially, transcriptomes were analyzed and studied using expressed sequence tags libraries and serial and cap analysis of gene expression (SAGE). Currently, the two main transcriptomics techniques include DNA microarrays and RNA-Seq. Both techniques require RNA isolation through RNA extraction techniques, followed by its separation from other cellular components and enrichment of mRNA. There are two general methods of inferring transcriptome sequences. One approach maps sequence reads onto a reference genome, either of the organism itself (whose transcriptome is being studied) or of a closely related species. The other approach, de novo transcriptome assembly, uses software to infer transcripts directly from short sequence reads and is used in organisms with genomes that are not sequenced. DNA microarrays The first transcriptome studies were based on microarray techniques (also known as DNA chips). Microarrays consist of thin glass layers with spots on which oligonucleotides, known as "probes" are arrayed; each spot contains a known DNA sequence. When performing microarray analyses, mRNA is collected from a control and an experimental sample, the latter usually representative of a disease. The RNA of interest is converted to cDNA to increase its stability and marked with fluorophores of two colors, usually green and red, for the two groups. 
The cDNA is spread onto the surface of the microarray, where it hybridizes with oligonucleotides on the chip, and a laser is used to scan the chip. The fluorescence intensity on each spot of the microarray corresponds to the level of gene expression, and based on the color of the fluorophores selected, it can be determined which of the samples exhibits higher levels of the mRNA of interest. One microarray usually contains enough oligonucleotides to represent all known genes; however, data obtained using microarrays do not provide information about unknown genes. During the 2010s, microarrays were almost completely replaced by next-generation techniques that are based on DNA sequencing. RNA sequencing RNA sequencing is a next-generation sequencing technology; as such it requires only a small amount of RNA and no previous knowledge of the genome. It allows for both qualitative and quantitative analysis of RNA transcripts, the former allowing discovery of new transcripts and the latter a measure of relative quantities of transcripts in a sample. The three main steps of sequencing the transcriptome of any biological sample are RNA purification, the synthesis of an RNA or cDNA library, and sequencing of the library. The RNA purification process is different for short and long RNAs. This step is usually followed by an assessment of RNA quality, with the purpose of avoiding contaminants such as DNA or technical contaminants related to sample processing. RNA quality is measured using UV spectrometry, with an absorbance peak at 260 nm. RNA integrity can also be analyzed quantitatively by comparing the ratio and intensity of 28S RNA to 18S RNA, reported in the RNA Integrity Number (RIN) score. Since mRNA is the species of interest and it represents only about 3% of the total RNA content, the RNA sample should be treated to remove rRNA and tRNA and tissue-specific RNA transcripts. The library preparation step, which aims to produce short cDNA fragments, begins with fragmentation of the RNA into transcripts between 50 and 300 base pairs in length. Fragmentation can be enzymatic (RNA endonucleases), chemical (tris-magnesium salt buffer, chemical hydrolysis) or mechanical (sonication, nebulisation). Reverse transcription is used to convert the RNA templates into cDNA, and three priming methods can be used to achieve it: oligo-dT primers, random primers, or ligated adaptor oligos. Single-cell transcriptomics Transcription can also be studied at the level of individual cells by single-cell transcriptomics. Single-cell RNA sequencing (scRNA-seq) is a recently developed technique that allows the analysis of the transcriptome of single cells, including bacteria. With single-cell transcriptomics, subpopulations of cell types that constitute the tissue of interest are also taken into consideration. This approach makes it possible to identify whether changes in experimental samples are due to phenotypic cellular changes as opposed to proliferation, in which a specific cell type might be overrepresented in the sample. Additionally, when assessing cellular progression through differentiation, average expression profiles are only able to order cells by time rather than by their stage of development, and are consequently unable to show trends in gene expression levels specific to certain stages. Single-cell transcriptomic techniques have been used to characterize rare cell populations such as circulating tumor cells, cancer stem cells in solid tumors, and embryonic stem cells (ESCs) in mammalian blastocysts. 
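Regardless of whether reads come from bulk or single-cell libraries, the quantification described under Methods of construction above ultimately reduces to counting the reads assigned to each transcript and normalizing for transcript length and sequencing depth. The following is a minimal sketch of one common normalization, transcripts per million (TPM); the gene names, lengths and counts are hypothetical placeholders rather than data from any real experiment, and real pipelines add alignment, multi-mapping resolution and quality-control steps not shown here.

```python
# Minimal sketch of expression quantification from aligned read counts.
# Gene lengths (in bases) and read counts are hypothetical example values,
# not output from any specific RNA-seq pipeline.

def tpm(counts: dict[str, int], lengths: dict[str, float]) -> dict[str, float]:
    """Convert raw read counts per transcript into transcripts per million (TPM).

    TPM first normalizes each count by transcript length (reads per kilobase),
    then rescales so the values sum to one million, making samples comparable
    despite different sequencing depths.
    """
    rpk = {g: counts[g] / (lengths[g] / 1000.0) for g in counts}  # reads per kilobase
    scale = sum(rpk.values()) / 1_000_000                         # per-million scaling factor
    return {g: rpk[g] / scale for g in rpk}

if __name__ == "__main__":
    counts = {"geneA": 500, "geneB": 1200, "geneC": 300}       # aligned reads per gene
    lengths = {"geneA": 2000, "geneB": 4000, "geneC": 1000}    # transcript lengths in bases
    for gene, value in tpm(counts, lengths).items():
        print(f"{gene}: {value:.1f} TPM")
```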
Although there are no standardized techniques for single-cell transcriptomics, several steps need to be undertaken. The first step is cell isolation, which can be performed using low- and high-throughput techniques. This is followed by a qPCR step and then single-cell RNA-seq, where the RNA of interest is converted into cDNA. Newer developments in single-cell transcriptomics allow for the preservation of tissue and sub-cellular localization through cryo-sectioning thin slices of tissue and sequencing the transcriptome in each slice. Another technique allows the visualization of single transcripts under a microscope while preserving the spatial information of each individual cell where they are expressed. Analysis A number of organism-specific transcriptome databases have been constructed and annotated to aid in the identification of genes that are differentially expressed in distinct cell populations. RNA-seq is emerging (2013) as the method of choice for measuring the transcriptomes of organisms, though the older technique of DNA microarrays is still used. RNA-seq measures the transcription of a specific gene by converting long RNAs into a library of cDNA fragments. The cDNA fragments are then sequenced using high-throughput sequencing technology and aligned to a reference genome or transcriptome, which is then used to create an expression profile of the genes. Applications Mammals The transcriptomes of stem cells and cancer cells are of particular interest to researchers who seek to understand the processes of cellular differentiation and carcinogenesis. A pipeline using RNA-seq or gene array data can be used to track genetic changes occurring in stem and precursor cells, and requires gene expression data from at least three independent samples of both the precursor and the mature cell types. Analysis of the transcriptomes of human oocytes and embryos is used to understand the molecular mechanisms and signaling pathways controlling early embryonic development, and could theoretically be a powerful tool in making proper embryo selection in in vitro fertilisation. Analyses of the transcriptome content of the placenta in the first trimester of pregnancy in in vitro fertilization and embryo transfer (IVF-ET) revealed differences in genetic expression which are associated with a higher frequency of adverse perinatal outcomes. Such insight can be used to optimize the practice. Transcriptome analyses can also be used to optimize cryopreservation of oocytes, by lowering injuries associated with the process. Transcriptomics is an emerging and continually growing field in biomarker discovery for use in assessing the safety of drugs or in chemical risk assessment. Transcriptomes may also be used to infer phylogenetic relationships among individuals or to detect evolutionary patterns of transcriptome conservation. Transcriptome analyses have been used to discover the incidence of antisense transcription, its role in gene expression through interaction with surrounding genes, and its abundance in different chromosomes. RNA-seq was also used to show how RNA isoforms, transcripts stemming from the same gene but with different structures, can produce complex phenotypes from limited genomes. Plants Transcriptome analyses have been used to study the evolution and diversification of plant species. In 2014, the 1000 Plant Genomes Project was completed, in which the transcriptomes of 1,124 plant species from the groups Viridiplantae, Glaucophyta and Rhodophyta were sequenced. 
The protein coding sequences were subsequently compared to infer phylogenetic relationships between plants and to characterize the timing of their diversification in the process of evolution. Transcriptome studies have been used to characterize and quantify gene expression in mature pollen. Genes involved in cell wall metabolism and the cytoskeleton were found to be overexpressed. Transcriptome approaches have also allowed the tracking of changes in gene expression through different developmental stages of pollen, ranging from the microspore to mature pollen grains; additionally, such stages could be compared across species of different plants, including Arabidopsis, rice and tobacco. Relation to other -ome fields Similar to other -ome based technologies, analysis of the transcriptome allows for an unbiased approach when validating hypotheses experimentally. This approach also allows for the discovery of novel mediators in signaling pathways. As with other -omics based technologies, the transcriptome can be analyzed within the scope of a multiomics approach. It is complementary to metabolomics but, in contrast to proteomics, a direct association between a transcript and a metabolite cannot be established. There are several -ome fields that can be seen as subcategories of the transcriptome. The exome differs from the transcriptome in that it includes only those RNA molecules found in a specified cell population, and usually includes the amount or concentration of each RNA molecule in addition to the molecular identities. Additionally, the transcriptome also differs from the translatome, which is the set of RNAs undergoing translation. The term meiome is used in functional genomics to describe the meiotic transcriptome, or the set of RNA transcripts produced during the process of meiosis. Meiosis is a key feature of sexually reproducing eukaryotes, and involves the pairing of homologous chromosomes, synapsis and recombination. Since meiosis in most organisms occurs in a short time period, meiotic transcript profiling is difficult due to the challenge of isolation (or enrichment) of meiotic cells (meiocytes). As with transcriptome analyses, the meiome can be studied at a whole-genome level using large-scale transcriptomic techniques. The meiome has been well characterized in mammal and yeast systems and somewhat less extensively characterized in plants. The thanatotranscriptome consists of all RNA transcripts that continue to be expressed, or that start being re-expressed, in the internal organs of a dead body 24–48 hours following death. These include some genes that are normally inhibited after fetal development. If the thanatotranscriptome is related to the process of programmed cell death (apoptosis), it can be referred to as the apoptotic thanatotranscriptome. Analyses of the thanatotranscriptome are used in forensic medicine. eQTL mapping can be used to complement genomics with transcriptomics, linking genetic variants at the DNA level to gene expression measured at the RNA level. Relation to proteome The transcriptome can be seen as a precursor of the proteome, that is, the entire set of proteins expressed by a genome. However, the analysis of relative mRNA expression levels can be complicated by the fact that relatively small changes in mRNA expression can produce large changes in the total amount of the corresponding protein present in the cell. One analysis method, known as gene set enrichment analysis, identifies coregulated gene networks rather than individual genes that are up- or down-regulated in different cell populations. 
Although microarray studies can reveal the relative amounts of different mRNAs in the cell, levels of mRNA are not directly proportional to the expression level of the proteins they code for. The number of protein molecules synthesized using a given mRNA molecule as a template is highly dependent on the translation-initiation features of the mRNA sequence; in particular, the ability of the translation initiation sequence to recruit ribosomes is a key determinant of how efficiently the mRNA is translated into protein. Transcriptome databases include Ensembl, OmicTools, Transcriptome Browser, and ArrayExpress.
Biology and health sciences
Molecular biology
Biology
1076188
https://en.wikipedia.org/wiki/Xenopeltis
Xenopeltis
Xenopeltis, the sunbeam snakes, is the sole genus of the monotypic family Xenopeltidae, the species of which are found in Southeast Asia. Sunbeam snakes are known for their highly iridescent scales. Three species are recognized, each with no subspecies. Studies of DNA suggest that the xenopeltids are most closely related to the Mexican burrowing python (Loxocemus bicolor) and to the true pythons (Pythonidae). Description Adults can grow up to about 1.3 m in length. The head scales are made up of large plates much like those of the Colubridae, while the ventral scales are only slightly reduced. Pelvic vestiges are not present. The dorsal color pattern is a reddish-brown, brown, or blackish color. The belly is an unpatterned whitish-gray. The scales are highly iridescent. Geographic range They are found in Southeast Asia from the Andaman and Nicobar Islands, east through Myanmar to southern China, Thailand, Laos, Cambodia, Vietnam, the Malay Peninsula and the East Indies to Sulawesi, as well as the Philippines. Behavior and diet These snakes are fossorial, spending much of their time hidden. They emerge at dusk to actively forage for frogs, other snakes, and small mammals. They are not venomous, and kill their prey with constriction. Species The genus comprises three species; Xenopeltis unicolor is the type species. Captivity These snakes are not very commonly kept as pets because of their high mortality rate in captivity. Shipping and the first six months in captivity are very stressful and often kill captive snakes. They also have very little tolerance of handling, with the resulting stress leading to premature death. Captive specimens should be provided with a temperature gradient and a substrate that is easy to burrow into. The cage should be kept warm, but not hot, and the snakes should be left alone.
Biology and health sciences
Snakes
Animals
1076314
https://en.wikipedia.org/wiki/Ailuridae
Ailuridae
Ailuridae is a family in the mammal order Carnivora. The family consists of the red panda (the sole living representative) and its extinct relatives. Georges Cuvier first described Ailurus as belonging to the raccoon family in 1825; this classification has been controversial ever since. It was classified in the raccoon family because of morphological similarities of the head, the colored ringed tail, and other morphological and ecological characteristics. Somewhat later, it was assigned to the bear family. Molecular phylogenetic studies have shown that, as an ancient species in the order Carnivora, the red panda is relatively close to the American raccoon and may be either a monotypic family or a subfamily within the procyonid family. An in-depth mitochondrial DNA population analysis study stated: "According to the fossil record, the Red Panda diverged from its common ancestor with bears about 40 million years ago." With this divergence, by comparing the sequence difference between the red panda and the raccoon, the observed mutation rate for the red panda was calculated to be on the order of 10⁻⁹, which is apparently an underestimate compared with the average rate in mammals. This underestimation is probably due to multiple recurrent mutations, as the divergence between the red panda and the raccoon is extremely deep. The most recent molecular-systematic DNA research places the red panda into its own independent family, Ailuridae. Ailuridae are, in turn, part of a trichotomy within the broad superfamily Musteloidea that also includes the Procyonidae (raccoons) and a group that further subdivides into the Mephitidae (skunks) and Mustelidae (weasels); but it is not a bear (Ursidae). Ailurids appear to have originated during the Late Oligocene to Early Miocene in Europe. The earliest known member, Amphictis, was likely an unspecialised carnivore, based on its dentition. Ailurids subsequently dispersed into Asia and North America. The puma-sized Simocyon, found in the Middle Miocene to Early Pliocene of Europe, North America and China, was likely a hypercarnivore. Like the modern red panda, it had a "false thumb" to aid in climbing. Members of the subfamily Ailurinae, which includes the modern red panda as well as the extinct genera Pristinailurus and Parailurus, developed a specialised dental morphology with blunted cusps, creating an effective grinding surface to process plant material. Classification The relationship of the Ailuridae with other carnivorans has been inferred from the molecular phylogenetic analysis of six genes in Flynn (2005), with the musteloids updated following the multigene analysis of Law et al. (2018). In addition to Ailurus, the family Ailuridae includes seven extinct genera, most of which are assigned to three subfamilies: Amphictinae, Simocyoninae, and Ailurinae. Family Ailuridae J.E. Gray, 1843 Subfamily †Amphictinae ?Winge, 1896 †Amphictis ?Pomel, 1853 †Amphictis borbonica Viret, 1929 †Amphictis ambigua (Gervais, 1872) †Amphictis milloquensis (Helbing, 1936) †Amphictis antiqua (de Blainville, 1842) †Amphictis schlosseri Heizmann & Morlo, 1994 †Amphictis prolongata Morlo, 1996 †Amphictis wintershofensis Roth, 1994 †Amphictis cuspida Nagel, 2003 †Amphictis timucua J.A. 
Baskin, 2017 Subfamily †Simocyoninae Dawkins, 1868 †Actiocyon Stock, 1947 †Actiocyon parverratis Smith et al., 2016 †Actiocyon leardi Stock, 1947 †Alopecocyon Camp & Vanderhoof, 1940 †Alopecocyon getti Mein, 1958 †Alopecocyon goeriachensis (Toula, 1884) †Protursus Crusafont & Kurtén, 1976 †Protursus simpsoni Crusafont & Kurtén, 1976 †Simocyon Wagner, 1858 †Simocyon primigenius (Roth & Wagner, 1854) †Simocyon diaphorus (Kaup, 1832) †Simocyon batalleri Viret, 1929 †Simocyon hungaricus Kadic & Kretzoi, 1927 Subfamily Ailurinae J.E. Gray, 1843 †Magerictis Ginsburg et al., 1997 †Magerictis imperialensis Ginsburg et al., 1997 Tribe Pristinailurini Wallace & Lyon, 2022 †Pristinailurus Wallace & Wang, 2004 †Pristinailurus bristoli Wallace & Wang, 2004 †Parailurus Schlosser, 1899 †Parailurus anglicus (Dawkins, 1888) [Parailurus hungaricus Kormos, 1935] †Parailurus tedfordi Wallace & Lyon, 2022 †Parailurus baikalicus Sotnikova, 2008 Tribe Ailurini Ailurus F. Cuvier, 1825 Ailurus fulgens - Red panda Ailurus fulgens styani Thomas, 1902 – Eastern red panda Ailurus fulgens fulgens F. Cuvier, 1825 – Western red panda An additional, unnamed taxon called only "Ailurinae indet." was described in 2001 based on an upper molar from Four, a Middle Miocene-age locality near Isère, France.
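As an illustration of how a rate on the order of 10⁻⁹ arises from such a comparison, a simple molecular-clock calculation can be sketched; the pairwise sequence difference used below is a hypothetical placeholder, not a value reported by the study cited above.

```latex
% Two lineages that split T years ago have each accumulated substitutions
% independently, so a pairwise difference of d substitutions per site reflects
% roughly 2T years of evolution:
\mu \;\approx\; \frac{d}{2T}
% With a hypothetical d \approx 0.1 substitutions per site and T \approx 4\times 10^{7} years:
\mu \;\approx\; \frac{0.1}{2 \times 4\times 10^{7}\,\text{yr}}
    \;\approx\; 1.3\times 10^{-9}\ \text{substitutions per site per year}
```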
Biology and health sciences
Other carnivora
Animals
1077261
https://en.wikipedia.org/wiki/Spin%20quantum%20number
Spin quantum number
In physics and chemistry, the spin quantum number is a quantum number (designated s) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. It has the same value for all particles of the same type, such as s = 1/2 for all electrons. It is an integer for all bosons, such as photons, and a half-odd-integer for all fermions, such as electrons and protons. The component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written m_s. The value of m_s is the component of spin angular momentum, in units of the reduced Planck constant ħ, parallel to a given direction (conventionally labelled the z-axis). It can take values ranging from +s to −s in integer increments. For an electron, m_s can be either +1/2 or −1/2. Nomenclature The phrase spin quantum number refers to quantized spin angular momentum. The symbol s is used for the spin quantum number, and m_s is described as the spin magnetic quantum number or as the z-component of spin. Both the total spin and the z-component of spin are quantized, leading to two quantum numbers: the spin quantum number and the spin magnetic quantum number. The (total) spin quantum number has only one value for every elementary particle. Some introductory chemistry textbooks describe m_s as the spin quantum number, and s is not mentioned since its value is a fixed property of the electron; some even use the variable s in place of m_s. The two spin quantum numbers s and m_s are the spin angular momentum analogs of the two orbital angular momentum quantum numbers l and m_l. Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electron. Capitalized symbols are used: S for the total electronic spin, and m_S or M_S for the z-axis component. A pair of electrons in a spin singlet state has S = 0, and a pair in the triplet state has S = 1, with M_S = −1, 0, or +1. Nuclear-spin quantum numbers are conventionally written I for spin, and m_I or M_I for the z-axis component. The name "spin" comes from a geometrical spinning of the electron about an axis, as proposed by Uhlenbeck and Goudsmit. However, this simplistic picture was quickly realized to be physically unrealistic, because it would require the electrons to rotate faster than the speed of light. It was therefore replaced by a more abstract quantum-mechanical description. History During the period between 1916 and 1925, much progress was being made concerning the arrangement of electrons in the periodic table. In order to explain the Zeeman effect in the Bohr atom, Sommerfeld proposed that electrons would be described by three 'quantum numbers', n, k, and m, that described the size of the orbit, the shape of the orbit, and the direction in which the orbit was pointing. Irving Langmuir had explained in his 1919 paper regarding electrons in their shells, "Rydberg has pointed out that these numbers are obtained from the series. The factor two suggests a fundamental two-fold symmetry for all stable atoms." This configuration was adopted by Edmund Stoner, in October 1924, in his paper 'The Distribution of Electrons Among Atomic Levels' published in the Philosophical Magazine. Despite its qualitative success, the Sommerfeld quantum number scheme failed to explain the Zeeman effect in weak magnetic field strengths, the anomalous Zeeman effect. In December 1924, Wolfgang Pauli showed that the core electron angular momentum was not related to the effect as had previously been assumed. 
Rather he proposed that only the outer "light" electrons determined the angular momentum and he hypothesized that this required a fourth quantum number with a two-valuedness. This fourth quantum number became the spin magnetic quantum number. Electron spin A spin- particle is characterized by an angular momentum quantum number for spin = . In solutions of the Schrödinger-Pauli equation, angular momentum is quantized according to this number, so that magnitude of the spin angular momentum is The hydrogen spectrum fine structure is observed as a doublet corresponding to two possibilities for the z-component of the angular momentum, where for any given direction : whose solution has only two possible -components for the electron. In the electron, the two different spin orientations are sometimes called "spin-up" or "spin-down". The spin property of an electron would give rise to magnetic moment, which was a requisite for the fourth quantum number. The magnetic moment vector of an electron spin is given by: where is the electron charge, is the electron mass, and is the electron spin g-factor, which is approximately 2.0023. Its z-axis projection is given by the spin magnetic quantum number according to: where is the Bohr magneton. When atoms have even numbers of electrons the spin of each electron in each orbital has opposing orientation to that of its immediate neighbor(s). However, many atoms have an odd number of electrons or an arrangement of electrons in which there is an unequal number of "spin-up" and "spin-down" orientations. These atoms or electrons are said to have unpaired spins that are detected in electron spin resonance. Nuclear spin Atomic nuclei also have spins. The nuclear spin is a fixed property of each nucleus and may be either an integer or a half-integer. The component of nuclear spin parallel to the –axis can have (2 + 1) values , –1, ..., . For example, a N nucleus has = 1, so that there are 3 possible orientations relative to the –axis, corresponding to states = +1, 0 and −1. The spins of different nuclei are interpreted using the nuclear shell model. Even-even nuclei with even numbers of both protons and neutrons, such as C and O, have spin zero. Odd mass number nuclei have half-integer spins, such as for Li, for C and for O, usually corresponding to the angular momentum of the last nucleon added. Odd-odd nuclei with odd numbers of both protons and neutrons have integer spins, such as 3 for B, and 1 for N. Values of nuclear spin for a given isotope are found in the lists of isotopes for each element. (See isotopes of oxygen, isotopes of aluminium, etc. etc.) Detection of spin When lines of the hydrogen spectrum are examined at very high resolution, they are found to be closely spaced doublets. This splitting is called fine structure, and was one of the first experimental evidences for electron spin. The direct observation of the electron's intrinsic angular momentum was achieved in the Stern–Gerlach experiment. Stern–Gerlach experiment The theory of spatial quantization of the spin moment of the momentum of electrons of atoms situated in the magnetic field needed to be proved experimentally. In 1922 (two years before the theoretical description of the spin was created) Otto Stern and Walter Gerlach observed it in the experiment they conducted. Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an in-homogeneous magnetic field before colliding with a metallic plate. 
The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the in-homogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate. The phenomenon can be explained with the spatial quantization of the spin moment of momentum. In atoms the electrons are paired such that one spins upward and one downward, neutralizing the effect of their spin on the action of the atom as a whole. But in the valence shell of silver atoms, there is a single electron whose spin remains unbalanced. The unbalanced spin creates spin magnetic moment, making the electron act like a very small magnet. As the atoms pass through the in-homogeneous magnetic field, the force moment in the magnetic field influences the electron's dipole until its position matches the direction of the stronger field. The atom would then be pulled toward or away from the stronger magnetic field a specific amount, depending on the value of the valence electron's spin. When the spin of the electron is the atom moves away from the stronger field, and when the spin is the atom moves toward it. Thus the beam of silver atoms is split while traveling through the in-homogeneous magnetic field, according to the spin of each atom's valence electron. In 1927 Phipps and Taylor conducted a similar experiment, using atoms of hydrogen with similar results. Later scientists conducted experiments using other atoms that have only one electron in their valence shell: (copper, gold, sodium, potassium). Every time there were two lines formed on the metallic plate. The atomic nucleus also may have spin, but protons and neutrons are much heavier than electrons (about 1836 times), and the magnetic dipole moment is inversely proportional to the mass. So the nuclear magnetic dipole momentum is much smaller than that of the whole atom. This small magnetic dipole was later measured by Stern, Frisch and Easterman. Electron paramagnetic resonance For atoms or molecules with an unpaired electron, transitions in a magnetic field can also be observed in which only the spin quantum number changes, without change in the electron orbital or the other quantum numbers. This is the method of electron paramagnetic resonance (EPR) or electron spin resonance (ESR), used to study free radicals. Since only the magnetic interaction of the spin changes, the energy change is much smaller than for transitions between orbitals, and the spectra are observed in the microwave region. Relation to spin vectors For a solution of either the nonrelativistic Pauli equation or the relativistic Dirac equation, the quantized angular momentum (see angular momentum quantum number) can be written as: where is the quantized spin vector or spinor is the norm of the spin vector is the spin quantum number associated with the spin angular momentum is the reduced Planck constant. Given an arbitrary direction (usually determined by an external magnetic field) the spin -projection is given by where is the magnetic spin quantum number, ranging from − to + in steps of one. This generates different values of . The allowed values for are non-negative integers or half-integers. Fermions have half-integer values, including the electron, proton and neutron which all have Bosons such as the photon and all mesons) have integer spin values. Algebra The algebraic theory of spin is a carbon copy of the angular momentum in quantum mechanics theory. 
First of all, spin satisfies the fundamental commutation relation [S_i, S_j] = iħ ε_ijk S_k, where ε_ijk is the (antisymmetric) Levi-Civita symbol. This means that it is impossible to know two components of the spin at the same time, because of the restriction of the uncertainty principle. Next, the eigenvectors of S² and S_z satisfy the standard eigenvalue relations, which can be expressed using the ladder (or "raising" and "lowering") operators S₊ and S₋ (collected in the summary relations below). Energy levels from the Dirac equation In 1928, Paul Dirac developed a relativistic wave equation, now termed the Dirac equation, which predicted the spin magnetic moment correctly, and at the same time treated the electron as a point-like particle. Solving the Dirac equation for the energy levels of an electron in the hydrogen atom, all four quantum numbers, including s, occurred naturally and agreed well with experiment. Total spin of an atom or molecule For some atoms the spins of several unpaired electrons (s₁, s₂, ...) are coupled to form a total spin quantum number S. This occurs especially in light atoms (or in molecules formed only of light atoms) when spin–orbit coupling is weak compared to the coupling between spins or the coupling between orbital angular momenta, a situation known as LS coupling because L and S are constants of motion. Here L is the total orbital angular momentum quantum number. For atoms with a well-defined S, the multiplicity of a state is defined as 2S + 1. This is equal to the number of different possible values of the total (orbital plus spin) angular momentum J for a given (L, S) combination, provided that S ≤ L (the typical case). For example, if S = 1, there are three states which form a triplet. The eigenvalues of S_z for these three states are +ħ, 0, and −ħ. The term symbol of an atomic state indicates its values of L, S, and J. As examples, the ground states of both the oxygen atom and the dioxygen molecule have two unpaired electrons and are therefore triplet states. The atomic state is described by the term symbol ³P, and the molecular state by the term symbol ³Σ.
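For reference, the standard relations discussed in this article can be collected in conventional notation; this is a summary of textbook results rather than notation taken from any particular source cited above.

```latex
% Magnitude and z-component of the spin angular momentum for spin quantum number s:
|\mathbf{S}| = \hbar\sqrt{s(s+1)}, \qquad S_z = m_s\hbar, \qquad m_s \in \{-s,\, -s+1,\, \ldots,\, +s\}

% For the electron s = 1/2, so m_s = +1/2 ("spin-up") or -1/2 ("spin-down").

% Spin magnetic moment of the electron and its z-projection (g_s \approx 2.0023):
\boldsymbol{\mu}_s = -\,g_s \frac{\mu_\mathrm{B}}{\hbar}\,\mathbf{S}, \qquad \mu_z = -\,g_s\,\mu_\mathrm{B}\, m_s

% Fundamental commutation relation and ladder operators used in the Algebra section:
[S_i, S_j] = i\hbar\,\varepsilon_{ijk} S_k, \qquad
S_\pm\,|s, m_s\rangle = \hbar\sqrt{s(s+1) - m_s(m_s \pm 1)}\;|s, m_s \pm 1\rangle

% Multiplicity of a term with total spin S:
2S + 1
```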
Physical sciences
Atomic physics
Physics
1077335
https://en.wikipedia.org/wiki/Polar%20stratospheric%20cloud
Polar stratospheric cloud
Polar stratospheric clouds (PSCs) are clouds in the winter polar stratosphere, at altitudes of roughly 15,000–25,000 m. They are best observed during civil twilight, when the Sun is between 1 and 6 degrees below the horizon, as well as in winter and in more northerly latitudes. One main type of PSC is made up mostly of supercooled droplets of water and nitric acid and is implicated in the formation of ozone holes. The other main type consists only of ice crystals, which are not harmful. This type of PSC is also referred to as nacreous (from nacre, or mother of pearl, due to its iridescence). Formation The stratosphere is very dry; unlike the troposphere, it rarely allows clouds to form. In the extreme cold of the polar winter, however, stratospheric clouds of different types may form, which are classified according to their physical state (super-cooled liquid or ice) and chemical composition. Due to their high altitude and the curvature of the surface of the Earth, these clouds will receive sunlight from below the horizon and reflect it to the ground, shining brightly well before dawn or after dusk. PSCs form at very low temperatures, below about −78 °C. Such temperatures can occur in the lower stratosphere in polar winter. In the Antarctic, temperatures cold enough to produce type II PSCs occur frequently; such low temperatures are rarer in the Arctic. In the Northern Hemisphere, the generation of lee waves by mountains may locally cool the lower stratosphere and lead to the formation of lenticular (lens-shaped) PSCs. Forward scattering of sunlight within the clouds produces a pearly-white appearance. Particles within the optically thin clouds cause colored interference fringes by diffraction. The visibility of the colors may be enhanced with a polarising filter. Types PSCs are classified into two main types each of which consists of several sub-types Type I clouds have a generally stratiform appearance resembling cirrostratus or haze. They are sometimes sub-classified according to their chemical composition which can be measured using LIDAR. The technique also determines the height and ambient temperature of the cloud. They contain water, nitric acid and/or sulfuric acid and are a source of polar ozone depletion. The effects on ozone depletion arise because they support chemical reactions that produce active chlorine which catalyzes ozone destruction, and also because they remove gaseous nitric acid, perturbing nitrogen and chlorine cycles in a way which increases ozone depletion. Type Ia clouds consist of large, aspherical particles, consisting of nitric acid trihydrate (NAT). Type Ib clouds contain small, spherical particles (non-depolarising), of a liquid supercooled ternary solution (STS) of sulfuric acid, nitric acid, and water. Type Ic clouds consist of metastable water-rich nitric acid in a solid phase. Type II clouds, which are very rarely observed in the Arctic, have cirriform and lenticular sub-types and consist of water ice only. Only Type II clouds are necessarily nacreous whereas Type I clouds can be iridescent under certain conditions, just as any other cloud. The World Meteorological Organization no longer uses the alpha-numeric nomenclature seen in this article, and distinguishes only between super-cooled stratiform acid-water PSCs and cirriform-lenticular water ice nacreous PSCs.
Physical sciences
Clouds
Earth science
1077688
https://en.wikipedia.org/wiki/Catechin
Catechin
Catechin is a flavan-3-ol, a type of secondary metabolite providing antioxidant roles in plants. It belongs to the subgroup of polyphenols called flavonoids. The name of the catechin chemical family derives from catechu, which is the tannic juice or boiled extract of Mimosa catechu (Acacia catechu L.f). Chemistry Catechin possesses two benzene rings (called the A and B rings) and a dihydropyran heterocycle (the C ring) with a hydroxyl group on carbon 3. The A ring is similar to a resorcinol moiety while the B ring is similar to a catechol moiety. There are two chiral centers on the molecule on carbons 2 and 3. Therefore, it has four diastereoisomers. Two of the isomers are in trans configuration and are called catechin and the other two are in cis configuration and are called epicatechin. The most common catechin isomer is (+)-catechin. The other stereoisomer is (−)-catechin or ent-catechin. The most common epicatechin isomer is (−)-epicatechin (also known under the names L-epicatechin, epicatechol, (−)-epicatechol, L-acacatechin, L-epicatechol, epicatechin, 2,3-cis-epicatechin or (2R,3R)-(−)-epicatechin). The different epimers can be separated using chiral column chromatography. Making reference to no particular isomer, the molecule can just be called catechin. Mixtures of the different enantiomers can be called (±)-catechin or DL-catechin and (±)-epicatechin or DL-epicatechin. Catechin and epicatechin are the building blocks of the proanthocyanidins, a type of condensed tannin. Moreover, the flexibility of the C-ring allows for two conformation isomers, putting the B-ring either in a pseudoequatorial position (E conformer) or in a pseudoaxial position (A conformer). Studies confirmed that (+)-catechin adopts a mixture of A- and E-conformers in aqueous solution and their conformational equilibrium has been evaluated to be 33:67. As flavonoids, catechins can act as antioxidants when in high concentration in vitro, but compared with other flavonoids, their antioxidant potential is low. The ability to quench singlet oxygen seems to be in relation with the chemical structure of catechin, with the presence of the catechol moiety on ring B and the presence of a hydroxyl group activating the double bond on ring C. Oxidation Electrochemical experiments show that (+)-catechin oxidation mechanism proceeds in sequential steps, related with the catechol and resorcinol groups and the oxidation is pH-dependent. The oxidation of the catechol 3′,4′-dihydroxyl electron-donating groups occurs first, at very low positive potentials, and is a reversible reaction. The hydroxyl groups of the resorcinol moiety oxidised afterwards were shown to undergo an irreversible oxidation reaction. The laccase/ABTS system oxidizes (+)-catechin to oligomeric products of which proanthocyanidin A2 is a dimer. Spectral data Natural occurrences (+)-Catechin and (−)-epicatechin as well as their gallic acid conjugates are ubiquitous constituents of vascular plants, and frequent components of traditional herbal remedies, such as Uncaria rhynchophylla. The two isomers are mostly found as cacao and tea constituents, as well as in Vitis vinifera grapes. In food The main dietary sources of catechins in Europe and the United States are tea and pome fruits. Catechins and epicatechins are found in cocoa, which, according to one database, has the highest content (108 mg/100 g) of catechins among foods analyzed, followed by prune juice (25 mg/100 ml) and broad bean pod (16 mg/100 g). 
Açaí oil, obtained from the fruit of the açaí palm (Euterpe oleracea), contains (+)-catechins (67 mg/kg). Catechins are diverse among foods, from peaches to green tea and vinegar. Catechins are found in barley grain, where they are the main phenolic compound responsible for dough discoloration. The taste associated with monomeric (+)-catechin or (−)-epicatechin is described as slightly astringent, but not bitter. Metabolism Biosynthesis The biosynthesis of catechin begins with ma 4-hydroxycinnamoyl CoA starter unit which undergoes chain extension by the addition of three malonyl-CoAs through a PKSIII pathway. 4-Hydroxycinnamoyl CoA is biosynthesized from L-phenylalanine through the Shikimate pathway. L-Phenylalanine is first deaminated by phenylalanine ammonia lyase (PAL) forming cinnamic acid which is then oxidized to 4-hydroxycinnamic acid by cinnamate 4-hydroxylase. Chalcone synthase then catalyzes the condensation of 4-hydroxycinnamoyl CoA and three molecules of malonyl-CoA to form chalcone. Chalcone is then isomerized to naringenin by chalcone isomerase which is oxidized to eriodictyol by flavonoid 3′-hydroxylase and further oxidized to taxifolin by flavanone 3-hydroxylase. Taxifolin is then reduced by dihydroflavanol 4-reductase and leucoanthocyanidin reductase to yield catechin. The biosynthesis of catechin is shown below Leucocyanidin reductase (LCR) uses 2,3-trans-3,4-cis-leucocyanidin to produce (+)-catechin and is the first enzyme in the proanthocyanidin (PA) specific pathway. Its activity has been measured in leaves, flowers, and seeds of the legumes Medicago sativa, Lotus japonicus, Lotus uliginosus, Hedysarum sulfurescens, and Robinia pseudoacacia. The enzyme is also present in Vitis vinifera (grape). Biodegradation Catechin oxygenase, a key enzyme in the degradation of catechin, is present in fungi and bacteria. Among bacteria, degradation of (+)-catechin can be achieved by Acinetobacter calcoaceticus. Catechin is metabolized to protocatechuic acid (PCA) and phloroglucinol carboxylic acid (PGCA). It is also degraded by Bradyrhizobium japonicum. Phloroglucinol carboxylic acid is further decarboxylated to phloroglucinol, which is dehydroxylated to resorcinol. Resorcinol is hydroxylated to hydroxyquinol. Protocatechuic acid and hydroxyquinol undergo intradiol cleavage through protocatechuate 3,4-dioxygenase and hydroxyquinol 1,2-dioxygenase to form β-carboxy-cis,cis-muconic acid and maleyl acetate. Among fungi, degradation of catechin can be achieved by Chaetomium cupreum. Metabolism in humans Catechins are metabolised upon uptake from the gastrointestinal tract, in particular the jejunum, and in the liver, resulting in so-called structurally related epicatechin metabolites (SREM). The main metabolic pathways for SREMs are glucuronidation, sulfation and methylation of the catechol group by catechol-O-methyl transferase, with only small amounts detected in plasma. The majority of dietary catechins are however metabolised by the colonic microbiome to gamma-valerolactones and hippuric acids which undergo further biotransformation, glucuronidation, sulfation and methylation in the liver. The stereochemical configuration of catechins has a strong impact on their uptake and metabolism as uptake is highest for (−)-epicatechin and lowest for (−)-catechin. Biotransformation Biotransformation of (+)-catechin into taxifolin by a two-step oxidation can be achieved by Burkholderia sp. (+)-Catechin and (−)-epicatechin are transformed by the endophytic filamentous fungus Diaporthe sp. 
into the 3,4-cis-dihydroxyflavan derivatives, (+)-(2R,3S,4S)-3,4,5,7,3′,4′-hexahydroxyflavan (leucocyanidin) and (−)-(2R,3R,4R)-3,4,5,7,3′,4′-hexahydroxyflavan, respectively, whereas (−)-catechin and (+)-epicatechin with a (2S)-phenyl group resisted the biooxidation. Leucoanthocyanidin reductase (LAR) uses (2R,3S)-catechin, NADP+ and H2O to produce 2,3-trans-3,4-cis-leucocyanidin, NADPH, and H+. Its gene expression has been studied in developing grape berries and grapevine leaves. Glycosides (2R,3S)-Catechin-7-O-β-D-glucopyranoside can be isolated from barley (Hordeum vulgare L.) and malt. Epigeoside (catechin-3-O-α-L-rhamnopyranosyl-(1–4)-β-D-glucopyranosyl-(1–6)-β-D-glucopyranoside) can be isolated from the rhizomes of Epigynum auritum. Research Vascular function Only limited evidence from dietary studies indicates that catechins may affect endothelium-dependent vasodilation which could contribute to normal blood flow regulation in humans. Green tea catechins may improve blood pressure, especially when systolic blood pressure is above 130 mmHg. Due to extensive metabolism during digestion, the fate and activity of catechin metabolites responsible for this effect on blood vessels, as well as the actual mode of action, are unknown. Adverse events Catechin and its metabolites can bind tightly to red blood cells and thereby induce the development of autoantibodies, resulting in haemolytic anaemia and renal failure. This resulted in the withdrawal of the catechin-containing drug Catergen, used to treat viral hepatitis, from market in 1985. Catechins from green tea can be hepatotoxic and the European Food Safety Authority has recommended not to exceed 800 mg per day. Other One limited meta-analysis showed that increasing consumption of green tea and its catechins to seven cups per day provided a small reduction in prostate cancer. Nanoparticle methods are under preliminary research as potential delivery systems of catechins. Botanical effects Catechins released into the ground by some plants may hinder the growth of their neighbors, a form of allelopathy. Centaurea maculosa, the spotted knapweed often studied for this behavior, releases catechin isomers into the ground through its roots, potentially having effects as an antibiotic or herbicide. One hypothesis is that it causes a reactive oxygen species wave through the target plant's root to kill root cells by apoptosis. Most plants in the European ecosystem have defenses against catechin, but few plants are protected against it in the North American ecosystem where Centaurea maculosa is an invasive, uncontrolled weed. Catechin acts as an infection-inhibiting factor in strawberry leaves. Epicatechin and catechin may prevent coffee berry disease by inhibiting appressorial melanization of Colletotrichum kahawae.
Physical sciences
Polyphenols
Chemistry
1078092
https://en.wikipedia.org/wiki/Dental%20restoration
Dental restoration
Dental restoration, dental fillings, or simply fillings are treatments used to restore the function, integrity, and morphology of missing tooth structure resulting from caries or external trauma, as well as to replace such structure with restorations supported by dental implants. They are of two broad types, direct and indirect, and are further classified by location and size. Root canal therapy, for example, is a restorative technique used to fill the space where the dental pulp normally resides, and is more involved than a normal filling. History In Italy, evidence dated to the Paleolithic, around 13,000 years ago, points to bitumen used to fill a tooth, and in Neolithic Slovenia, 6,500 years ago, beeswax was used to close a fracture in a tooth. Graeco-Roman literature, such as Pliny the Elder's Naturalis Historia (AD 23–79), contains references to filling materials for hollow teeth. Tooth preparation Restoring a tooth to good form and function requires two steps: preparing the tooth for placement of restorative material or materials, and placement of these materials. The process of preparation usually involves cutting the tooth with a rotary dental handpiece and dental burrs, a dental laser, or through air abrasion (or, in the case of atraumatic restorative treatment, hand instruments), to make space for the planned restorative materials and to remove any dental decay or portions of the tooth that are structurally unsound. If permanent restoration cannot be carried out immediately after tooth preparation, temporary restoration may be performed. The prepared tooth, ready for placement of restorative materials, is generally called a tooth preparation. Materials used may be gold, amalgam, dental composites, glass ionomer cement, or porcelain, among others. Preparations may be intracoronal or extracoronal. Intracoronal preparations are those which serve to hold restorative material within the confines of the structure of the crown of a tooth. Examples include all classes of cavity preparations for composite or amalgam, as well as those for gold and porcelain inlays. Intracoronal preparations are also made as female recipients to receive the male components of removable partial dentures. Extracoronal preparations provide a core or base upon which restorative material will be placed to bring the tooth back into a functional and aesthetic structure. Examples include crowns and onlays, as well as veneers. In preparing a tooth for a restoration, a number of considerations will determine the type and extent of the preparation. The most important factor to consider is decay. For the most part, the extent of the decay will define the extent of the preparation, and in turn, the subsequent method and appropriate materials for restoration. Another consideration is unsupported tooth structure. When preparing the tooth to receive a restoration, unsupported enamel is removed to allow for a more predictable restoration. While enamel is the hardest substance in the human body, it is particularly brittle, and unsupported enamel fractures easily. A systematic review concluded that for decayed baby (primary) teeth, putting an off-the-shelf metal crown over the tooth (Hall technique) or only partially removing decay (also referred to as "selective removal") before placing a filling may be better than the conventional treatment of removing all decay before filling. 
For decayed adult (permanent) teeth, partial removal (also referred to as "selective removal") of decay before filling the tooth, or adding a second stage to this treatment where more decay is removed after several months, may be better than conventional treatment. Direct restorations This technique involves placing a soft or malleable filling into the prepared tooth and building up the tooth. The material is then set hard and the tooth is restored. Where a wall of the tooth is missing and needs to be rebuilt, a matrix should be used before placing the material to recreate the shape of the tooth, so that the restoration is cleansable and the adjacent teeth do not stick together. Sectional matrices are generally preferred to circumferential matrices when placing composite restorations in that they favour the formation of a contact point. This is important to reduce patient complaints of food impaction between the teeth. However, sectional matrices can be more technique-sensitive to use, so care and skill are required to prevent problems occurring in the final restoration. The advantage of direct restorations is that they usually set quickly and can be placed in a single procedure. The dentist has a variety of different filling options to choose from. A decision is usually made based on the location and severity of the associated cavity. Since the material is required to set while in contact with the tooth, limited energy (heat) is passed to the tooth from the setting process. Indirect restorations In this technique the restoration is fabricated outside of the mouth using the dental impressions of the prepared tooth. Common indirect restorations include inlays and onlays, crowns, bridges, and veneers. Usually a dental technician fabricates the indirect restoration from records the dentist has provided. The finished restoration is usually bonded permanently with a dental cement. It is often done in two separate visits to the dentist. Common indirect restorations are done using gold or ceramics. While the indirect restoration is being prepared, a provisional (temporary) restoration is sometimes used to cover the prepared tooth to help maintain the surrounding dental tissues. Removable dental prostheses (mainly dentures) are sometimes considered a form of indirect dental restoration, as they are made to replace missing teeth. There are numerous types of precision attachments (also known as combined restorations) to aid removable prosthetic attachment to teeth, including magnets, clips, hooks, and implants which may themselves be seen as a form of dental restoration. The CEREC method is a chairside CAD/CAM restorative procedure. An optical impression of the prepared tooth is taken using a camera. Next, the specific software takes the digital picture and converts it into a 3D virtual model on the computer screen. A ceramic block that matches the tooth shade is placed in the milling machine. An all-ceramic, tooth-colored restoration is finished and ready to bond in place. Another fabrication method is to import STL and native dental CAD files into CAD/CAM software products that guide the user through the manufacturing process. The software can select the tools, machining sequences and cutting conditions optimized for particular types of materials, such as titanium and zirconium, and for particular prostheses, such as copings and bridges. In some cases, the intricate nature of some implants requires the use of 5-axis machining methods to reach every part of the job. 
Cavity classifications Greene Vardiman Black classification: G.V. Black classified the cavities depending on their site: Class I Caries affecting pits and fissures on the occlusal, buccal, and lingual surfaces of molars and premolars, and the palatal surfaces of maxillary incisors. Class II Caries affecting proximal surfaces of molars and premolars. Class III Caries affecting proximal surfaces of centrals, laterals, and cuspids. Class IV Caries affecting proximal surfaces including the incisal edges of anterior teeth. Class V Caries affecting the gingival third of facial or lingual surfaces of anterior or posterior teeth. Class VI Caries affecting cusp tips of molars, premolars, and cuspids. Graham J. Mount's classification: Mount classified cavities depending on their site and size. The proposed classification was designed to simplify the identification of lesions and to define their complexity as they enlarge. Site: Pit/Fissure: 1 Contact area: 2 Cervical: 3 Size: Minimal: 1 Moderate: 2 Enlarged: 3 Extensive: 4 Materials used Alloys The following casting alloys are mostly used for making crowns, bridges and dentures. Titanium, usually commercially pure but sometimes a 90% alloy, is used as the anchor for dental implants as it is biocompatible and can integrate into bone. Precious metallic alloys gold (high purity: 99.7%) gold alloys (with high gold content) gold-platina alloy silver-palladium alloy Base metallic alloys cobalt-chrome alloy nickel-chrome alloy Amalgam Amalgams are alloys formed by a reaction between two or more metals, one of which is mercury. It is a hard restorative material and is silvery-grey in colour. One of the oldest direct restorative materials still in use, dental amalgam was widely used in the past with a high degree of success, although recently its popularity has declined for a number of reasons, including the development of alternative bonded restorative materials, increase in demand for more aesthetic restorations and public perceptions concerning the potential health risks of the material. The composition of dental amalgam is controlled by the ISO Standard for dental amalgam alloy (ISO 1559). The major components of amalgam are silver, tin and copper. Other metals and small amounts of minor elements such as zinc, mercury, palladium, platinum and indium are also present. Earlier versions of dental amalgam, known as 'conventional' amalgams, consisted of at least 65 wt% silver, 29 wt% tin, and less than 6 wt% copper. Improvements in the understanding of the structure of amalgam post-1986 gave rise to copper-enriched amalgam alloys, which contain between 12 wt% and 30 wt% copper and at least 40 wt% silver. The higher level of copper improved the setting reaction of amalgam, giving greater corrosion resistance and early strength after setting. Possible indications for amalgam are load-bearing restorations in medium to large-sized cavities in posterior teeth, and core build-ups when a definitive restoration will be an indirect cast restoration such as a crown or bridge retainer. Amalgam is contraindicated when aesthetics are paramount to the patient, owing to the colour of the material. Amalgams should be avoided if the patient has a history of sensitivity to mercury or other amalgam components. Besides that, amalgam is avoided if there is extensive loss of tooth substance such that a retentive cavity cannot be produced, or if excessive removal of healthy tooth substance would be required to produce a retentive cavity. 
Advantages of amalgam include durability: if placed under ideal conditions, there is evidence of good long-term clinical performance of the restorations. Placement time of amalgam is shorter compared to that of composites and the restoration can be completed in a single appointment. The material is also more technique-forgiving compared to composite restorations used for that purpose. Dental amalgam is also radiopaque, which is beneficial for differentiating the material from tooth tissues on radiographs when diagnosing secondary caries. The cost of the restoration is typically lower than that of composite restorations. Disadvantages of amalgam include poor aesthetic qualities due to its colour. Amalgam does not bond to tooth tissue easily, hence it relies on mechanical forms of retention. Examples of this are undercuts, slots/grooves or root canal posts. In some cases this may necessitate excessive amounts of healthy tooth structure to be removed. Hence, alternative resin-based or glass-ionomer cement-based materials are used instead for smaller restorations including pit and small fissure caries. There is also a risk of marginal breakdown in the restorations. This could be due to corrosion, which may result in "creep" and "ditching" of the restoration. Creep can be defined as the slow, progressive deformation of amalgam under sustained stress. This effect is reduced by incorporating copper into amalgam alloys. Some patients may experience local sensitivity reactions to amalgam. Although the mercury in cured amalgam is not available as free mercury, concern about its toxicity has existed since the invention of amalgam as a dental material. It is banned or restricted in Norway, Sweden and Finland. See dental amalgam controversy. Direct gold Direct gold fillings were practiced in America during the time of the Civil War. Although rarely used today, due to expense and specialized training requirements, gold foil can be used for direct dental restorations. Composite resin Dental composites, commonly described to patients as "tooth-colored fillings", are a group of restorative materials used in dentistry. They can be used in direct restorations to fill in the cavities created by dental caries and trauma, for minor build-ups to restore tooth wear (non-carious tooth surface loss), and for filling in small gaps between teeth (labial veneer). Dental composites are also used as indirect restorations to make crowns and inlays in the laboratory. These materials are similar to those used in direct fillings and are tooth-colored. Their strength and durability are not as high as those of porcelain or metal restorations, and they are more prone to wear and discolouration. As with other composite materials, a dental composite typically consists of a resin-based matrix, which contains a modified methacrylate or acrylate. Two such commonly used monomers are bisphenol A-glycidyl methacrylate (Bis-GMA) and urethane dimethacrylate (UDMA), together with triethylene glycol dimethacrylate (TEGDMA). TEGDMA is a comonomer used to control viscosity for easier clinical handling, as Bis-GMA is a large molecule with high viscosity. Inorganic fillers such as silica, quartz or various glasses are added to reduce polymerization shrinkage by occupying volume, and to confer radio-opacity on the otherwise translucent resin, which can be helpful in the diagnosis of dental caries around dental restorations. The filler particles give the composites wear resistance as well. 
Compositions vary widely, with proprietary mixes of resins forming the matrix, as well as engineered filler glasses and glass ceramics. A coupling agent such as silane is used to enhance the bond between resin matrix and filler particles. An initiator package begins the polymerization reaction of the resins when external energy (light/heat, etc.) is applied. For example, camphorquinone can be excited by visible blue light with a wavelength of 460–480 nm to yield the free radicals necessary to start the process. After tooth preparation, a thin primer or bonding agent is used. Modern photo-polymerised composites are applied and cured in relatively thin layers as determined by their opacity. After some curing, the final surface will be shaped and polished. Glass ionomer cement A glass ionomer cement (GIC) is a class of materials commonly used in dentistry as direct filling materials and/or for luting indirect restorations. GIC can also be placed as a lining material in some restorations for extra protection. These tooth-coloured materials were introduced in 1972 for use as restorative materials for anterior teeth (particularly for eroded areas). The material consists of two main components: a liquid and a powder. The liquid is the acidic component, consisting of polyacrylic acid and tartaric acid (added to control the setting characteristics). The powder is the basic component, consisting of sodium alumino-silicate glass. The desirable properties of glass ionomer cements make them useful materials in the restoration of carious lesions in low-stress areas such as smooth-surface and small anterior proximal cavities in primary teeth. Advantages of using glass ionomer cement: The addition of tartaric acid to GIC leads to a shortened setting time, hence providing better handling properties. This makes it easier for the operator to use the material in the clinic. GIC does not require a bonding agent; it can bond to enamel and dentine without an intermediate material. Conventional GIC also has a good sealing ability, providing little leakage around restoration margins and reducing the risk of secondary caries. GIC contains and releases fluoride after being placed, which helps to prevent carious lesions in teeth. It has good thermal properties, as its thermal expansion is similar to that of dentine. The material does not contract on setting, meaning it is not subject to shrinkage and microleakage. GIC is also less susceptible to staining and colour change than composite. Disadvantages of using glass ionomer cement: GICs have poor wear resistance; they are usually weak after setting and are not stable in water, although this improves with time as the setting reactions progress. Due to their low strength, GICs are not appropriate for cavities in areas which bear an increased occlusal load or wear. The material is susceptible to moisture when it is first placed. GIC varies in translucency and can therefore have poor aesthetics, which is especially noticeable if placed on anterior teeth. Resin Modified Glass Ionomer Resin-modified glass ionomer was developed to combine the properties of glass ionomer cement with composite technology. It comes in a powder-liquid form. The powder contains fluoro-alumino-silicate glass, barium glass (provides radiopacity), potassium persulphate (a redox catalyst to provide resin cure in the dark) and other components such as pigments. 
The liquid consists of HEMA (a water-miscible resin), polyacrylic acid (with pendant methacrylate groups) and tartaric acid. This can undergo both acid-base and polymerisation reactions. It also has photoinitiators present which enable light curing. The ionomer has a number of uses in dentistry. It can be applied as a fissure sealant, placed in an endodontic access cavity as a temporary filling, or used as a luting agent. It can also be used to restore lesions in both primary and permanent dentition. They are easier to use and are a very popular group of materials. Advantages of using RMGIC: It provides a good bond to enamel and dentine. It has better physical properties than GIC. It has lower solubility in moisture. It also releases fluoride over time. It provides better translucency and aesthetics compared to GIC. It has better handling properties, making it easier to use. Disadvantages of using RMGIC: Polymerisation contraction can cause microleakage around restoration margins. It has an exothermic setting reaction which can potentially damage tooth tissue. The material swells due to uptake of water, as HEMA is extremely hydrophilic. Monomer leaching: HEMA is toxic to the pulp, so it must be polymerised completely. The strength of the material is reduced if it is not light-cured. Both GIC and RMGIC are used in dentistry; one may be better suited than the other depending on the clinical situation, but in most cases ease of use is the deciding factor. Compomer Dental compomers are another type of white filling material, although their use is not as widespread. Compomers were formed by modifying dental composites with poly-acid in an effort to combine the desirable properties of dental composites, namely their good aesthetics, and glass ionomer cements, namely their ability to release fluoride over a long time. Whilst this combination of good aesthetics and fluoride release may seem to give compomers a selective advantage, their poor mechanical properties (detailed below) limit their use. Compomers have a lower wear resistance and a lower compressive, flexural and tensile strength than dental composites, although their wear resistance is greater than resin-modified and conventional glass ionomer cements. Compomers cannot adhere directly to tooth tissue like glass ionomer cements; they require a bonding agent like dental composites. Compomers may be used as a cavity lining material and a restorative material for non-load bearing cavities. In paediatric dentistry, they can also be used as a fissure sealant material. The luting version of compomer may be used to cement cast alloy and ceramic-metal restorations, and to cement orthodontic bands in paediatric patients. However, compomer luting cement should not be used with all-ceramic crowns. Porcelain (ceramics) Full-porcelain dental materials include dental porcelain (porcelain meaning a high-firing-temperature ceramic), other ceramics, sintered-glass materials, and glass-ceramics as indirect fillings and crowns or metal-free "jacket crowns". They are also used as inlays, onlays, and aesthetic veneers. A veneer is a very thin shell of porcelain that can replace or cover part of the enamel of the tooth. Full-porcelain restorations are particularly desirable because their color and translucency mimic natural tooth enamel. Another type is known as porcelain-fused-to-metal, which is used to provide strength to a crown or bridge. 
These restorations are very strong, durable and resistant to wear, because the combination of porcelain and metal creates a stronger restoration than porcelain used alone. One of the advantages of computerized dentistry (CAD/CAM technologies) involves the use of machinable ceramics which are sold in a partially sintered, machinable state that is fired again after machining to form a hard ceramic. Some of the materials used are glass-bonded porcelain (Vitablock), lithium disilicate glass-ceramic (a ceramic crystallizing from a glass by special heat treatment), and phase stabilized zirconia (zirconium dioxide, ZrO2). Previous attempts to utilize high-performance ceramics such as zirconium-oxide were thwarted by the fact that this material could not be processed using the traditional methods used in dentistry. Because of its high strength and comparatively much higher fracture toughness, sintered zirconium oxide can be used in posterior crowns and bridges, implant abutments, and root dowel pins. Lithium disilicate (used in the latest Chairside Economical Restoration of Esthetic Ceramics (CEREC) product) also has the fracture resistance needed for use on molars. Some all-ceramic restorations, such as porcelain-fused-to-alumina, set the standard for high aesthetics in dentistry because they are strong and their color and translucency mimic natural tooth enamel. Cast metals and porcelain-on-metal were the standard materials for crowns and bridges for a long time. All-ceramic restorations are now a major choice of patients and are commonly applied by dentists. Comparison Composites and amalgam are used mainly for direct restoration. Composites can be made in a color matching the tooth, and the surface can be polished after the filling procedure has been completed. Amalgam fillings expand with age, possibly cracking the tooth and requiring repair and filling replacement, but the chance of leakage around the filling is lower. Composite fillings shrink with age and may pull away from the tooth, allowing leakage. If leakage is not noticed early, recurrent decay may occur. A 2003 study showed that fillings have a finite lifespan: an average of 12.8 years for amalgam and 7.8 years for composite resins. Fillings fail because of changes in the filling, the tooth, or the bond between them. Secondary cavity formation can also affect the structural integrity of the original filling. Fillings are recommended for small to medium-sized restorations. Inlays and onlays are a more expensive indirect restorative alternative to direct fillings. They are supposed to be more durable, but long-term studies did not always detect a significantly lower failure rate of ceramic or composite inlays compared to composite direct fillings. Porcelain, cobalt-chrome, and gold are used for indirect restorations like crowns and partial coverage crowns (onlays). Traditional porcelains are brittle and are not always recommended for molar restorations. Some hard porcelains cause excessive wear on opposing teeth. Experimental The US National Institute of Dental Research and international organizations as well as commercial suppliers conduct research on new materials. In 2010, researchers reported that they were able to stimulate mineralization of an enamel-like layer of fluorapatite in vivo. Filling material that is compatible with pulp tissue has been developed; it could be used where previously a root canal or extraction was required, according to 2016 reports. 
Restoration using dental implants Dental implants are anchors placed in bone, usually made from titanium or titanium alloy. They can support dental restorations which replace missing teeth. Some restorative applications include supporting crowns, bridges, or dental prostheses. Complications Irritation of the nerve When a deep cavity has been filled, there is a possibility that the nerve has been irritated. This can result in short-term sensitivity to cold and hot substances, and pain when biting down on the specific tooth. It may settle down on its own. If not, then alternative treatment such as root canal treatment may be considered to resolve the pain while keeping the tooth. Weakening of tooth structure In situations where a relatively large amount of tooth structure has been lost or replaced with a filling material, the overall strength of the tooth may be affected. This significantly increases the risk of the tooth fracturing off in the future when excess force is placed on the tooth, such as from trauma or grinding the teeth at night, leading to cracked tooth syndrome.
Biology and health sciences
Dentistry
null
369232
https://en.wikipedia.org/wiki/Leafcutter%20ant
Leafcutter ant
Leafcutter ants are fungus-growing ants that share the behaviour of cutting leaves which they carry back to their nests to farm fungus. Next to humans, leafcutter ants form some of the largest and most complex animal societies on Earth. Within a few years, the central mound of their underground nests can grow to many metres across, with smaller radiating mounds extending out over a still wider radius, and a mature colony can contain millions of individuals. Leafcutting groups Leafcutter ants are any of at least 55 species of leaf-chewing ants belonging to the three genera Atta, Acromyrmex, and Amoimyrmex, within the tribe Attini. These species of tropical, fungus-growing ants are all endemic to South and Central America, Mexico, and parts of the southern United States. Leafcutter ants can carry twenty times their body weight and cut and process fresh vegetation (leaves, flowers, and grasses) to serve as the nutritional substrate for their fungal cultivars. Acromyrmex and Atta ants have much in common anatomically; however, the two can be identified by their external differences. Atta ants have three pairs of spines and a smooth exoskeleton on the upper surface of the thorax, while Acromyrmex ants have four pairs and a rough exoskeleton. The exoskeleton itself is covered in a thin layer of mineral coating, composed of rhombohedral crystals that are generated by the ants. Amoimyrmex and Acromyrmex differ in that Amoimyrmex lacks tubercles on the first gastral segment, and recent phylogenetic evidence shows that Amoimyrmex diverged before the other two genera of leafcutter ants. Colony lifecycle Reproduction and colony founding Winged females and males leave their respective nests en masse and engage in a nuptial flight known as the revoada (Portuguese) or vuelo nupcial (Spanish). Each female mates with multiple males to collect the 300 million sperm she needs to set up a colony. Once on the ground, the female loses her wings and searches for a suitable underground lair in which to found her colony. The success rate of these young queens is very low, and only 2.5% will go on to establish a long-lived colony. To start her own fungus garden, the queen stores bits of the parental fungus garden mycelium in her infrabuccal pocket, which is located within her oral cavity. Colonies are generally founded by individual queens (haplometrosis). Because colonies with multiple queens over the lifespan of the colony have been reported by a large number of investigators (Weber 1937; Jonkman 1977; Huber 1907; Moser & Lewis 1981; Mariconi & Zamith 1963; Moser 1963; Walter et al. 1938), it is plausible that some colonies have multiple foundresses, termed pleometrosis. Pleometrosis has been confirmed only for Atta texana (Vinson 1985). Colony hierarchy In leafcutter colonies, ants are divided into castes, based mostly on size, that perform different functions. Acromyrmex and Atta exhibit a high degree of polymorphism, four castes being present in established colonies—minims, minors, mediae, and majors. Majors are also known as soldiers or dinergates. Atta ants are more polymorphic than Acromyrmex, meaning comparatively less difference occurs in size from the smallest to largest types of Acromyrmex. Minims are the smallest workers, and tend to the growing brood or care for the fungus gardens. Head width is less than 1 mm. Minors are slightly larger than minim workers, and are present in large numbers in and around foraging columns. 
These ants are the first line of defense and continuously patrol the surrounding terrain and vigorously attack any enemies that threaten the foraging lines. Head width is around 1.8–2.2 mm. Mediae are the generalized foragers, which cut leaves and bring the leaf fragments back to the nest. Majors, the largest worker ants, act as soldiers, defending the nest from intruders, although recent evidence indicates majors participate in other activities, such as clearing the main foraging trails of large debris and carrying bulky items back to the nest. The largest soldiers (Atta laevigata) may have total body lengths up to 16 mm and head widths of 7 mm. Ant–fungus mutualism Their societies are based on an ant–fungus mutualism, and different species of ants use different species of fungus, but all of the fungi the ants use are members of the family Lepiotaceae. The ants actively cultivate their fungus, feeding it with freshly cut plant material and keeping it free from pests and molds. This mutualistic relationship is further augmented by another symbiotic partner, a bacterium that grows on the ants and secretes chemicals; essentially, the ants use portable antimicrobials. Leafcutter ants are sensitive enough to adapt to the fungus's reaction to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is toxic to the fungus, the colony will no longer collect it. The only two other groups of insects to use fungus-based agriculture are ambrosia beetles and termites. The fungus cultivated by the adults is used to feed the ant larvae, and the adult ants feed on leaf sap. The fungus needs the ants to stay alive, and the larvae need the fungus to stay alive, so mutualism is obligatory. The fungi used by the higher attine ants no longer produce spores. These ants fully domesticated their fungal partner 15 million years ago, a process that took 30 million years to complete. Their fungi produce nutritious and swollen hyphal tips (gongylidia) that grow in bundles called staphylae, to specifically feed the ants. Leucoagaricus gongylophorus is the most commonly documented fungus farmed by higher attine ant species. Behaviour Leafcutter ants have very specific roles in taking care of the fungal garden and dumping the refuse. Waste management is a key role for each colony's longevity. The necrotrophic parasitic fungus Escovopsis threatens the ants' food source and thus is a constant danger to the ants. The waste transporters and waste-heap workers are the older, more dispensable leafcutter ants, ensuring the healthier and younger ants can work on the fungal garden. The Atta colombica species, unusually for the Attine tribe, has an external waste heap. Waste transporters take the waste, which consists of used substrate and discarded fungus, to the waste heap. Once dropped off at the refuse dump, the heap workers organise the waste and constantly shuffle it around to aid decomposition. A. colombica workers have been observed placing dead ants around the perimeter of the waste heap. In addition to being fed with foraged food, mainly consisting of leaves, the fungal garden is protected from Escovopsis by the antibiotic secretions of Actinomycetota (genus Pseudonocardia). This mutualistic micro-organism lives in the metapleural glands of the ant. Actinomycetota are responsible for producing the majority of the world's antibiotics today. Leafcutter ants use chemical communication and stridulation (substrate-borne vibrations) to communicate with each other. 
Leafcutter ants prefer disturbed habitats, likely due to higher concentrations of pioneer plant species. These are more attractive food sources because pioneer plants have lower levels of secondary metabolites and higher nutrient concentrations than the shade-tolerant species that will come later. Parasites When the ants are out collecting leaves, they are at risk of attack by some species of phorid flies, parasitoids that lay eggs into the crevices of the worker ants' heads. Often, a minim will sit on a worker ant and ward off any attack. Also, the wrong type of fungus can grow during cultivation. Escovopsis, a highly virulent fungus, has the potential to devastate an ant garden, as it is horizontally transmitted. Escovopsis was cultured, during colony foundation, in 6.6% of colonies. However, in one- to two-year-old colonies, almost 60% had Escovopsis growing in the fungal garden. Nevertheless, leafcutter ants have many adaptive mechanisms to recognize and control infections by Escovopsis and other micro-organisms. The most common known behaviors rely on workers reducing the number of fungal spores by grooming, or removing an infected piece of the fungus garden and throwing it away at the waste dump (described as weeding). Interactions with humans In some parts of their range, leafcutter ants can be a serious agricultural pest, defoliating crops and damaging roads and farmland with their nest-making activities. For example, some Atta species are capable of defoliating an entire citrus tree in less than 24 hours. A promising approach to deterring attacks of the leafcutter ant Acromyrmex lobicornis on crops has been demonstrated. Collecting the refuse from the nest and placing it over seedlings or around crops resulted in a deterrent effect over a period of 30 days.
Biology and health sciences
Hymenoptera
Animals
369236
https://en.wikipedia.org/wiki/Hereford%20cattle
Hereford cattle
The Hereford is a British breed of beef cattle originally from Herefordshire in the West Midlands of England. It was the result of selective breeding from the mid-eighteenth century by a few families in Herefordshire, beginning some decades before the noted work of Robert Bakewell. It has spread to many countries; in 2023 the populations reported by 62 countries totalled over seven million head; the largest populations were reported by Uruguay, Brazil and Chile. The breed reached Ireland in 1775, and a few went to Kentucky in the United States in 1817; the modern American Hereford derives from a herd established in 1840 in Albany, New York. It was present in Australia before 1850, and in Argentina from 1858. In the twenty-first century there are breed societies in those countries and in the Czech Republic, Denmark, Estonia, France, Hungary, the Netherlands, Norway, Portugal, Spain and Sweden in Europe; in Brazil, Chile, Paraguay and Uruguay in South America; in New Zealand; and in South Africa. History Until the 18th century, the cattle of Herefordshire resembled other cattle of southern England, being wholly red with a white switch, similar to the modern North Devon and Sussex breeds. In the 18th and early 19th centuries, other cattle (mainly Shorthorns) were used to create a new type of draught and beef cattle which at first varied in colour, with herds ranging from yellow to grey and light brown, and with varying amounts of white. By the end of the 18th century the white face characteristic of the modern breed was well established, as was the modern colour during the 19th century. The Hereford is still seen in the Herefordshire countryside today and features strongly at agricultural shows. The first imports of Herefords to the United States were made about 1817 by the politician Henry Clay, with larger importation beginning in the 1840s. Polled Hereford The Polled Hereford is an American hornless variant of the Hereford with a polled gene, a natural genetic mutation selected into a separate breed from 1889. Iowa cattle rancher Warren Gammon capitalised on the idea of breeding Polled Herefords and started the registry with 11 naturally polled cattle. The American Polled Hereford Association (APHA) was formed in 1910. The American Polled Hereford and American Hereford breeds have been combined since 1995 under the same American Hereford Association name. In Australia the breed is known as the Poll Hereford. Traditional Hereford Many strains of Hereford have used other cattle breeds to import desired characteristics, which has led to changes in the breed as a whole. However, some strains have been kept separate and retained characteristics of the earlier breed, such as hardiness and thriftiness. The Traditional Hereford is now treated as a minority breed of value for genetic conservation. Health Eye cancer (ocular squamous cell carcinoma) occurs in Herefords, notably in countries with prolonged bright sunlight and in herds bred for low levels of red pigmentation around the eye. Studies of eye cancer in Hereford cattle in the US and Canada showed lid and corneoscleral pigment to be heritable and likely to decrease the risk of cancer. Vaginal prolapse is considered a heritable problem, but may also be influenced by nutrition. Another problem is that exposed skin on the udder may be lightly pigmented and so vulnerable to sunburn. Dwarfism is known to occur in Hereford cattle, caused by an autosomal recessive gene. 
Equal occurrence in heifers and bulls means that dwarfism is not considered a sex-linked characteristic.
Biology and health sciences
Cattle
null
369482
https://en.wikipedia.org/wiki/Addition%20reaction
Addition reaction
In organic chemistry, an addition reaction is an organic reaction in which two or more molecules combine to form a larger molecule called the adduct. An addition reaction is limited to chemical compounds that have multiple bonds. Examples include a molecule with a carbon–carbon double bond (an alkene) or a triple bond (an alkyne). Another example is a compound that has rings (which are also considered points of unsaturation). A molecule that has a carbon–heteroatom double bond, such as a carbonyl group (C=O) or imine group (C=N), can also undergo addition reactions at that double bond. An addition reaction is the reverse of an elimination reaction, in which one molecule divides into two or more molecules. For instance, the hydration of an alkene to an alcohol is reversed by dehydration. There are two main types of polar addition reactions: electrophilic addition and nucleophilic addition. Two non-polar addition reactions exist as well, called free-radical addition and cycloadditions. Addition reactions are also encountered in polymerizations, where they are called addition polymerization. Depending on its structure, the product may promptly react further to eject a leaving group, giving an addition–elimination reaction sequence. Addition reactions are useful in analytical chemistry, as they can reveal the presence and number of double bonds in a molecule. For example, addition of bromine across a double bond consumes the bromine solution, resulting in a visible color change as the orange-brown solution is decolorized: \ce{RR'C=CR''R''' + Br2 ->[\ce{CCl4}] RR'CBr-CBrR''R'''} (the dibromide product is typically colorless). Likewise, hydrogen addition usually proceeds across all the multiple bonds of a molecule, and the amount of hydrogen consumed therefore gives a count of the double and triple bonds through stoichiometry: \ce{(H2C=CH)2 + 2H2 ->[\ce{Pt}/\ce{Pd}] (H3C-CH2)2}
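As a worked illustration of the stoichiometric counting just described (a standard textbook relationship rather than a claim from this article's sources), exhaustive catalytic hydrogenation of a compound containing x carbon–carbon double bonds and y carbon–carbon triple bonds consumes

\[ n_{\mathrm{H_2}} = x + 2y \]

equivalents of hydrogen per mole of compound, since each double bond takes up one molecule of H2 and each triple bond takes up two. For 1,3-butadiene, \ce{(H2C=CH)2}, x = 2 and y = 0, so two equivalents of H2 are consumed, in agreement with the hydrogenation equation above; measuring the hydrogen uptake therefore counts the multiple bonds.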
Physical sciences
Organic reactions
Chemistry
369685
https://en.wikipedia.org/wiki/Walkie-talkie
Walkie-talkie
A walkie-talkie, more formally known as a handheld transceiver, HT, or handheld radio, is a hand-held, portable, two-way radio transceiver. Its development during the Second World War has been variously credited to Donald Hings, radio engineer Alfred J. Gross, Henryk Magnuski and engineering teams at Motorola. First used for infantry, similar designs were created for field artillery and tank units, and after the war, walkie-talkies spread to public safety and eventually commercial and jobsite work. Typical walkie-talkies resemble a telephone handset, with a speaker built into one end and a microphone in the other (in some devices the speaker also is used as the microphone) and an antenna mounted on the top of the unit. They are held up to the face to talk. A walkie-talkie is a half-duplex communication device. Multiple walkie-talkies use a single radio channel, and only one radio on the channel can transmit at a time, although any number can listen. The transceiver is normally in receive mode; when the user wants to talk they must press a "push-to-talk" (PTT) button that turns off the receiver and turns on the transmitter. Some units have additional features such as sending calls, call reception with vibration alarm, keypad locking, and a stopwatch. Smaller walkie-talkies are also very popular among young children. In accordance with ITU Radio Regulations, article 1.73, a walkie-talkie is classified as radio station/land mobile station. History Handheld two-way radios were developed by the military from backpack radios carried by a soldier in an infantry squad to keep the squad in contact with their commanders. The Canadian inventor Donald Hings was the first to create a portable radio signaling system for his employer CM&S in 1937. He called the system a "packset", although it later became known as a "walkie-talkie". In 2001, Hings received the Order of Canada for the device's significance to the war effort. Hings' model C-58 "Handie-Talkie" was in military service by 1942, the result of a secret R&D effort that began in 1940. Alfred J. Gross, a radio engineer and one of the developers of the Joan-Eleanor system, also worked on the early technology behind the walkie-talkie between 1938 and 1941, and is sometimes credited with inventing it. The first device to be widely nicknamed a "walkie-talkie" was developed by the US military during World War II, the backpacked Motorola SCR-300. It was created by an engineering team in 1940 at the Galvin Manufacturing Company (forerunner of Motorola). The team consisted of Marion Bond, Lloyd Morris, Bill Vogel, Dan Noble, who conceived of the design using frequency modulation, and Henryk Magnuski, who was the principal RF engineer. The first handheld walkie-talkie was the AM SCR-536 transceiver from 1941, also made by Motorola, named the Handie-Talkie (HT). The terms are often confused today, but the original walkie-talkie referred to the back mounted model, while the handie-talkie was the device which could be held entirely in the hand. Both devices used vacuum tubes and were powered by high voltage dry cell batteries. Following World War II, Raytheon developed the SCR-536's military replacement, the AN/PRC-6. The AN/PRC-6 circuit used 13 vacuum tubes (receiver and transmitter); a second set of thirteen tubes was supplied with the unit as running spares. The unit was factory set with one crystal which could be changed to a different frequency in the field by replacing the crystal and re-tuning the unit. It used a 24-inch whip antenna. 
There was an optional handset that could be connected to the AN/PRC-6 by a 5-foot cable. An adjustable strap was provided for carrying and support while operating. In the mid-1970s, the United States Marine Corps initiated an effort to develop a squad radio to replace the unsatisfactory helmet-mounted AN/PRR-9 receiver and receiver/transmitter handheld AN/PRT-4 (both developed by the US Army). The AN/PRC-68, first produced in 1976 by Magnavox, was issued to the Marines in the 1980s, and was adopted by the US Army as well. The abbreviation HT, derived from Motorola's "Handie-Talkie" trademark, is commonly used to refer to portable handheld ham radios, with "walkie-talkie" often used as a layman's term or specifically to refer to a toy. Public safety and commercial users generally refer to their handhelds simply as "radios". Surplus Motorola Handie-Talkies found their way into the hands of ham radio operators immediately following World War II. Motorola's public safety radios of the 1950s and 1960s were loaned or donated to ham groups as part of the Civil Defense program. To avoid trademark infringement, other manufacturers use designations such as "Handheld Transceiver" or "Handie Transceiver" for their products. Uses Walkie-talkies are widely used in any setting where portable radio communications are necessary, including business, public safety, military, outdoor recreation, and the like, and devices are available at numerous price points from inexpensive analog units sold as toys up to ruggedized (i.e. waterproof or intrinsically safe) analog and digital units for use on boats or in heavy industry. Most countries allow the sale of walkie-talkies for, at least, business, marine communications, and some limited personal uses such as CB radio, as well as for amateur radio designs. Walkie-talkies for public safety, and commercial and industrial uses may be part of trunked radio systems, which dynamically allocate radio channels for more efficient use of the limited radio spectrum. Such systems always work with a base station that acts as a repeater and controller, although individual handsets and mobiles may have a mode that bypasses the base station. Walkie-talkies, thanks to increasing use of miniaturized electronics, can be made very small, with some personal two-way UHF radio models being smaller than a deck of cards (though VHF and HF units can be substantially larger due to the need for larger antennas and battery packs). In addition, as costs come down, it is possible to add advanced squelch capabilities such as CTCSS (analog squelch) and DCS (digital squelch) (often marketed as "privacy codes") to inexpensive radios, as well as voice scrambling and trunking capabilities. Some units (especially amateur HTs) also include DTMF keypads for remote operation of various devices such as repeaters. Some models include VOX capability for hands-free operation, as well as the ability to attach external microphones and speakers. Consumer and commercial equipment differ in a number of ways; commercial gear is generally ruggedized, with metal cases, and often has only a few specific frequencies programmed into it (often, though not always, with a computer or other outside programming device; older units can simply swap crystals), since a given business or public safety agent must often abide by a specific frequency allocation. 
Consumer gear, on the other hand, is generally made to be small, lightweight, and capable of accessing any channel within the specified band, not just a subset of assigned channels. Military Military organizations use handheld radios for a variety of purposes. Modern units such as the AN/PRC-148 Multiband Inter/Intra Team Radio (MBITR) can communicate on a variety of bands and modulation schemes and include encryption capabilities. Amateur radio Walkie-talkies (also known as HTs or "handheld transceivers") are widely used among amateur radio operators. While converted commercial gear from companies such as Motorola is not uncommon, many companies such as Yaesu, Icom, and Kenwood design models specifically for amateur use. While superficially similar to commercial and personal units (including such things as CTCSS and DCS squelch functions, used primarily to activate amateur radio repeaters), amateur gear usually has a number of features that are not common to other gear, including: Wide-band receivers, often including radio scanner functionality, for listening to non-amateur radio bands. Multiple bands; while some operate only on specific bands such as 2 meters or 70 cm, others support several UHF and VHF amateur allocations available to the user. Since amateur allocations usually are not channelized, the user can dial in any frequency desired in the authorized band (whereas commercial HTs usually only allow the user to tune the radio into a number of already programmed channels). This is known as variable frequency operation ("VFO") mode. Multiple modulation schemes: a few amateur HTs may allow modulation modes other than FM, including AM, SSB, and CW, and digital modes such as radioteletype or PSK31. Some may have TNCs built in to support packet radio data transmission without additional hardware. Digital voice modes are available on some amateur HTs. For example, newer additions to the Amateur Radio service are Next Generation Digital Narrowband (NXDN) and Digital Smart Technology for Amateur Radio or D-STAR. Handheld radios with these technologies have several advanced features, including narrower bandwidth, simultaneous voice and messaging, GPS position reporting, and callsign-routed radio calls over a wide-ranging international network. As mentioned, commercial walkie-talkies can sometimes be reprogrammed to operate on amateur frequencies. Amateur radio operators may do this for cost reasons or because public-safety-grade commercial gear is more solidly constructed and better designed than purpose-built amateur gear, which is built to a price. Personal use The personal walkie-talkie has also become popular because of licence-free services such as the U.S. FRS, Europe's PMR446 and Australia's UHF CB. While FRS walkie-talkies are also sometimes used as toys because mass-production makes them low in cost, they have proper superheterodyne receivers and are a useful communication tool for both business and personal use. The boom in licence-free transceivers has, however, been a source of frustration to users of licensed services, which sometimes suffer interference. For example, FRS and GMRS overlap in the United States, resulting in substantial pirate use of the GMRS frequencies. Use of the GMRS frequencies in the United States requires a license; however, most users either disregard this requirement or are unaware of it. Canada reallocated frequencies for licence-free use due to heavy interference from US GMRS users. 
The European PMR446 channels fall in the middle of a United States UHF amateur allocation, and the US FRS channels interfere with public safety communications in the United Kingdom. Designs for personal walkie-talkies are in any case tightly regulated, generally requiring non-removable antennas (with a few exceptions such as CB radio and the United States MURS allocation) and forbidding modified radios. Most personal walkie-talkies sold are designed to operate in UHF allocations, and are designed to be very compact, with buttons for changing channels and other settings on the face of the radio and a short, fixed antenna. Most such units are made of heavy, often brightly colored plastic, though some more expensive units have ruggedized metal or plastic cases. Commercial-grade radios are often designed to be used on allocations such as GMRS or MURS (the latter of which has had very little readily available purpose-built equipment). In addition, CB walkie-talkies are available, but less popular due to the propagation characteristics of the 27 MHz band and the general bulkiness of the gear involved. Personal walkie-talkies are generally designed to give easy access to all available channels (and, if supplied, squelch codes) within the device's specified allocation. Personal two-way radios are also sometimes combined with other electronic devices; Garmin's Rino series combines a GPS receiver in the same package as an FRS/GMRS walkie-talkie (allowing Rino users to transmit digital location data to each other). Some personal radios also include receivers for AM and FM broadcast radio and, where applicable, NOAA Weather Radio and similar systems broadcasting on the same frequencies. Some designs also allow the sending of text messages and pictures between similarly equipped units. While jobsite and government radios are often rated in power output, consumer radios are frequently and controversially rated in mile or kilometer ratings. Because of the line-of-sight propagation of UHF signals, experienced users consider such ratings to be wildly exaggerated, and some manufacturers have begun printing range ratings on the package based on terrain as opposed to simple power output. While the bulk of personal walkie-talkie traffic is in the 27 MHz band and in the 400–500 MHz area of the UHF spectrum, there are some units that use the "Part 15" 49 MHz band (shared with cordless phones, baby monitors, and similar devices) as well as the "Part 15" 900 MHz band; in the US at least, units in these bands do not require licenses as long as they adhere to FCC Part 15 power output rules. A company called TriSquare was, as of July 2007, marketing a series of walkie-talkies in the United States, based on frequency-hopping spread spectrum technology operating in this frequency range under the name eXRS (eXtreme Radio Service—despite the name, a proprietary design, not an official allocation of the US FCC). The spread-spectrum scheme used in eXRS radios allows up to 10 billion virtual "channels" and ensures private communications between two or more units. Recreation Low-power versions, exempt from licence requirements, are also popular as children's toys, such as the Fisher-Price Walkie-Talkie. 
Prior to the change of CB radio from licensed to "permitted by part" (FCC rules Part 95) status, the typical toy walkie-talkie available in North America was limited to 100 milliwatts of power on transmit, with one or two crystal-controlled channels in the 27 MHz citizens' band, using amplitude modulation (AM) only. Later toy walkie-talkies operated in the 49 MHz band, some with frequency modulation (FM), shared with cordless phones and baby monitors. The lowest cost devices are very simple electronically (single-frequency, crystal-controlled, generally based on a simple discrete transistor circuit where "grown-up" walkie-talkies use chips), may employ superregenerative receivers, and may lack even a volume control, but they may nevertheless be elaborately decorated, often superficially resembling more "grown-up" radios such as FRS or public safety gear. Unlike more costly units, low-cost toy walkie-talkies may not have separate microphones and speakers; the receiver's speaker sometimes doubles as a microphone while in transmit mode. An unusual feature, common on children's walkie-talkies but seldom available otherwise even on amateur models, is a "code key", that is, a button allowing the operator to transmit Morse code or similar tones to another walkie-talkie operating on the same frequency. Generally the operator depresses the PTT button and taps out a message using a Morse Code crib sheet attached as a sticker to the radio. However, as Morse Code has fallen out of wide use outside amateur radio circles, some such units either have a grossly simplified code label or no longer provide a sticker at all. In addition, Family Radio Service UHF radios will sometimes be bought and used as toys, though they are not generally explicitly marketed as such (but see Hasbro's ChatNow line, which transmits both voice and digital data on the FRS band). Some cellular telephone networks offer a push-to-talk handset that allows walkie-talkie-like operation over the cellular network, without dialing a call each time. However, this requires coverage from the cellular provider to be available. Specialized uses In addition to land mobile use, waterproof walkie-talkie designs are also used for marine VHF and aviation communications, especially on smaller boats and ultralight aircraft where mounting a fixed radio might be impractical or expensive. Often such units will have switches to provide quick access to emergency and information channels. They are also used in recreational UTVs to coordinate logistics and keep riders out of the dust, and are usually connected to an intercom and headsets. Intrinsically safe walkie-talkies are often required in heavy industrial settings where the radio may be used around flammable vapors. This designation means that the knobs and switches in the radio are engineered to avoid producing sparks as they are operated. Software emulation A variety of mobile apps exist that mimic a walkie-talkie/push-to-talk style interaction. They are marketed as low-latency, asynchronous communication tools. The advantages touted over two-way voice calls include the asynchronous nature, which does not require full user interaction (like SMS), and the use of voice over IP (VoIP), which does not consume minutes on a cellular plan. Applications on the market that offer this walkie-talkie style interaction for audio include Hytera, Voxer, Zello, Orion Labs, Motorola Wave, and HeyTell, among others. Other smartphone-based walkie-talkie products are made by companies like goTenna, Fantom Dynamics and BearTooth, and offer a radio interface. 
Unlike mobile-data-dependent applications, these products work by pairing with an app on the user's smartphone and communicating over a radio interface. Accessories There are various types of accessories available for walkie-talkies, such as rechargeable batteries, drop-in rechargers, multi-unit rechargers for charging as many as six units at a time, and an audio accessory jack that can be used for headsets or speaker microphones. Newer models allow connection to wireless headsets via Bluetooth. Some models also offer Wi-Fi integration, such as the Motorola XIRP 8600i series.
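To make the half-duplex, push-to-talk behaviour described in the article's introduction concrete, the following is a purely illustrative Python sketch (not any real radio's firmware or API; all class and method names are invented for the example): every radio on a shared channel hears whatever is transmitted, at most one radio may transmit at a time, and a radio's own receiver is muted while its PTT is pressed.

class Channel:
    """A shared simplex radio channel: any number may listen, one may transmit."""
    def __init__(self):
        self.radios = []
        self.active_transmitter = None  # at most one keyed-up radio at a time

    def transmit(self, sender, message):
        if self.active_transmitter not in (None, sender):
            return False  # channel busy: another radio is already transmitting
        self.active_transmitter = sender
        for radio in self.radios:
            if radio is not sender:      # the sender's receiver is off while keyed up
                radio.receive(message)
        self.active_transmitter = None   # PTT released; back to receive mode
        return True

class Radio:
    def __init__(self, name, channel):
        self.name = name
        self.channel = channel
        channel.radios.append(self)

    def push_to_talk(self, message):
        # Pressing PTT switches the transceiver from receive to transmit.
        sent = self.channel.transmit(self, message)
        print(f"{self.name}: {'transmitted' if sent else 'blocked, channel busy'}")

    def receive(self, message):
        print(f"{self.name} hears: {message}")

if __name__ == "__main__":
    ch = Channel()
    alpha, bravo, charlie = Radio("Alpha", ch), Radio("Bravo", ch), Radio("Charlie", ch)
    alpha.push_to_talk("Radio check")  # Bravo and Charlie hear it; Alpha does not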
Technology
Broadcasting
null
369730
https://en.wikipedia.org/wiki/Body%20dysmorphic%20disorder
Body dysmorphic disorder
Body dysmorphic disorder (BDD), also known in some contexts as dysmorphophobia, is a mental disorder defined by an overwhelming preoccupation with a perceived flaw in one's physical appearance. In BDD's delusional variant, the flaw is imagined. When an actual visible difference exists, its importance is disproportionately magnified in the mind of the individual. Whether the physical issue is real or imagined, ruminations concerning this perceived defect become pervasive and intrusive, consuming substantial mental bandwidth for extended periods each day. This excessive preoccupation not only induces severe emotional distress but also disrupts daily functioning and activities. The DSM-5 places BDD within the obsessive–compulsive spectrum, distinguishing it from disorders such as anorexia nervosa. BDD is estimated to affect from 0.7% to 2.4% of the population. It usually starts during adolescence and affects both men and women. The BDD subtype muscle dysmorphia, perceiving the body as too small, affects mostly males. In addition to thinking about it, the sufferer typically checks and compares the perceived flaw repetitively and can adopt unusual routines to avoid social contact that exposes it. Fearing the stigma of vanity, they usually hide this preoccupation. Commonly overlooked even by psychiatrists, BDD has been underdiagnosed. As the disorder severely impairs quality of life due to educational and occupational dysfunction and social isolation, those experiencing BDD tend to have high rates of suicidal thoughts and may attempt suicide. Signs and symptoms Dislike of one's appearance is common, but individuals with BDD have extreme misperceptions about their physical appearance. Whereas vanity involves a quest to aggrandize the appearance, BDD is experienced as a quest to merely normalize the appearance. Although delusional in about one of three cases, the appearance concern is usually non-delusional, an overvalued idea. The bodily area of focus is commonly the face, skin, stomach, arms, and legs, but can be nearly any part of the body. In addition, multiple areas can be focused on simultaneously. A subtype of body dysmorphic disorder is bigorexia (reverse anorexia or muscle dysmorphia). In muscle dysmorphia, patients perceive their body as excessively thin despite being muscular and trained. Many seek dermatological treatment or cosmetic surgery, which typically does not resolve the distress. On the other hand, attempts at self-treatment, as by skin picking, can create lesions where none previously existed. BDD is a disorder in the obsessive–compulsive spectrum; despite a degree of overlap with obsessive–compulsive disorder (OCD), it involves more depression and social avoidance. BDD is often associated with social anxiety disorder (SAD). Some experience delusions that others are covertly pointing out their flaws. Cognitive testing and neuroimaging suggest both a bias toward detailed visual analysis and a tendency toward emotional hyper-arousal. Most generally, one experiencing BDD ruminates over the perceived bodily defect several hours daily or longer, uses either social avoidance or camouflaging with cosmetics or apparel, repetitively checks the appearance, compares it to that of other people, and might often seek verbal reassurances. One might sometimes avoid mirrors, repetitively change outfits, groom excessively, or restrict eating. 
BDD's severity can wax and wane, and flare-ups tend to cause absences from school, work, or socializing, sometimes leading to protracted social isolation, with some becoming housebound for extended periods. Social impairment is usually greatest, sometimes approaching avoidance of all social activities. Poor concentration and motivation impair academic and occupational performance. The distress of BDD tends to exceed that of major depressive disorder, and rates of suicidal ideation and attempts are especially high. Cause As with most mental disorders, BDD's cause is likely intricate and biopsychosocial, arising from an interaction of multiple factors, including genetic, developmental, psychological, social, and cultural ones. BDD usually develops during early adolescence, although many patients note earlier trauma, abuse, neglect, teasing, or bullying. In many cases, social anxiety earlier in life precedes BDD. Though twin studies on BDD are few, one estimated its heritability at 43%. Other contributing factors may include introversion, negative body image, perfectionism, heightened aesthetic sensitivity, and childhood abuse and neglect. Childhood trauma The development of body dysmorphia can stem from trauma caused by parents or guardians, family, or close friends. In a study published in 2021 on the prevalence of childhood maltreatment among adults with body dysmorphia, researchers found that more than 75% of respondents had experienced some form of abuse as children. The researchers found that adults with a history of emotional neglect as children were especially vulnerable to BDD, though other forms of abuse, including physical and sexual abuse, were also identified as significant risk factors. As these children progress into adulthood, they may come to associate their bodies with the abuse they experienced and look for ways to hide, cover, or change their appearance so they are not reminded of the trauma they endured. Social media Constant use of social media and "selfie taking" may translate into low self-esteem and body dysmorphic tendencies. The sociocultural theory of self-esteem states that the messages given by media and peers about the importance of appearance are internalized by individuals who adopt others' standards of beauty as their own. Due to excessive social media use and selfie taking, individuals may become preoccupied with presenting an ideal photograph for the public. Females' mental health, in particular, has been the most affected by persistent exposure to social media. Girls with BDD present symptoms of low self-esteem and negative self-evaluation. One proposed factor in the development of body dysmorphia is that when women compare themselves with media images of ideal female attractiveness, a perceived discrepancy between their actual attractiveness and the media's standard is likely to result. Researchers at Istanbul Bilgi University and Bogazici University in Turkey found that individuals with low self-esteem participate more often in trends of taking selfies, and use social media to mediate their interpersonal interaction in order to fulfill their self-esteem needs. Self-verification theory explains how individuals use selfies to gain verification from others through likes and comments. Social media may therefore reinforce misconceptions about one's physical appearance. 
As in those with body dysmorphic tendencies, such behavior may lead to constant approval-seeking, self-evaluation, and even depression. A 2019 systematic review using the Web of Science, PsycINFO, and PubMed databases was conducted to identify patterns of social networking site use. In particular, appearance-focused social media use was found to be significantly associated with greater body image dissatisfaction. The review highlighted the parallels between body image dissatisfaction and BDD symptomatology, and its authors concluded that heavy social media use may mediate the onset of sub-threshold BDD. Individuals with BDD also tend to make heavy use of plastic surgery. In 2018, the plastic surgeon Dr. Tijon Esho coined the term "Snapchat Dysmorphia" to describe a trend of patients seeking plastic surgeries to mimic "filtered" pictures. Filtered photos, such as those on Instagram and Snapchat, often present unrealistic and unattainable looks that may be a causal factor in triggering BDD. Sociocultural perspective Body dysmorphic disorder was originally named "dysmorphophobia", a term which was widely applied in the research literature of Japan, Russia, and Europe. In American literature, however, BDD was still largely overlooked in the 1980s. When the APA introduced it in the DSM-III, the diagnostic criteria were not properly defined, as the non-delusional and delusional variants were not separated. This was later resolved in the revision of the DSM-III, which aided many patients by enabling appropriate treatment. BDD was initially considered non-delusional in European research, and was grouped with "monosymptomatic hypochondriacal psychoses" – delusional paranoia disorders – before being introduced in the DSM-III. In 1991, those identified with BDD were primarily single women aged 19 or older. This picture has not changed over the decades; women are still considered the predominant gender to experience BDD. With the rise of social media platforms, individuals are easily able to seek validation and openly compare their physical appearance to online influences, finding more flaws and defects in their own appearance. This leads to attempts to conceal the perceived defect, such as seeking out surgeons to resolve the issue of perceived ugliness. Different cultures place much emphasis on correcting the human body aesthetic, and this preoccupation with body image is not exclusive to any one society; one example is the binding of women's feet in Chinese culture. Whilst physically editing the body is not unique to any one culture, research suggests that it is more common throughout Western society and is on the rise. On close observation of contemporary Western societies, there has been an increase in disorders such as body dysmorphic disorder, arising from ideals around the aesthetic of the human body. Scholars such as Nancy Scheper-Hughes have suggested that such demands placed upon Western bodies have been present since the beginning of the 19th century, driven by sexuality. Research also shows that BDD is linked to high comorbidity and suicidality rates. Furthermore, it appears that Caucasian women show higher rates of body dissatisfaction than women of other ethnic backgrounds and societies. 
Sociocultural models depict and emphasise the way thinness is valued and beauty is obsessed over in Western culture, where advertising, marketing, and social media play a large role in manicuring the "perfect" body shape, size, and look. The billions of dollars spent to sell products are a causal factor in image-conscious societies. Advertising also promotes a specific ideal body image and creates social capital around how individuals can acquire this ideal. However, personal attitudes towards the body do vary cross-culturally. Some of this variability can be accounted for by factors such as food insecurity, poverty, climate, and fertility management. Cultural groups that experience food insecurity generally prefer larger-bodied women; however, many societies that have abundant access to food also value moderate to larger bodies. This is evident in a comparative study of body image, body perception, body satisfaction, body-related self-esteem, and overall self-esteem of German, Guatemalan Q’eqchi’ and Colombian women. Unlike the German and Colombian women, the Q’eqchi’ women in this study live in the jungles of Guatemala and remain relatively removed from modern technology and secure food resources. The study found that the Q’eqchi’ women did not have notably higher body satisfaction when compared to the German or Colombian women. The Q’eqchi’ women did, however, show the greatest distortion in their own body perception, estimating their physique to be slimmer than it actually was. It is thought this could be due to a lack of access to body monitoring tools such as mirrors, scales, technology, and clothing choices, but in this instance, body distortion does not seem to influence body satisfaction. This has also been shown in groups of lower-income African American women, where the acceptance of larger bodies is not necessarily equivalent to positive body image. Similar studies have noted a high prevalence of BDD in East Asian societies, where facial dissatisfaction is especially common, indicating that this is not just a Western phenomenon. Diagnosis Estimates of prevalence and gender distribution have varied widely owing to discrepancies in diagnosis and reporting. In American psychiatry, BDD gained diagnostic criteria in the DSM-IV, having been historically unrecognized and only making its first appearance in the DSM in 1987, but clinicians' knowledge of it, especially among general practitioners, remains limited. Meanwhile, shame about having the bodily concern, and fear of the stigma of vanity, make many hide even having the concern. Because of shared symptoms, BDD is commonly misdiagnosed as social anxiety disorder, obsessive–compulsive disorder, major depressive disorder, or social phobia. Social anxiety disorder and BDD are highly comorbid (within those with BDD, 12–68.8% also have SAD; within those with SAD, 4.8–12% also have BDD), developing similarly in patients; BDD is even classified as a subset of SAD by some researchers. Correct diagnosis can depend on specialized questioning and correlation with emotional distress or social dysfunction. Estimates place the Body Dysmorphic Disorder Questionnaire's sensitivity at 100% (0% false negatives) and specificity at 92.5% (7.5% false positives). BDD is also comorbid with eating disorders, with up to 12% comorbidity in one study. Both eating disorders and body dysmorphic disorder are concerned with physical appearance, but eating disorders tend to focus more on weight rather than on one's general appearance. 
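The sensitivity and specificity figures quoted above follow from the standard definitions used for any screening questionnaire: sensitivity is the proportion of true cases the instrument flags, and specificity is the proportion of non-cases it correctly clears. The minimal sketch below shows only that arithmetic; the counts are hypothetical and chosen purely to reproduce the quoted percentages, not taken from any BDD study.

# Illustrative only: hypothetical screening counts, not data from any BDD study.
# sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of people with the disorder that the questionnaire flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of people without the disorder that the questionnaire clears."""
    return tn / (tn + fp)

# Hypothetical sample: 40 people with BDD, 200 without.
tp, fn = 40, 0     # every true case flagged      -> 0% false negatives
tn, fp = 185, 15   # 15 of 200 non-cases flagged  -> 7.5% false positives

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 100.0%
print(f"specificity = {specificity(tn, fp):.1%}")  # 92.5%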
BDD is classified as an obsessive–compulsive disorder in DSM-5. It is important to treat people with BDD as soon as possible, both because the person may have already been suffering for an extended period of time and because BDD has a high suicide rate, 2–12 times higher than the national average. Treatment Medication and psychotherapy Anti-depressant medication, such as selective serotonin reuptake inhibitors (SSRIs), and cognitive-behavioral therapy (CBT) are considered effective. SSRIs can help relieve obsessive–compulsive and delusional traits, while cognitive-behavioral therapy can help patients recognize faulty thought patterns. Dr. Sabine Wilhelm and her colleagues created and tested a treatment manual specializing in BDD symptoms that resulted in improved symptoms with no asymptomatic decline. Core treatment elements include Psychoeducation and Case Formulation, Cognitive Restructuring, Exposure and Ritual Prevention, and Mindfulness/Perceptual Retraining. Before treatment, it can help to provide psychoeducation, as with self-help books and support websites. Self-improvement For many people with BDD, cosmetic surgery does not work to alleviate the symptoms of BDD, as their opinion of their appearance is not grounded in reality. It is recommended that cosmetic surgeons and psychiatrists work together to screen surgery patients to see whether they have BDD, as the results of the surgery could be harmful for them. History In 1886, Enrico Morselli reported a disorder that he termed dysmorphophobia, describing it as a feeling of being ugly even though there does not appear to be anything wrong with the person's appearance. In 1980, the American Psychiatric Association recognized the disorder, while categorizing it as an atypical somatoform disorder, in the third edition of its Diagnostic and Statistical Manual of Mental Disorders (DSM). Classifying it as a distinct somatoform disorder, the DSM-III's 1987 revision switched the term to body dysmorphic disorder. Published in 1994, DSM-IV defines BDD as a preoccupation with an imagined or trivial defect in appearance, a preoccupation causing social or occupational dysfunction, and not better explained as another disorder, such as anorexia nervosa. Published in 2013, the DSM-5 shifts BDD to a new category (obsessive–compulsive spectrum), adds operational criteria (such as repetitive behaviors or intrusive thoughts), and notes the subtype muscle dysmorphia (preoccupation that one's body is too small or insufficiently muscular or lean). The term "dysmorphic" is derived from the Greek 'dusmorphíā' – the prefix 'dys-' meaning abnormal or apart, and 'morphḗ' meaning shape. Morselli described people who felt a subjective feeling of ugliness as people who were tormented by a physical deficit. Sigmund Freud (1856–1939) once called one of his patients, a Russian aristocrat named Sergei Pankejeff, "Wolf Man", as he was experiencing classical symptoms of BDD.
Biology and health sciences
Mental disorders
Health
369836
https://en.wikipedia.org/wiki/Holstein%20Friesian
Holstein Friesian
The Holstein Friesian is an international breed or group of breeds of dairy cattle. It originated in Frisia, stretching from the Dutch province of North Holland to the German state of Schleswig-Holstein. It is the dominant breed in industrial dairy farming worldwide, and is found in more than 160 countries. It is known by many names, among them Holstein, Friesian and Black and White. With the growth of the New World, a demand for milk developed in North America and South America, and dairy breeders in those regions at first imported their livestock from the Netherlands. However, after about 8,800 Friesians (black pied German cows) had been imported, Europe stopped exporting dairy animals due to disease problems. Today, the breed is used for milk in the north of Europe, and for meat in the south of Europe. After 1945, European cattle breeding and dairy products became increasingly confined to certain regions due to the development of national infrastructure. This change led to the need to designate some animals for dairy production and others for beef production; previously, milk and beef had been produced from dual-purpose animals. Today, more than 80% of dairy production takes place north of the line between Bordeaux and Venice, and more than 60% of the cattle in Europe are found there as well. Today's European breeds, national derivatives of the Dutch Friesian, have become very different animals from those developed by breeders in the United States, who use Holsteins only for dairy production. As a result, breeders have imported specialized dairy Holsteins from the United States to cross-breed them with European black-and-whites. Today, the term "Holstein" is used to describe North or South American stock and the use of that stock in Europe, particularly in Northern Europe. "Friesian" is used to describe animals of traditional European ancestry that are bred for both dairy and beef use. Crosses between the two are described as "Holstein-Friesian". Breed characteristics The Holstein is arguably the most well-known and easily recognized breed of cattle on Earth, and their appearance is an iconic component of many depictions of pastoral life in art and media. Holsteins have very distinctive markings, usually black and white or red and white in colour, typically exhibiting piebald patterns. On rare occasions, some have both black and red colouring with white. The red factor causes this unique colouring. 'Blue' is also a known colour. This colour is produced by white hairs mixed with the black hairs giving the cow a bluish tint. This colouring is also known as 'blue roan' in some farm circles. No two Holsteins will look exactly identical due to the inherent randomness of the manifestation of the piebald gene. Holsteins are famed for their high dairy production, averaging of milk per year. Of this milk, 858 pounds (3.7%) are butterfat and 719 pounds (3.1%) are protein. A healthy calf weighs or more at birth. A mature Holstein cow typically weighs , and stands tall at the shoulder. Holstein should be bred by 11 to 14 months of age, when they weigh or 55% of adult weight. Generally, breeders plan for Holstein heifers to calve for the first time between 21 and 24 months of age and 80% of adult body weight. The gestation period is about nine and a half months. History Near 100 BC, a displaced group of people from Hesse migrated with their cattle to the shores of the North Sea near the Frisii tribe, occupying the island of Batavia, between the Rhine, Maas, and Waal. 
Historical records suggest these cattle were black, and the Friesian cattle at this time were "pure white and light coloured". Crossbreeding may have led to the foundation of the present Holstein-Friesian breed, as the cattle of these two tribes from then on are described identically in historical records. The portion of the country bordering on the North Sea, called Frisia, was situated within the provinces of North Holland, Friesland, and Groningen, and in Germany reached to the River Ems. The people were known for their care and breeding of cattle. The Frisii, preferring pastoral pursuits to warfare, paid a tax of ox hides and ox horns to the Roman government, whereas the Batavii furnished soldiers and officers to the Roman army; these fought successfully in the various Roman wars. The Frisii bred the same strain of cattle unadulterated for 2,000 years, except through accidental circumstances. In 1282 AD, floods produced the Zuiderzee, a newly formed body of water that had the effect of separating the cattle breeders of the modern-day Frisians into two groups. The western group occupied West Friesland, now part of North Holland; the eastern occupied the present provinces of Friesland and Groningen, also in the Netherlands. The rich polder land in the Netherlands is unsurpassed for the production of grass, cattle, and dairy products. Between the 13th and 16th centuries, the production of butter and cheese was enormous. Historic records describe heavy beef cattle, weighing from 2,600 to 3,000 pounds each. The breeders had the goal of producing as much milk and beef as possible from the same animal. Selection, breeding, and feeding were carried out with huge success. Inbreeding was not tolerated, and distinct families never arose, although differences in soil in different localities produced different sizes and variations. A Corporate Watch report on Dystopian Farming cited a 2004 study in the Journal of Dairy Science which identified that between 96 and 98% of UK Holsteins were inbred to some degree, compared with around 50% in 1990. More generally, the rate of inbreeding in the UK has risen significantly since 1990. United Kingdom Up to the 18th century, the British Isles imported Dutch cattle, using them as the basis of several breeds in England and Scotland. The eminent David Low recorded that "the Dutch breed was especially established in the district of Holderness, on the north side of the Humber; northward through the plains of Yorkshire. The finest dairy cattle in England...", and that the cattle of Holderness in 1840 still retained distinct traces of their Dutch origin. Further north in the Tees area, farmers imported continental cattle from the Netherlands and German territories on the Elbe. Low wrote, "Of the precise extent of these early importations we are imperfectly informed, but that they exercised a great influence on the native stock appears from this circumstance, that the breed formed by the mixture became familiarly known as the Dutch or Holstein breed". Holstein-Friesians were found throughout the rich lowlands of the Netherlands, the northwestern provinces of Germany, Belgium, and northern France. The breed did not become established in Great Britain at the time, nor was it used in the islands of Jersey or Guernsey, which bred their own special cattle named after the islands; their laws prohibited using imports from the continent for breeding purposes. 
After World War II, breeders on the islands needed to restore their breeds, which had been severely reduced during the war, and imported almost 200 animals. Canadian breeders sent a gift of three yearling bulls to help establish the breed. The pure Holstein Breed Society was started in 1946 in Great Britain, following the British Friesian Cattle Society. The breed developed slowly up to the 1970s, after which there was an explosion in its popularity, and additional animals were imported. More recently, the two societies merged in 1999 to establish Holstein UK. Numbers Records on 1 April 2005 from Nomenclature for Units of Territorial Statistics level 1 show Holstein influence appearing in 61% of all 3.47 million dairy cattle in the UK: Holstein-Friesian (Friesian with more than 12.5% and less than 87.5% of Holstein blood): 1,765,000 (51%) Friesian (more than 87.5% Friesian blood): 1,079,000 (31%) Holstein (more than 87.5% of Holstein blood): 254,000 (7%) Holstein-Friesian cross (any of the above crossed with other breeds): 101,000 (3%) Other dairy breeds: 278,000 (7%) The above statistics are for all dairy animals possessing passports at the time of the survey, i.e. including young stock. DEFRA lists just over 2 million adult dairy cattle in the UK. Definition Holstein in this instance, and indeed in all modern discussion, refers to animals traced from North American bloodlines, while Friesian refers to indigenous European black and white cattle. Criteria for inclusion in the Supplementary Register (i.e. not purebred) of the Holstein UK herd book are: Class A is for a typical representative of the Holstein or Friesian breed as to type, size, and constitution, with no obvious signs of crossbreeding, or an animal proved from its breeding records to contain between 50% and 74.9% Holstein genes or Friesian genes. If the breeding records show that one parent is of a breed other than Holstein-Friesian, Holstein, or Friesian, then such parent must be a purebred animal fully registered in a herd book of a dairy breed society recognized by the Society. Class B is for a calf by a bull registered or dual registered in the Herd Book or in the Supplementary Register, out of a foundation cow or heifer registered in Class A or B of the Supplementary Register, and containing between 75% and 87.4% Holstein genes or Friesian genes. For inclusion in the Pure (Holstein or Friesian) herd book, a heifer or bull calf from a cow or heifer in Class B of the Supplementary Register, by a bull registered or dual registered in the Herd Book or the Supplementary Register, and containing 87.5% or more Holstein genes or Friesian genes will be eligible to have its entry registered in the Herd Book. Production The breed currently averages 7,655 litres/year over 3.2 lactations, with pedigree animals averaging 8,125 litres/year over an average of 3.43 lactations. Multiplying yield by the number of lactations, lifetime production therefore stands at around 26,000 litres. United States History Black and white cattle from Europe were introduced into the US from 1621 to 1664. Many Dutch farmers settled in the eastern part of New Netherland (modern-day New York and Connecticut), along the Hudson and Mohawk River valleys. They probably brought cattle with them from their native land and crossed them with cattle purchased in the colony. For many years afterwards, the cattle here were called Dutch cattle and were renowned for their milking qualities. The first recorded imports were more than 100 years later, consisting of six cows and two bulls. 
These were sent in 1795 by the Holland Land Company, which then owned large tracts in New York, to their agent, Mr. John Lincklaen of Cazenovia. A settler described them thus, "the cows were of the size of oxen, their colors clear black and white in large patches; very handsome". In 1810, a bull and two cows were imported by the Hon. William Jarvis for his farm at Wethersfield, Vermont. About the year 1825, another importation was made by Herman Le Roy, a part of which was sent into the Genesee River valley. The rest were kept near New York City. Still later, an importation was made into Delaware. No records were kept of the descendants of these cattle. Their blood was mingled and lost in that of the native cattle. The first permanent introduction of this breed was due to the perseverance of Hon. Winthrop W. Chenery, of Belmont, Massachusetts. The animals of his first two importations, and their offspring, were destroyed by the government in Massachusetts because of a contagious disease. He made a third importation in 1861. This was followed in 1867 by an importation for the Hon. Gerrit S. Miller, of Peterboro, New York, made by his brother, Dudley Miller, who had been attending the noted agricultural school at Eldena (Königlich Preußische Staats- und landwirthschaftliche Akademie zu Greifswald und Eldena; the latter today a locality of the former), Prussia, where this breed was highly regarded. These two importations, by Hon. William A. Russell, of Lawrence, Mass., and three animals from East Friesland, imported by Gen. William S. Tilton of the National Military Asylum, Togus, Maine, formed the nucleus of the Holstein Herd Book. The Trina Holstein breed was established by the Merrill farming family in Maine in the early 20th century, begun by "Trina Redstone Marvel" (or "Old Trina") and continued at Wilsondale Farm in Gray, Maine. Trina has traced back sixteen generations to one of the first cows imported into the United States. There are thirty generations of Trina Holstein offspring today. After about 8,800 Holsteins had been imported, a cattle disease broke out in Europe and importation ceased. In the late 19th century, there was enough interest among Friesian breeders to form associations to record pedigrees and maintain herd books. These associations merged in 1885, to found the Holstein-Friesian Association of America. In 1994, the name was changed to Holstein Association USA. Production The 2008 average actual production for all USA Holstein herds that were enrolled in production-testing programs and eligible for genetic evaluations was of milk, of butterfat, and of protein per year. Total lifetime productivity can be inferred from the average lifetime of US cows. This has been decreasing regularly in recent years and now stands at around 2.75 lactations, which when multiplied by average lactation yield above gives around of milk. The current national Holstein milk production leader is Bur-Wall Buckeye Gigi EX-94 3E, which produced of milk in 365 days, completing her record in 2016. The considerable advantage, compared to the UK, for example, can be explained by several factors: Use of milk production hormone, recombinant bST: A study in February 1999 determined the "response to bST over a 305-day lactation equaled 894 kg of milk, 27 kg of fat, and 31 kg of protein". Monsanto Company estimates a figure of about 1.5 million of 9 million dairy cows are being treated with rBST, or about 17% of cows nationally. 
Greater use of three-times-per-day milking: In a study performed in Florida between 1984 and 1992 using 4293 Holstein lactation records from eight herds, 48% of cows were milked three times a day. The practice was responsible for an extra 17.3% milk, 12.3% fat, and 8.8% protein. Three-times-a-day milking has become a common practice in recent years. Twice-a-day milking is the most common milking schedule of dairy cattle. In Europe, Australia, and New Zealand, milking at 10- to 14-hour intervals is common. Higher cow potential (100% Holstein herds): European Friesian types traditionally had lower production performances than their North American Holstein counterparts. Despite Holstein influence over the last 50 years, a large genetic trace of these cattle is still present. Greater use of total mixed ration (TMR) feeding systems: TMR systems continue to expand in use on dairy farms. A 1993 Hoard's Dairyman survey reported 29.2% of surveyed US dairy farms had adopted this system of feeding dairy cows. A 1991 Illinois dairy survey found 26% of Illinois dairy farmers used TMR rations with 300 kg more milk per cow compared to other feeding systems. The American type of operation (North and South America) is characterised by large, loose-housing operations, TMR feeding, and relatively many employees. However, dairy farms in the northeast US and parts of Canada differ from the typical American operation. There, many smaller family farms with either loose-housing or stanchion barns are found. These operations are quite similar to the European type, which is characterised by relatively small operations where each cow is fed and treated individually. Genetics The golden age of Friesian breeding occurred during the last 50 years, greatly helped lately by embryo transfer techniques, which permitted a huge multiplication of bulls entering progeny testing of elite, bull-mother cows. Friesian bull, Osborndale Ivanhoe, b. 1952, brought stature, angularity, good udder conformation, and feet and leg conformation, but his daughters lacked strength and depth. His descendants included: Round Oak Rag Apple Elevation, b. 1965, often abbreviated RORA Elevation, was another top-notch bull. He sired over 70,000 Holstein cattle, with descendants numbering over 5 million; Elevation was named Bull of the Century by Holstein International Association in 1999. Elevation was the result of a cross of Tidy Burke Elevation being used on one of the best ever Ivanhoe daughters, Round Oak Ivanhoe Eve. He was unsurpassed at the time for type and production. Penstate Ivanhoe Star, b. 1963, sired daughters with similar stature and dairy traits as the Ivanhoes, but with higher production. He also notably sired Carlin-M Ivanhoe Bell, the great production bull of the 80s, known also for good udders, feet and legs. A present-day genetic disorder, complex vertebral malformation, has been traced to Carlin-M Ivanhoe Bell and Penstate Ivanhoe Star. Hilltop Apollo Ivanhoe, b. 1960, sire of Whittier Farms Apollo Rocket, b. 1967, was the highest milk production bull of the 1970s, and Wayne Spring Fond Apollo, b. 1970, was the first bull ever to have a milk transmission index of over 2,000 M and have a positive type index. "Wayne" had a very famous daughter, To-Mar Wayne Hay, that was dam of the great To-Mar Blackstar, b. 1983. Hanoverhill Starbuck, b. 1979, from sire RORA Elevation and dam Anacres Astronaut Ivanhoe. Between 1985 and 1995, Starbuck sired over 200,000 female offspring and 209 proven male offspring. 
Genetic anomalies Cloning Starbuck II, a clone of the famous CIAQ sire Hanoverhill Starbuck, was born on 7 September 2000 in Saint-Hyacinthe. The clone is a result of the combined efforts of CIAQ, L'Alliance Boviteq Inc, and the Faculté de médecine vétérinaire de l'Université de Montréal. The cloned calf was born 21 years and 5 months after Starbuck's own birth date and just under 2 years after his death (17 September 1998). The calf weighed 54.2 kg at birth and showed the same vital signs as calves produced from regular AI or ET. Starbuck II is derived from frozen fibroblast cells recovered one month before the death of Starbuck. The Semex Alliance also cloned other bulls, such as Hartline Titanic, Canyon-Breeze Allen, Ladino-Park Talent, and Braedale Goldwyn. A huge controversy in the UK in January 2007 linked the cloning company Smiddiehill and Humphreston Farm, owned by father-and-son team Michael and Oliver Eaton (also owners of the large, Birmingham-based stone product business, BS Eaton), with a calf that was cloned from a cow in Canada. Despite their efforts to block the farm from view of the press, news cameras broadcast this as breaking news on many of the country's top news stations. Since then, this calf has been rumored to have been put down to protect the owners, the Eatons, from invasions of the press. British Friesian cattle While interest in increasing production through indexing and lifetime profit scores drove a huge increase in Holstein bloodlines in the UK, proponents of the traditional British Friesian did not see things that way, and maintain that these criteria do not reflect the true profitability or the production of the Friesian cow. Friesian breeders say modern conditions in the UK, similar to those of the 1950s through to the 1980s, with low milk prices and the need for extensive, low-cost systems for many farmers, may ultimately cause producers to re-examine the attributes of the British Friesian. This animal came to dominate the UK dairy cow population during those years, with exports of stock and semen to many countries throughout the world. Although the idea of "dual-purpose" animals has arguably become outmoded, the fact remains that the Friesian is eminently suitable for many farms, particularly where grazing is a main feature of the system. Proponents argue that Friesians last for more lactations through more robust conformation, thus spreading depreciation costs. There is the added advantage of income from the male calf, which can be placed into barley beef systems (finishing from 11 months) or taken on as steers to finish at two years, on a cheap system of grass and silage. Very respectable grades can be obtained, commensurate with beef breeds, thereby providing extra income for the farm. Such extensive, low-cost systems may also imply lower veterinary costs, through good fertility, resistance to lameness, and a tendency to higher protein percentage and, therefore, higher milk price. An 800-kg Holstein has a higher daily maintenance energy requirement than the 650-kg Friesian. Friesians have also been disadvantaged through the comparison of their type to a Holstein base. It has been suggested that a separate "index" be composed to better reflect the aspects of maintenance for bodyweight, protein percentage, longevity, and calf value. National Milk Records figures suggest the highest yields are achieved between the fifth and seventh lactations; this is particularly so for Friesians, with a greater lift for mature cows, sustained over more lactations. 
However, production index only takes the first five lactations into account. British Friesian breeding has certainly not stood still, and through studied evaluation, substantial gains in yield have been achieved without the loss of type. History Friesians were imported into the east coast ports of England and Scotland, from the lush pastures of North Holland, during the 19th century until live cattle importations were stopped in 1892, as a precaution against endemic foot and mouth disease on the Continent. They were so few in number, they were not included in the 1908 census. In 1909, though, the society was formed as the British Holstein Cattle Society, soon to be changed to British Holstein Friesian Society and, by 1918, to the British Friesian Cattle Society. The Livestock Journal of 1900 referred to both the "exceptionally good" and "remarkably inferior" Dutch cattle. The Dutch cow was also considered to require more quality fodder and need more looking after than some English cattle that could easily be out-wintered. In an era of agricultural depression, breed societies notably had flourished, as a valuable export trade developed for traditional British breeds of cattle. At the end of 1912, the herd book noted 1,000 males and 6,000 females, the stock which originally formed the foundation of the breed in England and Scotland. Entry from then until 1921, when grading up was introduced, was by pedigree only. No other Friesian cattle were imported until the official importation of 1914, which included several near descendants of the renowned dairy bull Ceres 4497 F.R.S. These cattle were successful in establishing the Friesian as an eminent, long-lived dairy breed in Britain. This role was continued in the 1922 importation from South Africa through Terling Marthus and Terling Collona, which were also near descendants of Ceres 4497. The 1936 importation from the Netherlands introduced a more dual-purpose type of animal, the Dutch having moved away from the Ceres line in the meantime. The 1950 importation has a lesser influence on the breed today than the previous importations, although various Adema sons were used successfully in some herds. The Friesian enjoyed great expansion in the 1950s, through to the 80s, until the increased Holstein influence on the national herd in the 1990s ; a trend which is being questioned by some commercial dairy farmers in the harsh dairying climate that prevails today, with the need to exploit grazing potential to the fullest. Friesian semen is once again being exported to countries with grass-based systems of milk production. The modern Friesian is pre-eminently a grazing animal, well able to sustain itself over many lactations, on both low-lying and upland grasslands, being developed by selective breeding over the last 100 years. Some outstanding examples of the breed have 12 to 15 lactations to their credit, emphasising their inherent natural fecundity. In response to demand, protein percentages have been raised across the breed, and herd protein levels of 3.4% to 3.5% are not uncommon. Whilst the British Friesian is first and foremost a dairy breed, giving high lifetime yields of quality milk from home-produced feeds, by a happy coincidence, surplus male animals are highly regarded as producers of high quality, lean meat, whether crossed with a beef breed or not. Beef-cross heifers have long been sought after as ideal suckler cow replacements. 
Although understanding the need to change the society's name to include the word Holstein in 1988, British Friesian enthusiasts are less than happy now that the word Friesian has been removed from the name. With the history of the breed spanning 100 years, the British Friesian cow is continuing to prove her worth. The general robustness and proven fertility provide an ideal black and white cross for Holstein breeders seeking these attributes. The disposal of male black and white calves continues to receive media attention, and would appear to be a waste of a valuable resource. One of the great strengths of the British Friesian is the ability of the male calf to finish and grade satisfactorily, either in intensive systems, or as steers, extensively. This latter system may become increasingly popular due to the prohibitive increase in grain prices. The robustness of the British Friesian and its suitability to grazing and forage systems is well known. Compared to the Holsteins, the Friesians: Calve more frequently Calve more often in their lifetimes Need fewer replacements Provide valuable male calves Have lower cell counts Have higher fat and protein percentages Polled Holsteins The first polled Holstein was identified in the United States in 1889. Polled Holsteins have the dominant polled gene which makes them naturally hornless. The polled gene has historically had a very low gene frequency in the Holstein breed. However, with animal welfare concerns surrounding the practice of dehorning, the interest in polled genetics is growing rapidly. Red and white Holsteins The expression of red colour replacing the black in Holsteins is a function of a recessive gene. Assuming the allele 'B' stands for the dominant black and 'b' for the recessive red, cattle with the paired genes 'BB', 'Bb', or 'bB' would be black and white, while 'bb' cattle would be red and white. History Earlier 13th-century records show cattle of "broken" colours entered the Netherlands from Central Europe. Most foundation stock in the US were imported between 1869 and 1885. A group of early breeders decreed that animals of any colour other than black and white would not be accepted in the herd book, and that the breed would be known as Holsteins. There were objections, saying that quality and not colour should be the aim, and that the cattle should be called "Dutch" rather than Holsteins. Only a small number of carriers were identified over the hundred-year span from the early importations until they were accepted into the Canadian and American herd books in 1969 and 1970, respectively. Most of the early accounts of red calves being born to black and white parents were never documented. A few stories of "reds" born to elite parents persist over time, as there is a tendency to credit the ancestor with the highest (closest) relationship to a red-carrier animal as the one that transmitted the trait, whereas sometimes it is the other parental line that has passed it on, even though the ancestor responsible may have entered the pedigree several generations earlier. In 1952, a sire in an artificial insemination (AI) unit in the US was a carrier of red coat colour. Although the AI unit reported the condition and advised breeders as to its mode of inheritance, almost a third of the breeding unit's Holstein inseminations that year were to that red-carrier bull. That year, American AI units had used 67 red-factor bulls that had sired 8250 registered progeny. In spite of this, any change to the colour marking rules was rejected. 
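The Mendelian inheritance described above can be made concrete with a short sketch. The model below is written purely for illustration (a simple two-allele, dominant/recessive model using the 'B'/'b' labels from this section; no herd data are involved): it shows why a mating of two red carriers is expected to produce a red and white calf about one time in four, the 25% figure referred to below, and why a carrier bull mated to non-carrier cows never reveals itself through a red calf.

# Minimal Mendelian model of the red/white coat-colour factor described above.
# 'B' = dominant black allele, 'b' = recessive red allele. Illustrative only.
from collections import Counter
from itertools import product

def offspring_genotypes(sire: str, dam: str) -> Counter:
    """Count the equally likely allele combinations from one sire and one dam genotype."""
    return Counter("".join(sorted(pair)) for pair in product(sire, dam))

def fraction_red(sire: str, dam: str) -> float:
    """Probability that a calf from this mating is red and white (genotype 'bb')."""
    counts = offspring_genotypes(sire, dam)
    return counts["bb"] / sum(counts.values())

print(fraction_red("Bb", "Bb"))  # 0.25 -> two carriers: one calf in four expected red
print(fraction_red("Bb", "BB"))  # 0.0  -> carrier bull on non-carrier cow: never red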
The Red and White Dairy Cattle Association (RWDCA) began registry procedures in 1964 in the United States. Its first members were Milking Shorthorn breeders, who wanted a dairy registry for the cattle they had bred in prior years, including some red and white Holsteins. When Milking Shorthorn breeders were looking for potential outcrossing individuals to improve milk production, red and white Holsteins came into the picture, since the red colour factor is the same for both breeds. The RWDCA had adopted an "open herd book" policy, and the Red and White Holstein became the major player. The red trait was thus able to survive the attempts to eradicate it that came from all sides of the Holstein industry. It was inevitable that even when a red calf was culled, the herd owner rarely did anything to remove the dam from his herd and only hoped she would not have another red calf. Many red calves, born in both countries prior to the 1970s, were quietly disposed of, with a view to preserving the acceptance of their elite pedigrees. Also, thousands of Holsteins were imported from Canada each year, and many were carriers. More than 14,000 Holsteins were exported to the United States in 1964 and again in 1965. This was at a time when both countries were debating the "red question". While the United States was trying to eliminate the red trait, the Canadian imports simply counterbalanced the US effort to reduce its incidence. Canada's number one red-carrier sire in the 1940s was A.B.C. Reflection Sovereign. His sons and grandsons in the 1950s and '60s spread the red gene throughout Canada and increased its frequency in the United States. Three other big names siring Red and Whites in the United States were Rosafe Citation R, Roeland Reflection Sovereign, and Chambric A.B.C. The red trait was readily available in Canadian Holstein genetics. Early on, there was criticism of the policy of the Canadian AI units to remove bulls found to carry red. A number of superior bulls were slaughtered or exported. The studs were simply supporting the Canadian policy to prevent the intensification of the red recessive in the breed. The phrase "carries the red factor" had to be included in the description, and excessive promotion of unproven red-factor bulls was discouraged. They later added the aim of permitting intelligent breeders to use any red-carrier sire that had an outstanding proof for production and type. It became obvious that AI was the primary way of finding out which bulls were red carriers. Prior to AI, few red-carrier sires were uncovered because their service was limited to one or a few herds. Such herds often had no carrier females, and there was only a 25% chance that a carrier bull mated to a carrier female would produce a red calf. If a red and white calf were dropped, it was often concealed and quietly removed from the herd. In 1964, the Netherlands Herd Book Society indicated a breakdown of 71% Black and White Friesian and 28% Red and Whites. A herd book that accepted Red and Whites had already been established in the United States. A separate herd book for Canadian Red and Whites was then established, following which Red and Whites became acceptable to the major Canadian (export) markets. The sales ring began to establish interest in the new breed. The US Holstein-Friesian Association and its membership worked diligently from its early days until 1970 to eliminate the red trait from the registered population. 
However, once the door was open, red and whites began to appear in some of the more elite herds. The rush to get the best of Canadian breeding even prior to the opening of the herd book brought red calves to many dairymen who had never even seen one. Canadian Red and Whites became eligible for registration in the herd book on July 1, 1969, through an alternate registry. Red and Whites were to be listed with the suffix –RED and Black and Whites with ineligible markings would be registered with the suffix –ALT. Both groups and their progeny would be listed only in the Alternate book and the suffixes had to be part of the name. In the Canadian herd books, all –Alt and -Red animals were listed in the regular herd book in registration number order and were identified with an A in front of their numbers. The Alternates were separate in name only. The A in front of the registration number was discontinued in 1976 and the –Alt suffix was dropped in 1980, but –Red was continued. It did not bar the registration of animals whose hair turned from red to black. The US Holstein Association decided not to have a separate herd book for red and whites and off-color animals. The suffixes of –Red and –OC would be used, and numbering would be consecutive. The first red and white Holsteins were recorded with an R in front of their numbers. 212 males and 1191 females were recorded in the initial group of red registrations. Red and Whites registered in the Canadian herd book numbered 281 in 1969 and 243 in 1970. An American Breeders Service ad in the Canadian Holstein Journal in 1974 on Hanover-Hill Triple Threat mentioned one of several colour variants that were not true red. Its existence was undoubtedly common knowledge among breeders in both countries, but until that time, it had not been mentioned in print. Calves were born red and white and registered as such, but over the first six months of age turned black or mostly black with some reddish hairs down the backline, around the muzzle and at the poll. The hair coat colour change became known as Black/Red and sometimes as Telstar/Red, since the condition appeared in calves sired by Roybrook Telstar. Telstar was the sire of Triple Threat, but nothing about this had hitherto been in print about Telstar, which was by then over 10 years old. Black/Reds were often discriminated against when sold and were barred from Red and White-sponsored shows. In 1984, Holstein Canada considered recoding B/R bulls that had always been coded simply as red carriers, a designation that was not acceptable to all buyers. The breed agreed to change after checking with other breed associations and with the AI industry. In 1987, Holstein Canada and the Canadian AI industry modified their coding procedures to distinguish between Black/Red and true red colour patterns for bulls. Holstein Canada dropped the suffix Red as a part of the name in 1990, but continued to carry it as part of the birth date and other codes field. 
Notable Holsteins Ubre Blanca, Fidel Castro's cow (1972–1985) Pauline Wayne, US president William Howard Taft's cow RORA Elevation, a prize-winning bull Pawnee Farm Arlinda Chief, a bull with great genes for milk production; however, he also introduced a lethal gene into the population Belle Sarcastic, "unofficial mascot" of Michigan State University Archives and Historical Collections Kian (1997–2013), the first red Holstein bull whose semen has sold more than one million units worldwide Osborndale Ivanhoe (1952–1970), Holstein bull owned by Frances Osborne Kellogg and mated 100,187 times and whose semen was shipped all over the world. Toystory (2001–2014), Holstein bull whose semen has sold more than 2.4 million units worldwide and has been estimated to have sired over 500,000 offspring Knickers, an extremely large steer from Western Australia, which was making worldwide headlines in November 2018 for being too large to be processed at the local abattoirs. Lulubelle III, featured on the cover of Pink Floyd's Atom Heart Mother album
Biology and health sciences
Cattle
null
370022
https://en.wikipedia.org/wiki/Suidae
Suidae
Suidae is a family of artiodactyl mammals which are commonly called pigs, hogs, or swine. In addition to numerous fossil species, 18 extant species are currently recognized (or 19 counting domestic pigs and wild boars separately), classified into between four and eight genera. Within this family, the genus Sus includes the domestic pig, Sus scrofa domesticus or Sus domesticus, and many species of wild pig from Europe to the Pacific. Other genera include the babirusas and the warthogs. All suids, or swine, are native to the Old World, ranging from Asia to Europe and Africa. The earliest fossil suids date from the Oligocene epoch in Asia, and their descendants reached Europe during the Miocene. Several fossil species are known and show adaptations to a wide range of different diets, from strict herbivory to possible carrion-eating (in Tetraconodontinae). Physical characteristics Suids belong to the order Artiodactyla, and are generally regarded as the living members of that order most similar to the ancestral form. Unlike most other members of the order, they have four toes on each foot, although they walk only on the middle two digits, with the others staying clear of the ground. They also have a simple stomach, rather than the more complex ruminant stomach found in most other artiodactyl families. They are small to medium-sized animals, ranging in length and weight from the pygmy hog, the smallest member of the family, to the giant forest hog, the largest. They have large heads and short necks, with relatively small eyes and prominent ears. Their heads have a distinctive snout, ending in a disc-shaped nose. Suids typically have a bristly coat, and a short tail ending in a tassel. The males possess a corkscrew-shaped penis, which fits into a similarly shaped groove in the female's cervix. Suids have a well-developed sense of hearing, and are vocal animals, communicating with a series of grunts, squeals, and similar sounds. They also have an acute sense of smell. Many species are omnivorous, eating grass, leaves, roots, insects, worms, and even frogs or mice. Other species are more selective and purely herbivorous. Their teeth reflect their diet, and suids retain the upper incisors, which are lost in most other artiodactyls. The canine teeth are enlarged to form prominent tusks, used for rooting in moist earth or undergrowth, and in fighting. They have only a short diastema. The number of teeth varies between species, but the general dental formula is 3.1.4.3 over 3.1.4.3. Behavior and reproduction Suids are intelligent and adaptable animals. Adult females (sows) and their young travel in a group known as a sounder (see List of animal names), while adult males (boars) are either solitary or travel in small bachelor groups. Males generally are not territorial, and come into conflict only during the mating season. Litter size varies between one and twelve, depending on the species. The mother prepares a grass nest or similar den, which the young leave after about ten days. Suids are weaned at around three months, and become sexually mature at 18 months. In practice, however, male suids are unlikely to gain access to sows in the wild until they have reached their full physical size, at around four years of age. In all species, the male is significantly larger than the female, and possesses more prominent tusks. Classification Eighteen extant species of suid are currently recognised. Phylogeny A cladogram of Suidae is given in Mikko's Phylogeny Archive, based on McKenna & Bell (1997), Liu (2003), and Harris & Liu (2007).
Biology and health sciences
Pigs_2
Animals
370312
https://en.wikipedia.org/wiki/Bitter%20orange
Bitter orange
The bitter orange, sour orange, Seville orange, bigarade orange, or marmalade orange is the hybrid citrus tree species Citrus × aurantium, and its fruit. It is native to Southeast Asia and has been spread by humans to many parts of the world. It is a cross between the pomelo, Citrus maxima, and the wild type mandarin orange, Citrus reticulata. The bitter orange is used to make essential oil, used in foods, drinks, and pharmaceuticals. The Seville orange is prized for making British orange marmalade. Definition In some proposed systems, the species Citrus × aurantium includes not only the bitter orange proper, but all other hybrids between the pomelo and the wild type mandarin, namely the sweet orange, the grapefruit, and all cultivated mandarins. This article only deals with the bitter orange proper. History The bitter orange, like many cultivated Citrus species, is a hybrid, in its case of the wild mandarin and pomelo. The bitter orange spread from Southeast Asia via India and Iran to the Islamic world as early as 700 AD in the Arab Agricultural Revolution. After the Columbian exchange, the pomelo was introduced to the New World, starting in Mexico by 1568. Botany Description The bitter orange has orange fruit with a distinctly bitter or sour taste. The tree has alternate simple leaves on long petioles; there are long thorns on the petiole. The trees require little care and may live for as long as 600 years. It grows in subtropical regions but can tolerate a brief frost. Pests and diseases The bitter orange has many of the same pests and diseases as other citrus fruits. Viral diseases include citrus tristeza virus, crinkly leaf virus, and xyloporosis. Among the many fungal diseases are anthracnose, dieback, and heart rot. Varieties C. × aurantium var. myrtifolia is possibly a distinct species, Citrus myrtifolia. The 'Chinotto' cultivar is used to make the drink of the same name. C. × aurantium var. daidai, daidai, is used in Chinese medicine and in tea. C. × aurantium subsp. currassuviencis, laraha, grows on the Caribbean island of Curaçao. The dried peel is used in Curaçao liqueur. Among the many related species is Citrus bergamia, the Bergamot orange. This is probably a bitter orange and limetta hybrid; it is cultivated in Italy for the production of bergamot oil, a component of many brands of perfume and tea, especially Earl Grey tea. It is a less hardy plant than other bitter orange varieties. Uses Culinary While the raw pulp is not edible, bitter orange is widely used in cooking. The Seville orange (the usual name in this context) is prized for making British orange marmalade, being higher in pectin than the sweet orange, and therefore giving a better set and a higher yield. Once a year, oranges of this variety are collected from trees in Seville and shipped to Britain to be used in marmalade. However, the fruit is rarely consumed locally in Andalusia. This reflects the historic Atlantic trading relationship with Portugal and Spain; an early recipe for 'marmelet of oranges' was recorded by Eliza Cholmondeley in 1677. Bitter orange—bigarade—was used in all early recipes for duck à l'orange, originally called canard à la bigarade. Malta too has a tradition of making bitter oranges into marmalade. In Finland, mämmi is a fermented malted rye dough flavoured with ground Seville orange zest. Across Scandinavia, bitter orange peel is used in dried, ground form in baked goods such as Christmas bread and gingerbread. 
In Greece, the nerántzi is one of the most prized fruits used for spoon sweets. In Adana province, Turkey, bitter orange jam is a principal dessert. Bitter oranges are made into chutneys in India, either in the style of a raita with curds, or roasted, spiced, and sweetened to form a condiment that can be preserved in jars. In Yucatán (Mexico), it is a main ingredient of the cochinita pibil. In Suriname, its juice is used in the well-known dish pom. An essential oil is extracted from the peel of dried, unripe bitter oranges; C. aurantium var. curassaviensis in particular is used in Curaçao liqueur. An oil is pressed from the fresh peel of ripe fruit in many countries and used in ice creams, puddings, sweets, soft and alcoholic drinks, and pharmaceuticals. The flowers are distilled to yield Neroli oil and orange flower water, with similar uses. Neroli oil is also employed in perfumes. The peel of bitter oranges is used as a spice in Belgian Witbier (white beer), for orange-flavored liqueurs such as Cointreau, and to produce bitters such as Oranjebitter. It is a component of Nordic hot spiced wine, glögg. Rootstock, wood, and soap The bitter orange is used as a rootstock in groves of sweet orange. The fruit and leaves make lather and can be used as soap. The hard, white or light-yellow wood is used in woodworking and made into baseball bats in Cuba. Herbal stimulant Extracts of bitter orange and its peel have been marketed as dietary supplements purported to act as a weight-loss aid and appetite suppressant. Bitter orange contains the tyramine metabolites N-methyltyramine, octopamine, and synephrine, substances similar to epinephrine, which act on the α1 adrenergic receptor to constrict blood vessels and increase blood pressure and heart rate. Following bans on the herbal stimulant ephedra in the U.S., Canada, and elsewhere, bitter orange has been substituted into "ephedra-free" herbal weight-loss products by dietary supplement manufacturers. Bitter orange is believed to cause the same spectrum of adverse events as ephedra. Case reports have linked bitter orange supplements to strokes, angina, ischemic colitis, and myocardial infarction. The U.S. National Center for Complementary and Integrative Health found "little evidence that bitter orange is safer to use than ephedra." Drug interactions Bitter orange may have serious grapefruit-like drug interactions with medicines such as statins (to lower cholesterol), nifedipines (to lower blood pressure), some anti-anxiety drugs, and some antihistamines.
Biology and health sciences
Citrus fruits
Plants
370666
https://en.wikipedia.org/wiki/Passenger%20ship
Passenger ship
A passenger ship is a merchant ship whose primary function is to carry passengers on the sea. The category does not include cargo vessels which have accommodations for limited numbers of passengers, such as the ubiquitous twelve-passenger freighters once common on the seas, in which the transport of passengers is secondary to the carriage of freight. The type does, however, include many classes of ships designed to transport substantial numbers of passengers as well as freight. Indeed, until recently virtually all ocean liners were able to transport mail, package freight and express, and other cargo in addition to passenger luggage, and were equipped with cargo holds and derricks, kingposts, or other cargo-handling gear for that purpose. Only in more recent ocean liners and in virtually all cruise ships has this cargo capacity been eliminated. While typically passenger ships are part of the merchant marine, passenger ships have also been used as troopships and often are commissioned as naval ships when used for that purpose. Description Passenger ships include ferries, which are vessels for day-to-day or overnight short-sea trips moving passengers and vehicles (whether road or rail); ocean liners, which typically are passenger or passenger-cargo vessels transporting passengers and often cargo on longer line voyages; and cruise ships, which often transport passengers on round-trips, in which the trip itself and the attractions of the ship and ports visited are the principal draw. There are three main types: cruise ships, ferries, and ocean liners. Passenger ship types Cruise ships: For a long time, cruise ships were smaller than the old ocean liners had been, but in the 1980s, this changed when Knut Kloster, the director of Norwegian Caribbean Lines, bought one of the biggest surviving liners, the , and transformed her into a huge cruise ship, which he renamed the SS Norway. Her success demonstrated that there was a market for large cruise ships. Successive classes of ever-larger ships were ordered, until the Cunard liner was finally dethroned from her 56-year reign as the largest passenger ship ever built (a dethronement that led to numerous further dethronements from the same position). Ferries: They are vessels for day-to-day or overnight short-sea trips moving passengers and vehicles (whether road or rail). There are also cruise ferries, designed for longer routes lasting from one to a couple of days. They are named such because they tend to include amenities common on cruise ships (pools, discos, spas, etc.). Ocean liners: An ocean liner is the traditional form of passenger ship. Once such liners operated on scheduled line voyages to all inhabited parts of the world. With the advent of airliners transporting passengers and specialized cargo vessels hauling freight, line voyages have almost died out. But with their decline came an increase in sea trips for pleasure and fun, and in the latter part of the 20th century ocean liners gave way to cruise ships as the predominant form of large passenger ship, carrying from hundreds to thousands of people, with the main area of activity changing from the North Atlantic Ocean to the Caribbean Sea. Cruise ships vs. ocean liners Although some ships have characteristics of both types, the design priorities of the two forms are different: ocean liners value speed and traditional luxury while cruise ships value amenities (swimming pools, theaters, ballrooms, casinos, sports facilities, etc.) rather than speed. 
These priorities produce different designs. In addition, ocean liners typically were built to cross the Atlantic Ocean between Europe and the United States or travel even further to South America or Asia, while cruise ships typically serve shorter routes with more stops along coastlines or among various islands. Both the (QE2) (1969) and her successor as Cunard's flagship (QM2), which entered service in 2004, are of hybrid construction. Like transatlantic ocean liners, they are fast ships and strongly built to withstand the rigors of the North Atlantic in line voyage service, but both ships are also designed to operate as cruise ships, with the amenities expected in that trade. QM2 was superseded by the of the Royal Caribbean line as the largest passenger ship ever built; however, QM2 still holds the record for the largest ocean liner. The Freedom of the Seas was superseded by the in October 2009. Measures of size Because of changes in historic measurement systems, it is difficult to make meaningful and accurate comparisons of ship sizes. Historically, gross register tonnage (GRT) was a measure of the internal volume of certain enclosed areas of a ship divided into "tons" equivalent to of space. Gross tonnage (GT) is a comparatively new measure, adopted in 1982 to replace GRT. It is calculated based on "the moulded volume of all enclosed spaces of the ship", and is used to determine things such as a ship's manning regulations, safety rules, registration fees, and port dues. It is produced by a mathematical formula, and does not distinguish between mechanical and passenger spaces, and thus is not directly comparable to historic GRT measurements. Displacement, a measure of mass, is not commonly used for passenger vessels. While a high displacement can indicate better seakeeping abilities, gross tonnage is promoted as the most important measure of size for passenger vessels, as the ratio of gross tonnage per passenger – the Passenger/Space Ratio – gives a sense of the spaciousness of a ship, an important consideration in cruise liners where the onboard amenities are of high importance. Historically, a ship's GRT and displacement were somewhat similar in number. For example, Titanic, put in service in 1912, had a GRT of 46,328 and a displacement reported at over 52,000 tons. Similarly, Cunard Line's mid-1930s and RMS Queen Elizabeth were of approximately 81,000 – 83,000 GRT and had displacements of over 80,000 tons. Today, due to changes in construction, engineering, function, architecture, and, crucially, measurement system – which measures functionally all of a ship's internal volume, not just part of it – modern passenger ships' GT values are much higher than their displacements. The Cunard Queens' current successor, the 148,528 GT Queen Mary 2, has been estimated to displace only approximately 76,000 tons. With the completion in 2009 of the first of the over 225,000 GT cruise ships, Oasis of the Seas, passenger ships' displacements rose to 100,000 tons, less than half their GT. This new class is characteristic of an explosive growth in gross tonnage, which has more than doubled from the largest cruise ships of the late 1990s. This reflects the much lower relative weight of enclosed space in the comparatively light superstructure of a ship versus its heavily reinforced and machinery-laden hull space, as cruise ships have grown slab-sided vertically from their maximum beam to accommodate more passengers within a given hull size. 
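For readers who want to see how the tonnage figures above relate to one another, the sketch below applies the gross tonnage formula from the IMO's 1969 Tonnage Convention (GT = K × V, with K = 0.2 + 0.02 × log10 V, and V the moulded volume of enclosed spaces in cubic metres) and computes a Passenger/Space Ratio. The enclosed volume and passenger complement used here are illustrative assumptions, not published figures for any particular ship; only the 148,528 GT value for Queen Mary 2 is taken from the text above.

```python
from math import log10

def gross_tonnage(enclosed_volume_m3: float) -> float:
    """Gross tonnage per the IMO 1969 Tonnage Convention:
    GT = K * V, where V is the moulded volume of enclosed spaces
    in cubic metres and K = 0.2 + 0.02 * log10(V)."""
    k = 0.2 + 0.02 * log10(enclosed_volume_m3)
    return k * enclosed_volume_m3

def passenger_space_ratio(gt: float, passengers: int) -> float:
    """Gross tons per passenger, a rough index of onboard spaciousness."""
    return gt / passengers

if __name__ == "__main__":
    # Assumed enclosed volume of 400,000 m3 (illustrative only).
    volume = 400_000.0
    print(f"{volume:,.0f} m3 of enclosed space -> about {gross_tonnage(volume):,.0f} GT")

    # Queen Mary 2's 148,528 GT (from the text) with an assumed
    # complement of roughly 2,700 passengers (illustrative only).
    ratio = passenger_space_ratio(148_528, 2_700)
    print(f"Passenger/Space Ratio: about {ratio:.0f} GT per passenger")
```

With these assumed numbers the ratio works out to roughly 55 GT per passenger; higher values generally indicate a more spacious ship.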
Safety regulations Passenger ships are subject to two major International Maritime Organization requirements: to perform musters of the passengers (...) within 24 hours after their embarkation and to be able to perform full abandonment within a period of 30 minutes from the time the abandon-ship signal is given. Transportation Research Board research from 2019 reported that passenger vessels, much more than freight vessels, are subject to degradations in stability as a result of increases in lightship weight. Passenger vessels appear to be more pressing candidates for lightship weight-tracking programs than freight vessels. Design considerations Passengers on ships without backup generators suffer substantial distress due to lack of water, refrigeration, and sewage systems in the event of loss of the main engines or generators due to fire or other emergency. Power is also unavailable to the crew of the ship to operate electrically powered mechanisms. Lack of an adequate backup system to propel the ship can, in rough seas, render it dead in the water and result in loss of the ship. The 2006 Revised Passenger Ship Safety Standards address these issues, and others, requiring that ships ordered after July 2010 conform to safe return to port regulations; however, as of 2013, many ships remain in service which lack this capacity. After October 1, 2010, the International Convention for the Safety of Life at Sea (SOLAS) requires that passenger ships operating in international waters either be constructed or upgraded to exclude combustible materials. It is believed that some owners and operators of ships built before 1980, which are required to upgrade or retire their vessels, will be unable to conform to the regulations. Fred. Olsen Cruise Line's , built in 1966, was one such ship, but was reported to be headed for inter-island service in Venezuelan waters. External safety measures The International Ice Patrol was formed in 1914 after the sinking of the Titanic to address the long-standing issue of iceberg collisions. Other regulations Passengers and their luggage at sea are covered by the Athens Convention.
Technology
Maritime transport
null
370734
https://en.wikipedia.org/wiki/Rhodochrosite
Rhodochrosite
Rhodochrosite is a manganese carbonate mineral with chemical composition MnCO3. In its pure form (rare), it is typically a rose-red colour, but it can also be shades of pink to pale brown. It streaks white, and its Mohs hardness varies between 3.5 and 4.5. Its specific gravity is between 3.45 and 3.6. The crystal system of rhodochrosite is trigonal, with a structure and cleavage in the carbonate rhombohedral system. The carbonate ions () are arranged in a triangular planar configuration, and the manganese ions (Mn2+) are surrounded by six oxygen ions in an octahedral arrangement. The MnO6 octahedra and CO3 triangles are linked together to form a three-dimensional structure. Crystal twinning is often present. It can be confused with the manganese silicate rhodonite, but is distinctly softer. Rhodochrosite is formed by the oxidation of manganese ore, and is found in South Africa, China, and the Americas. It is one of the national symbols of Argentina and the state of Colorado. Rhodochrosite forms a complete solid solution series with iron carbonate (siderite). Calcium (as well as magnesium and zinc, to a limited extent) frequently substitutes for manganese in the structure, leading to lighter shades of red and pink, depending on the degree of substitution; it is the manganese itself that gives rhodochrosite its rose colour. Occurrence and discovery Rhodochrosite occurs as a hydrothermal vein mineral along with other manganese minerals in low-temperature ore deposits, as in the silver mines of Romania where it was first found. Banded rhodochrosite is mined in Capillitas, Argentina. It was first described in 1813 in reference to a sample from Cavnic, Maramureş, present-day Romania. The name is derived from the combination of Greek words ροδόν (rodon, meaning rose) and χρωσις (chrosis, meaning coloring). Use Rhodochrosite is mainly used as an ore of manganese, which is a key component of low-cost stainless steel formulations and certain aluminium alloys. Quality banded specimens are often used for decorative stones and jewellery. Due to its softness and perfect cleavage, it is rarely found faceted in jewellery, though it is sought after by many collectors. Manganese carbonate is extremely destructive to the amalgamation process historically used in the concentration of silver ores, and was often discarded on the mine dump. Culture Rhodochrosite is Argentina's "national gemstone". Colorado officially named rhodochrosite as its state mineral in 2002. It is sometimes called "Rosa del Inca", "Inca Rose" or Rosinca. Gallery
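Since rhodochrosite is valued chiefly as a manganese ore, a quick stoichiometric check shows how much manganese the pure mineral carries. This is a minimal sketch using standard atomic masses and assuming stoichiometrically pure MnCO3; natural samples, in which calcium, magnesium, iron, or zinc substitute for manganese, will grade lower.

```python
# Mass fraction of manganese in stoichiometrically pure MnCO3 (rhodochrosite).
# Standard atomic masses in g/mol; natural, substituted samples grade lower.
ATOMIC_MASS = {"Mn": 54.938, "C": 12.011, "O": 15.999}

def mnco3_molar_mass() -> float:
    return ATOMIC_MASS["Mn"] + ATOMIC_MASS["C"] + 3 * ATOMIC_MASS["O"]

def mn_mass_fraction() -> float:
    return ATOMIC_MASS["Mn"] / mnco3_molar_mass()

if __name__ == "__main__":
    print(f"MnCO3 molar mass: {mnco3_molar_mass():.2f} g/mol")   # about 114.95 g/mol
    print(f"Mn content of pure MnCO3: {mn_mass_fraction():.1%}")  # about 47.8%
```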
Physical sciences
Minerals
Earth science
371224
https://en.wikipedia.org/wiki/Calcareous%20sponge
Calcareous sponge
The calcareous sponges (class Calcarea) are members of the animal phylum Porifera, the cellular sponges. They are characterized by spicules made of calcium carbonate, in the form of high-magnesium calcite or aragonite. While the spicules in most species are triradiate (with three points in a single plane), some species may possess two- or four-pointed spicules. Unlike other sponges, calcareans lack microscleres, tiny spicules which reinforce the flesh. In addition, their spicules develop from the outside-in, mineralizing within a hollow organic sheath. Biology All sponges in this class are strictly marine, and, while they are distributed worldwide, most are found in shallow tropical waters. Like nearly all other sponges, they are sedentary filter feeders. All three sponge body plans (asconoid, syconoid, and leuconoid) can be found within the class Calcarea. Typically, calcareous sponges are small, measuring less than in height, and drab in colour. However, a few brightly coloured species are also known. Like the Homoscleromorpha, calcareous sponges are exclusively viviparous. Calcareous sponges vary from radially symmetrical vase-shaped body types to colonies made up of a meshwork of thin tubes, or irregular massive forms. The skeleton has either a mesh or honeycomb structure of interlocking spicules. Some extinct species were hypercalcified, meaning that the spicule-based skeleton is cemented together by solid calcite. Classification Of the approximately 15,000 living species of Porifera, only around 400 are calcareans. Some older studies applied the name Calcispongiae to the class, though "Calcarea" is much more common in modern nomenclature. Calcarean sponges likely first appeared during the Cambrian Period. The oldest putative calcarean genus is Gravestockia, from the "Atdabanian" (Cambrian Stage 3) of Australia. Calcareans are probably descended from "heteractinid" sponges, which first appeared in the early Cambrian. Calcareans reached their greatest diversity during the Cretaceous period. Some molecular analyses suggest the class Calcarea is not exclusively related to other sponges, and should thus be designated as a phylum. This would also render Porifera (the sponge phylum) paraphyletic. Borchiellini et al. (2001) argued that calcareans were more closely related to Eumetazoa (non-sponge animals) than to other sponges. A few studies have also supported a sister group relationship between calcareans and Ctenophora (comb jellies). Many authors have strongly doubted the hypothesis of sponge paraphyly, arguing that genetic studies have incomplete sampling and are incompatible with the unique anatomical traits shared by living sponges. Calcarea is divided into two subclasses (Calcinea and Calcaronea) and a number of orders. The two subclasses are mainly distinguished by spicule orientation, soft tissue and developmental traits. For example, calcineans develop from a parenchymella (a larva with a solid center and radial symmetry). Calcaroneans, on the other hand, develop from an amphiblastula (a larva with a hollow center and semi-bilateral symmetry). Class Calcarea Subclass Calcinea Order Clathrinida [Holocene] Order Murrayonida [Holocene] Subclass Calcaronea Order Baerida [Pleistocene–Holocene] Order Leucosolenida / Sycettida [Carboniferous?–Holocene] Incertae sedis Order Lithonida [Jurassic–Holocene] Order †Sphaerocoeliida [Permian–Cretaceous] Order †Stellispongiida [Permian–Holocene?] Genus †Gravestockia [Cambrian] Gallery
Biology and health sciences
Porifera
Animals
371272
https://en.wikipedia.org/wiki/Thrips
Thrips
Thrips (order Thysanoptera) are minute (mostly long or less), slender insects with fringed wings and unique asymmetrical mouthparts. Entomologists have described approximately 7,700 species. They fly only weakly and their feathery wings are unsuitable for conventional flight; instead, thrips exploit an unusual mechanism, clap and fling, to create lift using an unsteady circulation pattern with transient vortices near the wings. Thrips are a functionally diverse group; many of the known species are fungivorous. A small proportion of the species are serious pests of commercially important crops. Some of these serve as vectors for over 20 viruses that cause plant disease, especially the Tospoviruses. Many flower-dwelling species bring benefits as pollinators, with some predatory thrips feeding on small insects or mites. In the right conditions, such as in greenhouses, invasive species can exponentially increase in population size and form large swarms because of a lack of natural predators coupled with their ability to reproduce asexually, making them destructive to crops. Their identification to species by standard morphological characteristics is often challenging. Naming and etymology The first recorded mention of thrips dates from the 17th century, and a sketch was made by Philippo Bonanni, a Catholic priest, in 1691. Swedish entomologist Baron Charles De Geer described two species in the genus Physapus in 1744, and Linnaeus in 1746 added a third species and named this group of insects Thrips. In 1836 the Irish entomologist Alexander Henry Haliday described 41 species in 11 genera and proposed the order name of Thysanoptera. The first monograph on the group was published in 1895 by Heinrich Uzel, who is regarded by Fedor et al. as the father of Thysanoptera studies. The generic and English name thrips is a direct transliteration of the Ancient Greek word , thrips, meaning "woodworm". Like some other animal-names (such as sheep, deer, and moose) in English the word "thrips" expresses both the singular and plural, so there may be many thrips or a single thrips. Other common names for thrips include thunderflies, thunderbugs, storm flies, thunderblights, storm bugs, corn fleas, corn flies, corn lice, freckle bugs, harvest bugs, and physopods. The older group name "physopoda" references the bladder-like tips to the tarsi of the legs. The name of the order, Thysanoptera, is constructed from the ancient Greek words , thysanos, "tassel or fringe", and , pteron, "wing", with reference to the insects' fringed wings. Morphology Thrips are small hemimetabolic insects with a distinctive cigar-shaped body plan. They are elongated with transversely constricted bodies. They range in size from in length for the larger predatory thrips, but most thrips are about 1 mm in length. Flight-capable thrips have two similar, strap-like pairs of wings with a fringe of bristles. The wings are folded back over the body at rest. Their legs usually end in two tarsal segments with a bladder-like structure known as an "arolium" at the pretarsus. This structure can be everted by means of hemolymph pressure, enabling the insect to walk on vertical surfaces. They have compound eyes consisting of a small number of ommatidia and three ocelli or simple eyes on the head. Thrips have asymmetrical mouthparts unique to the group. Unlike the Hemiptera (true bugs), the right mandible of thrips is reduced and vestigial – and in some species completely absent. 
The left mandible is used briefly to cut into the food plant; saliva is injected and the maxillary stylets, which form a tube, are then inserted and the semi-digested food pumped from ruptured cells. This process leaves cells destroyed or collapsed, and a distinctive silvery or bronze scarring on the surfaces of the stems or leaves where the thrips have fed. The mouthparts of thrips have been described as “rasping-sucking”, “punching and sucking”, or simply as a specific type of “piercing-sucking” mouthparts. Thysanoptera is divided into two suborders, Terebrantia and Tubulifera; these can be distinguished by morphological, behavioral, and developmental characteristics. Tubulifera consists of a single family, Phlaeothripidae; members can be identified by their characteristic tube-shaped apical abdominal segment, egg-laying atop the surface of leaves, and three "pupal" stages. In the Phlaeothripidae, the males are often larger than females and a range of sizes may be found within a population. The largest recorded phlaeothripid species is about 14 mm long. Females of the eight families of the Terebrantia all possess the eponymous saw-like (see terebra) ovipositor on the anteapical abdominal segment, lay eggs singly within plant tissue, and have two "pupal" stages. In most Terebrantia, the males are smaller than females. The family Uzelothripidae has a single species and it is unique in having a whip-like terminal antennal segment. Evolution The earliest fossil attributed to the thrips is the Permian Permothrips longipennis, although it is probably not a member of this group. By the Early Cretaceous, true thrips became much more abundant. The extant family Merothripidae most resembles these ancestral Thysanoptera, and is probably basal to the order. There are currently over six thousand species of thrips recognized, grouped into 777 extant and sixty fossil genera. Some thrips were pollinators of the Ginkgoales as early as 110–105 Mya, in the Cretaceous. Cenomanithrips primus, Didymothrips abdominalis and Parallelothrips separatus are known from Myanmar amber of the Cenomanian age. Phylogeny Thrips are generally considered to be the sister group to Hemiptera (bugs). The phylogeny of thrips families has been little studied. A preliminary analysis in 2013 of 37 species using 3 genes, as well as a phylogeny based on ribosomal DNA and three proteins in 2012, supports the monophyly of the two suborders, Tubulifera and Terebrantia. In Terebrantia, Melanthripidae may be sister to all other families, but other relationships remain unclear. In Tubulifera, the Phlaeothripidae and its subfamily Idolothripinae are monophyletic. The two largest thrips subfamilies, Phlaeothripinae and Thripinae, are paraphyletic and need further work to determine their structure. The internal relationships from these analyses are shown in the cladogram. 
Taxonomy The following families were recognized as of 2013: Suborder Terebrantia Adiheterothripidae (11 genera) Aeolothripidae (29 genera) – banded thrips and broad-winged thrips Fauriellidae (four genera) †Hemithripidae (one fossil genus, Hemithrips with 15 species) Heterothripidae (seven genera, restricted to the New World) †Jezzinothripidae (included by some authors in Merothripidae) †Karataothripidae (one fossil species, Karataothrips jurassicus) Melanthripidae (six genera of flower feeders) Merothripidae (five genera, mostly Neotropical and feeding on dry-wood fungi) – large-legged thrips †Scudderothripidae (included by some authors in Stenurothripidae) Thripidae (292 genera in four subfamilies, flower-living) – common thrips †Triassothripidae (two fossil genera) Uzelothripidae (one species, Uzelothrips scabrosus) Suborder Tubulifera Phlaeothripidae (447 genera in two subfamilies, fungal hyphae and spore feeders) The identification of thrips to species is challenging as types are maintained as slide preparations of varying quality over time. There is also considerable variability leading to many species being misidentified. Molecular sequence-based approaches have increasingly been applied to their identification. Biology Feeding Thrips are believed to have descended from a fungus-feeding ancestor during the Mesozoic, and many groups still feed upon and inadvertently redistribute fungal spores. These live among leaf litter or on dead wood and are important members of the ecosystem, their diet often being supplemented with pollen. Other species are primitively eusocial and form plant galls and still others are predatory on mites and other thrips. Two species of Aulacothrips, A. tenuis and A. levinotus, have been found to be ectoparasites on aetalionid and membracid plant-hoppers in Brazil. Akainothrips francisi of Australia is a parasite within the colonies of another thrips species, Dunatothrips aneurae, which makes silken nests or domiciles on Acacia trees. A number of thrips in the subfamily Phlaeothripinae that specialize on Acacia hosts produce silk with which they glue together phyllodes to form domiciles inside which their semi-social colonies live. Mirothrips arbiter has been found in paper wasp nests in Brazil. The eggs of the hosts, including Mischocyttarus atramentarius, Mischocyttarus cassununga and Polistes versicolor, are eaten by the thrips. Thrips, especially in the family Aeolothripidae, are also predators, and are considered beneficial in the management of pests like codling moths. Most research has focused on thrips species that feed on economically significant crops. Some species are predatory, but most of them feed on pollen and the chloroplasts harvested from the outer layer of plant epidermal and mesophyll cells. They prefer tender parts of the plant, such as buds, flowers and new leaves. Besides feeding on plant tissues, the common blossom thrips feeds on pollen grains and on the eggs of mites. When the larva supplements its diet in this way, its development time and mortality are reduced, and adult females that consume mite eggs increase their fecundity and longevity. Pollination Some flower-feeding thrips pollinate the flowers they are feeding on, and some authors suspect that they may have been among the first insects to evolve a pollinating relationship with their host plants. Amber fossils of Gymnopollisthrips from the Early Cretaceous show them to be coated in Cycadopites-like pollen. Scirtothrips dorsalis carries pollen of commercially important chili peppers. 
Darwin found that thrips could not be kept out by any netting when he conducted experiments by keeping away larger pollinators. Thrips setipennis is the sole pollinator of Wilkiea huegeliana, a small, unisexual annually flowering tree or shrub in the rainforests of eastern Australia. T. setipennis serves as an obligate pollinator for other Australian rainforest plant species, including Myrsine howittiana and M. variabilis. The genus Cycadothrips is a specialist pollinator of cycads, which are normally wind pollinated, but some species of Macrozamia are able to attract thrips to male cones at some times and repel them so that they move to female cones. Thrips are likewise the primary pollinators of heathers in the family Ericaceae, and play a significant role in the pollination of pointleaf manzanita. Electron microscopy has shown thrips carrying pollen grains adhering to their backs, and their fringed wings are perfectly capable of allowing them to fly from plant to plant. Damage to plants Thrips can cause damage during feeding. This impact may fall across a broad selection of prey items, as there is considerable breadth in host affinity across the order, and even within a species, varying degrees of fidelity to a host. Family Thripidae in particular is notorious for members with broad host ranges, and the majority of pest thrips come from this family. For example, Thrips tabaci damages crops of onions, potatoes, tobacco, and cotton. Some species of thrips create galls, almost always in leaf tissue. These may occur as curls, rolls or folds, or as alterations to the expansion of tissues causing distortion to leaf blades. More complex examples cause rosettes, pouches and horns. Most of these species occur in the tropics and sub-tropics, and the structures of the galls are diagnostic of the species involved. A radiation of thrips species seems to have taken place on Acacia trees in Australia; some of these species cause galls in the petioles, sometimes fixing two leaf stalks together, while other species live in every available crevice in the bark. In Casuarina in the same country, some species have invaded stems, creating long-lasting woody galls. Social behaviour While poorly documented, chemical communication is believed to be important to the group. Anal secretions are produced in the hindgut, and released along the posterior setae as predator deterrents. In Australia, aggregations of male common blossom thrips have been observed on the petals of Hibiscus rosa-sinensis and Gossypium hirsutum; females were attracted to these groups, so it seems likely that the males were producing pheromones. In the phlaeothripids that feed on fungi, males compete to protect and mate with females, and then defend the egg-mass. Males fight by flicking their rivals away with their abdomen, and may kill with their foretarsal teeth. Small males may sneak in to mate while the larger males are busy fighting. In the Merothripidae and in the Aeolothripidae, males are again polymorphic with large and small forms, and probably also compete for mates, so the strategy may well be ancestral among the Thysanoptera. Many thrips form galls on plants when feeding or laying their eggs. Some of the gall-forming Phlaeothripidae, such as genera Kladothrips and Oncothrips, form eusocial groups similar to ant colonies, with reproductive queens and nonreproductive soldier castes. 
Flight Most insects create lift by the stiff-winged mechanism of insect flight with steady state aerodynamics; this creates a leading edge vortex continuously as the wing moves. The feathery wings of thrips, however, generate lift by clap and fling, a mechanism discovered by the Danish zoologist Torkel Weis-Fogh in 1973. In the clap part of the cycle, the wings approach each other over the insect's back, creating a circulation of air which sets up vortices and generates useful forces on the wings. The leading edges of the wings touch, and the wings rotate around their leading edges, bringing them together in the "clap". The wings close, expelling air from between them, giving more useful thrust. The wings rotate around their trailing edges to begin the "fling", creating useful forces. The leading edges move apart, making air rush in between them and setting up new vortices, generating more force on the wings. The trailing edge vortices, however, cancel each other out with opposing flows. Weis-Fogh suggested that this cancellation might help the circulation of air to grow more rapidly, by shutting down the Wagner effect which would otherwise counteract the growth of the circulation. Apart from active flight, thrips, even wingless ones, can also be picked up by winds and transferred long distances. During warm and humid weather, adults may climb to the tips of plants to leap and catch air current. Wind-aided dispersal of species has been recorded over 1600 km of sea between Australia and South Island of New Zealand. It has been suggested that some bird species may also be involved in the dispersal of thrips. Thrips are picked up along with grass in the nests of birds and can be transported by the birds. A hazard of flight for very small insects such as thrips is the possibility of being trapped by water. Thrips have non-wetting bodies and have the ability to ascend a meniscus by arching their bodies and working their way head-first and upwards along the water surface in order to escape. Life cycle Thrips lay extremely small eggs, about 0.2 mm long. Females of the suborder Terebrantia cut slits in plant tissue with their ovipositor, and insert their eggs, one per slit. Females of the suborder Tubulifera lay their eggs singly or in small groups on the outside surfaces of plants. Some thrips such as Elaphothrips tuberculatus are known to be facultatively ovoviviparous, retaining the eggs internally and giving birth to male offspring. Females in many species guard the eggs against cannibalism by other females as well as predators. Thrips are hemimetabolous, metamorphosing gradually to the adult form. The first two instars, called larvae or nymphs, are like small wingless adults (often confused with springtails) without genitalia; these feed on plant tissue. In the Terebrantia, the third and fourth instars, and in the Tubulifera also a fifth instar, are non-feeding resting stages similar to pupae: in these stages, the body's organs are reshaped, and wing-buds and genitalia are formed. The larvae of some species produce silk from the terminal abdominal segment which is used to line the cell or form a cocoon within which they pupate. The adult stage can be reached in around 8–15 days; adults can live for around 45 days. Adults have both winged and wingless forms; in the grass thrips Anaphothrips obscurus, for example, the winged form makes up 90% of the population in spring (in temperate zones), while the wingless form makes up 98% of the population late in the summer. 
Thrips can survive the winter as adults or through egg or pupal diapause. Thrips are haplodiploid with haploid males (from unfertilised eggs, as in Hymenoptera) and diploid females capable of parthenogenesis (reproducing without fertilisation), many species using arrhenotoky, a few using thelytoky. In Pezothrips kellyanus, females hatch from larger eggs than males, possibly because they are more likely to be fertilized. The sex-determining bacterial endosymbiont Wolbachia is a factor that affects the reproductive mode. Several normally bisexual species have become established in the United States with only females present. Human impact As pests Many thrips are pests of commercial crops due to the damage they cause by feeding on developing flowers or vegetables, causing discoloration, deformities, and reduced marketability of the crop. Some thrips serve as vectors for plant diseases, such as tospoviruses. Over 20 plant-infecting viruses are known to be transmitted by thrips, but perversely, fewer than a dozen of the described species are known to vector tospoviruses. These enveloped viruses are considered among the most damaging of emerging plant pathogens around the world, with those vector species having an outsized impact on human agriculture. Virus members include the tomato spotted wilt virus and the impatiens necrotic spot virus. The western flower thrips, Frankliniella occidentalis, has spread until it now has a worldwide distribution, and is the primary vector of plant diseases caused by tospoviruses. Other viruses that they spread include members of the genera Ilarvirus, Alphacarmovirus, Betacarmovirus, Gammacarmovirus, Sobemovirus and Machlomovirus. Their small size and predisposition towards enclosed places make them difficult to detect by phytosanitary inspection, while their eggs, laid inside plant tissue, are well-protected from pesticide sprays. When coupled with the increasing globalization of trade and the growth of greenhouse agriculture, thrips, unsurprisingly, are among the fastest-growing groups of invasive species in the world. Examples include F. occidentalis, Thrips simplex, and Thrips palmi. Flower-feeding thrips are routinely attracted to bright floral colors (including white, blue, and especially yellow), and will land and attempt to feed. It is not uncommon for some species (e.g., Frankliniella tritici and Limothrips cerealium) to "bite" humans under such circumstances. Although no species feed on blood and no known animal disease is transmitted by thrips, some skin irritation has been described. Management Thrips develop resistance to insecticides easily and there is constant research on how to control them. This makes thrips ideal as models for testing the effectiveness of new pesticides and methods. Due to their small sizes and high rates of reproduction, thrips are difficult to control using classical biological control. Suitable predators must be small and slender enough to penetrate the crevices where thrips hide while feeding, and they must also prey extensively on eggs and larvae to be effective. Only two families of parasitoid Hymenoptera parasitize eggs and larvae, the Eulophidae and the Trichogrammatidae. Other biocontrol agents of adults and larvae include anthocorid bugs of genus Orius, and phytoseiid mites. Biological insecticides such as the fungi Beauveria bassiana and Verticillium lecanii can kill thrips at all life-cycle stages. Insecticidal soap spray is effective against thrips. It is commercially available or can be made from certain types of household soap. 
Scientists in Japan report that significant reductions in larva and adult melon thrips occur when plants are illuminated with red light.
Biology and health sciences
Insects: General
Animals
371299
https://en.wikipedia.org/wiki/Qualitative%20research
Qualitative research
Qualitative research is a type of research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behavior. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis. Qualitative research methods have been used in sociology, anthropology, political science, psychology, communication studies, social work, folklore, educational research, information science and software engineering research. Background Qualitative research has been informed by several strands of philosophical thought and examines aspects of human life, including culture, expression, beliefs, morality, life stress, and imagination. Contemporary qualitative research has been influenced by a number of branches of philosophy, for example, positivism, postpositivism, critical theory, and constructivism. The historical transitions or 'moments' in qualitative research, together with the notion of 'paradigms' (Denzin & Lincoln, 2005), have received widespread attention over the past decades. However, some scholars have argued that the adoption of paradigms may be counterproductive and lead to less philosophically engaged communities. Approaches to inquiry The use of nonquantitative material as empirical data has been growing in many areas of the social sciences, including the learning sciences, developmental psychology and cultural psychology. Several philosophical and psychological traditions have influenced investigators' approaches to qualitative research, including phenomenology, social constructionism, symbolic interactionism, and positivism. Philosophical traditions Phenomenology refers to the philosophical study of the structure of an individual's consciousness and general subjective experience. Approaches to qualitative research based on constructionism, such as grounded theory, pay attention to how the subjectivity of both the researcher and the study participants can affect the theory that develops out of the research. The symbolic interactionist approach to qualitative research examines how individuals and groups develop an understanding of the world. Traditional positivist approaches to qualitative research seek a more objective understanding of the social world. Qualitative researchers have also been influenced by the sociology of knowledge and the work of Alfred Schütz, Peter L. Berger, Thomas Luckmann, and Harold Garfinkel. Sources of data Qualitative researchers use different sources of data to understand the topic they are studying. These data sources include interview transcripts, videos of social interactions, notes, verbal reports and artifacts such as books or works of art. The case study method exemplifies qualitative researchers' preference for depth, detail, and context. Data triangulation is also a strategy used in qualitative research. 
Autoethnography, the study of self, is a qualitative research method in which the researcher uses his or her personal experience to understand an issue. Grounded theory is an inductive type of research, based on ("grounded" in) a very close look at the empirical observations a study yields. Thematic analysis involves analyzing patterns of meaning. Conversation analysis is primarily used to analyze spoken conversations. Biographical research is concerned with the reconstruction of life histories, based on biographical narratives and documents. Narrative inquiry studies the narratives that people use to describe their experience. Data collection Qualitative researchers may gather information through observations, note-taking, interviews, focus groups (group interviews), documents, images and artifacts. Interviews Research interviews are an important method of data collection in qualitative research. An interviewer is usually a professional or paid researcher, sometimes trained, who poses questions to the interviewee, in an alternating series of usually brief questions and answers, to elicit information. Compared to something like a written survey, qualitative interviews allow for a significantly higher degree of intimacy, with participants often revealing personal information to their interviewers in a real-time, face-to-face setting. As such, this technique can evoke an array of significant feelings and experiences within those being interviewed. Sociologists Bredal, Stefansen and Bjørnholt identified three "participant orientations", that they described as "telling for oneself", "telling for others" and "telling for the researcher". They also proposed that these orientations implied "different ethical contracts between the participant and researcher". Participant observation In participant observation ethnographers get to understand a culture by directly participating in the activities of the culture they study. Participant observation extends further than ethnography and into other fields, including psychology. For example, by training to be an EMT and becoming a participant observer in the lives of EMTs, Palmer studied how EMTs cope with the stress associated with some of the gruesome emergencies they deal with. Recursivity In qualitative research, the idea of recursivity refers to the emergent nature of research design. In contrast to standardized research methods, recursivity embodies the idea that the qualitative researcher can change a study's design during the data collection phase. Recursivity in qualitative research procedures contrasts to the methods used by scientists who conduct experiments. From the perspective of the scientist, data collection, data analysis, discussion of the data in the context of the research literature, and drawing conclusions should be each undertaken once (or at most a small number of times). In qualitative research however, data are collected repeatedly until one or more specific stopping conditions are met, reflecting a nonstatic attitude to the planning and design of research activities. An example of this dynamism might be when the qualitative researcher unexpectedly changes their research focus or design midway through a study, based on their first interim data analysis. The researcher can even make further unplanned changes based on another interim data analysis. Such an approach would not be permitted in an experiment. 
Qualitative researchers would argue that recursivity in developing the relevant evidence enables the researcher to be more open to unexpected results and emerging new constructs. Data analysis Qualitative researchers have a number of analytic strategies available to them. Coding In general, coding refers to the act of associating meaningful ideas with the data of interest. In the context of qualitative research, interpretative aspects of the coding process are often explicitly recognized and articulated; coding helps to produce specific words or short phrases believed to be useful abstractions from the data. Pattern thematic analysis Data may be sorted into patterns for thematic analyses as the primary basis for organizing and reporting the study findings. Content analysis According to Krippendorf, "Content analysis is a research technique for making replicable and valid inference from data to their context" (p. 21). It is applied to documents and written and oral communication. Content analysis is an important building block in the conceptual analysis of qualitative data. It is frequently used in sociology. For example, content analysis has been applied to research on such diverse aspects of human life as changes in perceptions of race over time, the lifestyles of contractors, and even reviews of automobiles. Issues Computer-assisted qualitative data analysis software (CAQDAS) Contemporary qualitative data analyses can be supported by computer programs (termed computer-assisted qualitative data analysis software). These programs have been employed with or without detailed hand coding or labeling. Such programs do not supplant the interpretive nature of coding. The programs are aimed at enhancing analysts' efficiency at applying, retrieving, and storing the codes generated from reading the data. Many programs enhance efficiency in editing and revising codes, which allow for more effective work sharing, peer review, data examination, and analysis of large datasets. Common qualitative data analysis software includes: ATLAS.ti Dedoose (mixed methods) MAXQDA (mixed methods) NVivo QDA MINER A criticism of quantitative coding approaches is that such coding sorts qualitative data into predefined (nomothetic) categories that are reflective of the categories found in objective science. The variety, richness, and individual characteristics of the qualitative data are reduced or, even, lost. To defend against the criticism that qualitative approaches to data are too subjective, qualitative researchers assert that by clearly articulating their definitions of the codes they use and linking those codes to the underlying data, they preserve some of the richness that might be lost if the results of their research boiled down to a list of predefined categories. Qualitative researchers also assert that their procedures are repeatable, which is an idea that is valued by quantitatively oriented researchers. Sometimes researchers rely on computers and their software to scan and reduce large amounts of qualitative data. At their most basic level, numerical coding schemes rely on counting words and phrases within a dataset; other techniques involve the analysis of phrases and exchanges in analyses of conversations. A computerized approach to data analysis can be used to aid content analysis, especially when there is a large corpus to unpack. Trustworthiness A central issue in qualitative research is trustworthiness (also known as credibility or, in quantitative studies, validity). 
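The word- and phrase-counting approach to coding described above can be illustrated with a short sketch. The interview excerpts and the codebook below are invented for illustration only and do not correspond to any real dataset or to the API of any particular CAQDAS package.

```python
from collections import Counter
import re

# A tiny, made-up set of interview excerpts (illustrative only).
documents = [
    "I felt supported by my manager, but the workload was overwhelming.",
    "The workload kept growing; I did not feel supported at all.",
    "Flexible hours helped, although the workload still felt heavy.",
]

# Hypothetical codebook: each code maps to the words or phrases that signal it.
codebook = {
    "workload": ["workload", "overwhelming", "heavy"],
    "support": ["supported"],
    "flexibility": ["flexible hours"],
}

def code_counts(docs, codebook):
    """Count how often each code's indicator terms appear across all documents."""
    counts = Counter()
    for doc in docs:
        text = doc.lower()
        for code, terms in codebook.items():
            counts[code] += sum(
                len(re.findall(r"\b" + re.escape(term) + r"\b", text))
                for term in terms
            )
    return counts

print(code_counts(documents, codebook))
# Expected output: Counter({'workload': 5, 'support': 2, 'flexibility': 1})
```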
There are many ways of establishing trustworthiness, including member check, interviewer corroboration, peer debriefing, prolonged engagement, negative case analysis, auditability, confirmability, bracketing, and balance. Data triangulation and eliciting examples of interviewee accounts are two of the most commonly used methods of establishing the trustworthiness of qualitative studies. Transferability of results has also been considered as an indicator of validity. Limitations of qualitative research Qualitative research is not without limitations. These limitations include participant reactivity, the potential for a qualitative investigator to over-identify with one or more study participants, "the impracticality of the Glaser-Strauss idea that hypotheses arise from data unsullied by prior expectations," the inadequacy of qualitative research for testing cause-effect hypotheses, and the Baconian character of qualitative research. Participant reactivity refers to the fact that people often behave differently when they know they are being observed. Over-identifying with participants refers to a sympathetic investigator studying a group of people and ascribing, more than is warranted, a virtue or some other characteristic to one or more participants. Compared to qualitative research, experimental research and certain types of nonexperimental research (e.g., prospective studies), although not perfect, are better means for drawing cause-effect conclusions. Glaser and Strauss, influential members of the qualitative research community, pioneered the idea that theoretically important categories and hypotheses can emerge "naturally" from the observations a qualitative researcher collects, provided that the researcher is not guided by preconceptions. The ethologist David Katz wrote "a hungry animal divides the environment into edible and inedible things....Generally speaking, objects change...according to the needs of the animal." Karl Popper carrying forward Katz's point wrote that "objects can be classified and can become similar or dissimilar, only in this way--by being related to needs and interests. This rule applied not only to animals but also to scientists." Popper made clear that observation is always selective, based on past research and the investigators' goals and motives and that preconceptionless research is impossible. The Baconian character of qualitative research refers to the idea that a qualitative researcher can collect enough observations such that categories and hypotheses will emerge from the data. Glaser and Strauss developed the idea of theoretical sampling by way of collecting observations until theoretical saturation is obtained and no additional observations are required to understand the character of the individuals under study. Bertrand Russell suggested that there can be no orderly arrangement of observations such that a hypothesis will jump out of those ordered observations; some provisional hypothesis usually guides the collection of observations. In psychology Community psychology Autobiographical narrative research has been conducted in the field of community psychology. A selection of autobiographical narratives of community psychologists can be found in the book Six Community Psychologists Tell Their Stories: History, Contexts, and Narrative. Educational psychology Edwin Farrell used qualitative methods to understand the social reality of at-risk high school students. 
Later he used similar methods to understand the reality of successful high school students who came from the same neighborhoods as the at-risk students he wrote about in his previously mentioned book. Health psychology In the field of health psychology, qualitative methods have become increasingly employed in research on understanding health and illness and how health and illness are socially constructed in everyday life. Since then, a broad range of qualitative methods have been adopted by health psychologists, including discourse analysis, thematic analysis, narrative analysis, and interpretative phenomenological analysis. In 2015, the journal Health Psychology published a special issue on qualitative research (Gough, B., & Deatrick, J. A. (Eds.) (2015). Qualitative research in health psychology [special issue]. Health Psychology, 34(4)). Industrial and organizational psychology According to Doldor and colleagues, organizational psychologists extensively use qualitative research "during the design and implementation of activities like organizational change, training needs analyses, strategic reviews, and employee development plans." Occupational health psychology Although research in the field of occupational health psychology (OHP) has predominantly been quantitatively oriented, some OHP researchers have employed qualitative methods. Qualitative research efforts, if directed properly, can provide advantages for quantitatively oriented OHP researchers. These advantages include help with (1) theory and hypothesis development, (2) item creation for surveys and interviews, (3) the discovery of stressors and coping strategies not previously identified, (4) interpreting difficult-to-interpret quantitative findings, (5) understanding why some stress-reduction interventions fail and others succeed, and (6) providing rich descriptions of the lived lives of people at work (Schonfeld, I. S., & Farrell, E. (2010). Qualitative methods can enrich quantitative research on occupational stress: An example from one occupational group. In D. C. Ganster & P. L. Perrewé (Eds.), Research in occupational stress and wellbeing series, Vol. 8, New developments in theoretical and conceptual approaches to job stress (pp. 137-197). Bingley, UK: Emerald). Some OHP investigators have united qualitative and quantitative methods within a single study (e.g., Elfering et al., 2005); these investigators have used qualitative methods to assess job stressors that are difficult to ascertain using standard measures and well validated standardized instruments to assess coping behaviors and dependent variables such as mood. Social media psychology Since the advent of social media in the early 2000s, formerly private accounts of personal experiences have become widely shared with the public by millions of people around the world. Disclosures are often made openly, which has contributed to social media's key role in movements like the #metoo movement. The abundance of self-disclosure on social media has presented an unprecedented opportunity for qualitative and mixed methods researchers; mental health problems can now be investigated qualitatively more widely, at a lower cost, and with no intervention by the researchers. To take advantage of these data, researchers need to have mastered the tools for conducting qualitative research. Academic journals Consumption Markets & Culture Journal of Consumer Research Qualitative Inquiry Qualitative Market Research Qualitative Research The Qualitative Report
Physical sciences
Research methods
Basics and measurement
371406
https://en.wikipedia.org/wiki/Meadow
Meadow
A meadow ( ) is an open habitat or field, vegetated by grasses, herbs, and other non-woody plants. Trees or shrubs may sparsely populate meadows, as long as these areas maintain an open character. Meadows can occur naturally under favourable conditions, but are often artificially created from cleared shrub or woodland for the production of hay, fodder, or livestock. Meadow habitats, as a group, are characterized as "semi-natural grasslands", meaning that they are largely composed of species native to the region, with only limited human intervention. Meadows attract a multitude of wildlife, and support flora and fauna that could not thrive in other habitats. They are ecologically important as they provide areas for animal courtship displays, nesting, food gathering, pollinating insects, and sometimes sheltering, if the vegetation is high enough. Intensified agricultural practices (too frequent mowing, use of mineral fertilizers, manure and insecticides), may lead to declines in the abundance of organisms and species diversity. There are multiple types of meadows, including agricultural, transitional, and perpetual – each playing a unique and important part of the ecosystem. Like other biomes, meadows will experience increased pressure (including on their biodiversity) due to climate change, especially as precipitation and weather conditions change. However, grasslands and meadows also have an important climate change mitigation potential as carbon sinks; deep-rooted grasses store a substantial amount of carbon in soil. Types Agricultural meadows In agriculture, a meadow is grassland which is not regularly grazed by domestic livestock, but rather allowed to grow unchecked in order to produce hay. Their roots extend back to the Iron Age, when appropriate tools for the hay harvest emerged. The ability to produce livestock fodder on meadows had a significant advantage for livestock production, as animals could be kept in enclosures, simplifying the control over breeding. Surpluses in biomass production during the summer could be stored for the winter, preventing damages to forests and grasslands as there was no longer the need for livestock grazing during the winter. Especially in the United Kingdom and Ireland, the term meadow is commonly used in its original sense to mean a hay meadow, signifying grassland mown annually in the summer for making hay. Agricultural meadows are typically lowland or upland fields upon which hay or pasture grasses grow from self-sown or hand-sown seed. Traditional hay meadows were once common in rural Britain, but are now in decline. Ecologist Professor John Rodwell states that over the past century, England and Wales have lost about 97% of their hay meadows. Fewer than of lowland meadows remain in the UK and most sites are relatively small and fragmented. 25% of the UK's meadows are found in Worcestershire, with Foster's Green Meadow managed by the Worcestershire Wildlife Trust being a major site. A similar concept to the hay meadow is the pasture, which differs from the meadow in that it is grazed through the summer, rather than being allowed to grow out and periodically be cut for hay. A pasture can also refer to any land used for grazing, and in this wider sense the term refers not only to grass pasture but also to non-grassland habitats such as heathland, moorland and wood pasture. The term, grassland, is used to describe both hay meadows and grass pastures. The specific agricultural practices in relation to the meadow can take on various expressions. 
As mentioned, this could be hay production or the provision of food for grazing cattle and livestock, but also making room for orchards or honey production. Meadows are embedded in, and dependent on, a complex web of socio-cultural conditions for their maintenance. Historically, they emerged to increase agricultural efficiency when the necessary tools became available. Today, agricultural practices have shifted and meadows have largely lost their original purpose. Yet they are appreciated today for their aesthetics and ecological functions. Consequently, the European Union's Common Agricultural Policy subsidizes their management, mostly through grazing. Transitional meadows A transitional meadow occurs when a field, pasture, farmland, or other cleared land is no longer cut or grazed and starts to display luxuriant growth, extending to the flowering and self-seeding of its grass and wildflower species. The condition is, however, only temporary, because the grasses eventually become shaded out when scrub and woody plants become well-established, being the forerunners of the return to a fully wooded state. A transitional state can be artificially maintained through a double-field system, in which cultivated soil and meadows are alternated for a period of 10 to 12 years each. In North America prior to European colonization, Algonquians, Iroquois and other Native American peoples regularly cleared areas of forest to create transitional meadows where deer and game could find food and be hunted. For example, some of today's meadows originated thousands of years ago, due to regular burnings by Native Americans. Perpetual meadows A perpetual meadow, also called a natural meadow, is one in which environmental factors, such as climatic and soil conditions, are favorable to perennial grasses and restrict the growth of woody plants indefinitely. Types of perpetual meadows may include: alpine meadows, occurring at high elevations above the tree line and maintained by harsh climatic conditions; coastal meadows, maintained by salt spray; desert meadows, restricted by low precipitation or lack of nutrients and humus; prairies, maintained by periods of severe drought or subject to wildfires; and wet meadows (semi-wetland areas), saturated with water throughout much of the year. Urban meadow Recently, urban areas have been thought of as potential biodiversity conservation sites. The shift from urban lawns, which are widespread habitats in cities, to urban meadows is thought to provide greater refuges for plant and animal communities. Urban lawns require intensive management, especially frequent mowing, which puts the organisms living there at risk of losing their habitat. Reducing mowing frequency has been shown to have a clear positive effect on plant community diversity, which enables the switch from urban lawns to urban meadows. In the face of increased urbanization, the EU Biodiversity Strategy of 2017 stated that all ecosystems need protection in light of climate change. Most people living in the urban regions of a country get their plant knowledge from visiting parks and other public green infrastructure. Local authorities have the duty of providing green spaces for the public, but the responsible departments frequently suffer major budget cuts, making it more difficult for people to experience natural wildlife in urban areas and also impairing local ecosystems. 
In line with the increasing acceptance of a "messier urban aesthetic", perennial meadows can be seen as a more realistic alternative to classic urban lawns, as they would also be more cost-efficient to maintain. Factors that managers of urban spaces list as important to consider are: aesthetics and public reaction; locational context; human resources and economic sustainability; local politics; communication; biodiversity and existing habitat; and physical factors. Human intervention Artificially or culturally conceived meadows emerge from, and continually require, human intervention to persist and flourish. In many places, the natural, pristine populations of free-roaming large grazers are either extinct or very limited due to human activities. This reduces or removes their natural influence on the surrounding ecology and results in meadows only being created or maintained by human intervention. Existing meadows may gradually decline if not maintained by agricultural practices. Humankind has influenced the ecology and the landscape for millennia in many parts of the world, so it can sometimes be difficult to discern what is natural and what is cultural. Meadows are one example. However, meadows seem to have been sustained historically by naturally occurring large grazers, which kept plant growth in check and maintained the cleared space. As extensive farming like grazing is diminishing in some parts of the world, the meadow is endangered as a habitat. A number of research projects attempt to restore natural meadow habitats by reintroducing natural, large grazers, including deer, elk, goats and wild horses, depending on the location. A more exotic example with a wider scope is the European Tauros Programme. Some environmental organizations recommend converting lawns to meadows by stopping or reducing mowing. They claim that meadows better preserve biodiversity, conserve water, and reduce the need for fertilizers. For example, in 2018 environmental organizations, supported by England's Department for Environment, Food and Rural Affairs and concerned by the worldwide decline in bee numbers, issued recommendations on how to help bees on the first day of Bees' Needs Week 2018 (9–15 July). The recommendations include 1) growing flowers, shrubs, and trees, 2) letting the garden grow wild, 3) cutting grass less often, 4) leaving insect nests and hibernation spots alone, and 5) using pesticides only after careful consideration. Impact of tourism The impact of human activity has been noted to increase degradation of meadow soil. This has contributed to landslides in Sholas. For example, due to skiing activities and urbanization, the meadows of the town of Zakopane, Poland, were noted to have altered soil compositions. The soil's organic material had been depleted and was affected by chemicals from artificial snow meltwater and skiing machinery. Meadows and climate change Ecological consequences Climate change is altering temperature and precipitation patterns worldwide. The effects vary greatly by region, but generally temperatures tend to increase, snowpacks tend to melt earlier and many places tend to become drier. Many species respond to these changes by slowly moving their habitat upwards. The higher elevation brings lower mean temperatures and thus allows species to largely maintain their original habitat conditions. Another common response to changed environmental conditions is phenological adaptation. These adaptations include shifts in the timing of germination or blossoming. 
Other examples include changing migration patterns of migratory birds. These adaptations are primarily influenced by three drivers: increased temperature, changing precipitation patterns, and reduced snowpack with earlier melting. In meadows, as water becomes scarcer, less moisture is available to the plants. Flowering plants then grow poorly and provide less food for animals. Such changes in the vegetation could affect populations of buffalo as well as many other animals, including insects. Effects of higher temperatures Flowering plants can respond to temperature changes through either spatial or temporal shifts. A spatial shift refers to migration towards colder areas, often at higher altitudes. A temporal shift means that a plant may alter its phenology to blossom at a different time of the year. By shifting flowering towards early spring or late autumn, plants can restore their previous temperature conditions. These adaptations are limited, though. Spatial shifts may be difficult if the areas are already inhabited by other species, or when the plant relies on a specific hydrology or soil type. Other authors have shown that higher temperatures can increase total biomass, but temperature shocks and instability seem to have negative impacts on biodiversity. This even appears to be the case for multiyear species, which were previously considered to have a buffering effect against extreme weather events. Effects of changing precipitation patterns There is a variety of hydrological regimes for meadows, ranging from dry to humid, each yielding different plant communities adapted to the respective water supply. A shift in precipitation patterns has very different effects, depending on the type of meadow. Meadows that are either dry or wet appear to be rather resilient to change, as a moderate increase or decrease in precipitation does not radically alter their character. Mesic meadows, with a moderate supply of water, do change their character, as it is easier to tip them into a different regime. Dry meadows in particular are threatened by the invasion of shrubs and other woody plants and a decreasing prevalence of flowering forbs, whereas hydric sites tend to lose woody species. Due to drier upper soil layers, forbs with shallow roots have difficulty obtaining enough water. Woody plants, in contrast, with their deeper-reaching root systems, can still extract water stored in lower soil layers and are able to sustain themselves through longer drought periods using their stored water reserves. In the longer term, changing hydrologic regimes may also facilitate the establishment of invasive species that are better adapted to the new conditions. The effects are already quite visible; an example is the replacement of alpine meadows in the southern Himalayas by shrubland. Climate change appears to be an important driver of this process. Wetter winters, in contrast, might increase total biomass, but favour already competitive species. By harming specialised plants and promoting the prevalence of more generalist species, more unstable precipitation patterns could also reduce ecological biodiversity. Effects of reduced snowpacks Snow cover is directly related to changes in temperature, precipitation and cloud cover. Still, changes in the timing of snowmelt seem to be, particularly in alpine regions, an important determinant of phenological responses. 
There are even data suggesting that the impact of snowmelt timing is greater than that of warming alone. Earlier snowmelt is not uniformly positive for plants, though, as moisture supplied by snowmelt might be missing later in the year. Additionally, it might allow for longer periods of seed predation. The lack of an insulating snow cover is also problematic, as springtime frost events might then have a larger negative impact. Effects on ecological communities All the drivers mentioned above give rise to complex, non-linear community responses. These responses can be disentangled by looking at multiple climate drivers and species together. As different species show varying degrees of phenological response, the consequence is a so-called phenological reassembly, in which the structure of the ecosystem changes fundamentally. Phenological responses in the blossoming periods of certain plants may no longer coincide with the phenological shifts of their pollinators, or the growing periods of interdependent plant communities may start to diverge. A study of meadows in the Rocky Mountains revealed the emergence of a mid-season period with little floral activity. Specifically, the study identified that the typical mid-summer floral peak was composed of several consecutive peaks in dry, mesic and wet meadow systems. Phenological responses to climate change have caused these distinct peaks to diverge, leading to a gap during mid-summer. This poses a threat to pollinators relying on a continuous supply of floral resources. As ecological communities are often highly adapted to local circumstances which cannot be reproduced at higher elevations, Debinski et al. describe the short-term changes observed in meadows "as a shift in the mosaic of the landscape composition". Therefore, it is important to monitor not only how specific species respond to climate change, but also to investigate them in the context of the different habitats in which they occur. Phenological reassembly Animals as well as plants are responding rapidly to anthropogenic global warming; changes in the number of individuals, in habitat occupancy and in reproductive cycles are among the strategies for adapting to these severe and unpredictable environmental alterations. The different types of meadows around the planet are distinct communities of perennial and annual plants that constantly interact with each other to survive and reproduce. The timing and duration of flowering is one aspect of phenological reassembly, driven by many different factors such as snowmelt, temperature and soil moisture. The changes that a plant or animal may undergo depend on the topography, altitude and latitude of its habitat. It is important to monitor plants properly because they are among the best bioindicators of how climate change is affecting the planet. Flowering phenology is one of the most important features enabling a plant to survive adversity. Thanks to modern techniques and constant monitoring, it is possible to determine which ecological strategies plants are using to reproduce. In alpine meadows of eastern Tibet, notable differences and similarities were observed between annual and perennial plants: in perennials the peak flowering date was directly proportional to flowering duration, while in annuals it was inversely proportional. This is just one of many relationships between phenology, functional traits and the environment that determine survival. 
Extreme weather Climate change is increasing temperatures all over the world, and boreal regions are particularly susceptible to noticeable changes. An experiment was conducted to monitor the reaction of alpine arctic meadow plants to different patterns of increased temperature. This experiment was based on vascular plants that live in arctic and subarctic environments, examined at three different levels of vegetation: the canopy layer, the bottom layer and functional groups. It is crucial to keep in mind that these plants usually share their space, and constantly interact, with bryophytes, lichens, arthropods, animals and many other organisms. The result showed that plants exposed to a constant warming pattern had time to reach thermal acclimation, achieving a net carbon gain by intensifying photosynthesis while only slightly increasing respiration. However, plants that experience changes of any kind (not only rising or falling temperatures) over a short period are more likely to die because they do not have enough time to reach thermal acclimation. Meadow restorations Carbon storage in meadows Meadows can act as substantial sinks and sources of organic carbon, holding vast quantities of it in the soil. The fluxes of carbon depend mainly on the natural cycle of carbon uptake and efflux, which interplays with seasonal variations (e.g. non-growing vs growing season). The wide range of meadow subtypes have in turn differing attributes (such as plant configurations) affecting an area's ability to act as a sink; seagrass meadows, for instance, are identified as some of the more important sinks in the global carbon cycle. In seagrass meadows, enhanced production of other greenhouse gases (CH4 and N2O) does occur, but the estimated overall effect is an offset of total emissions. Meanwhile, a common driver of meadow loss (apart from direct alteration through human development) is climate change, which consequently increases carbon emissions and raises the topic of restoration projects; in some cases, meadow restorations have already been initiated (e.g. the Zostera marina meadows in Virginia, U.S.A.). Grassland degradation Where grassland degradation has occurred, significant alterations to the carbon dioxide efflux during the non-growing season may take place. Both climate change and overgrazing factor into the degradation. As exemplified by the alpine wetland meadows of the Qinghai-Tibetan Plateau, such sites can be both a moderate source of CO2 and a carbon sink, owing to their high soil organic content and low decomposition rates. As these dynamics have been quantified, however, the effects of degradation have become more tangible. A strong connection between grassland degradation and soil carbon loss has been observed, indicating that degradation stimulates the release of carbon dioxide. This in turn indicates a climate change mitigation potential in restoring degraded grassland. Cap-and-trade Being a market-based regulation of emissions, the cap-and-trade system can sometimes incorporate restoration projects for climate mitigation. For example, the cap-and-trade program in California is looking at how meadow restorations can be incorporated into its system for reducing carbon emissions. Audubon's preliminary studies point to the potential of restored meadows to store substantially more soil carbon than degraded meadows while boosting local biodiversity. 
More recently, during the COVID-19 pandemic, difficulties with restoration became apparent: during the first years, areas under restoration are vulnerable to outside disruption, such as meadow management being put on hold at a time when the ecosystem is most sensitive, for example to invasive species.
Physical sciences
Grasslands
null
371461
https://en.wikipedia.org/wiki/Acorus
Acorus
Acorus is a genus of monocot flowering plants. This genus was once placed within the family Araceae (aroids), but more recent classifications place it in its own family Acoraceae and order Acorales, of which it is the sole genus of the oldest surviving line of monocots. Some older studies indicated that it was placed in a lineage (the order Alismatales), that also includes aroids (Araceae), Tofieldiaceae, and several families of aquatic monocots (e.g., Alismataceae, Posidoniaceae). However, modern phylogenetic studies demonstrate that Acorus is sister to all other monocots. Common names include calamus and sweet flag. The genus is native to North America and northern and eastern Asia, and naturalised in southern Asia and Europe from ancient cultivation. The known wild populations are diploid except for some tetraploids in eastern Asia, while the cultivated plants are sterile triploids, probably of hybrid origin between the diploid and tetraploid forms. Characteristics The inconspicuous flowers are arranged on a lateral spadix (a thickened, fleshy axis). Unlike aroids, there is no spathe (large bract, enclosing the spadix). The spadix is 4–10 cm long and is enclosed by the foliage. The bract can be ten times longer than the spadix. The leaves are linear with entire margin. Taxonomy Although the family Acoraceae was originally described in 1820, since then Acorus has traditionally been included in Araceae in most classification systems, as in the Cronquist system. The family has recently been resurrected as molecular systematic studies have shown that Acorus is not closely related to Araceae or any other monocot family, leading plant systematists to place the genus and family in its own order. This placement currently lacks support from traditional plant morphology studies, and some taxonomists still place it as a subfamily of Araceae, in the order Alismatales. The APG III system recognizes order Acorales, distinct from the Alismatales, and as the sister group to all other monocots. This relationship is confirmed by more recent phylogenetic studies. Treatment in the APG IV system is unchanged from APG III. Species In older literature and on many websites, there is still much confusion, with the name Acorus calamus equally but wrongfully applied to Acorus americanus (formerly Acorus calamus var. americanus). As of July 2014, the Kew Checklist accepts only 2 species, one of which has three accepted varieties: Acorus calamus L. – common sweet flag; sterile triploid (3n = 36); probably of cultivated origin. It is native to Europe, temperate India and the Himalayas and southern Asia, widely cultivated and naturalised elsewhere. Acorus calamus var. angustatus Besser - Siberia, China, Russian Far East, Japan, Korea, Mongolia, Himalayas, Indian Subcontinent, Indochina, Philippines, Indonesia Acorus calamus var. calamus - Siberia, Russian Far east, Mongolia, Manchuria, Korea, Himalayas; naturalized in Europe, North America, Java and New Guinea Acorus americanus Raf. - Canada, northern United States, Buryatiya region of Russia Acorus gramineus Sol. ex Aiton – Japanese sweet flag or grassy-leaved sweet flag; fertile diploid (2n = 18); - China, Himalayas, Japan, Korea, Indochina, Philippines, Primorye Acorus from Europe, China and Japan have been planted in the United States. 
Etymology The name 'acorus' is derived from the Greek word 'acoron', a name used by Dioscorides, which in turn was derived from 'coreon', meaning 'pupil', because it was used in herbal medicine as a treatment for inflammation of the eye. Distribution and habitat These plants are found in wetlands, particularly marshes, where they spread by means of thick rhizomes. Like many other marsh plants, they depend upon aerenchyma to transport oxygen to the rooting zone. They frequently occur on shorelines and floodplains where water levels fluctuate seasonally. Ecology The native North American species appears in many ecological studies. Compared to other species of wetland plants, they have relatively high competitive ability. Although many marsh plants accumulate large banks of buried seeds, seed banks of Acorus may not accumulate in some wetlands owing to low seed production. The seeds appear to be adapted to germinate in clearings; after a period of cold storage, the seeds will germinate after seven days of light with fluctuating temperature, and somewhat longer under constant temperature. A comparative study of its life history traits classified it as a "tussock interstitial", that is, a species that has a dense growth form and tends to occupy gaps in marsh vegetation, not unlike Iris versicolor. Toxicity Products derived from Acorus calamus were banned in 1968 as food additives by the United States Food and Drug Administration. The questionable chemical derived from the plant was β-asarone. Confusion exists whether all strains of A. calamus contain this substance. Four varieties of A. calamus strains exist in nature: diploid, triploid, tetraploid and hexaploid. Diploids do not produce the carcinogenic β-asarone. Diploids are known to grow naturally in Eastern Asia (Mongolia and C Siberia) and North America. The triploid cytotype probably originated in the Himalayan region, as a hybrid between the diploid and tetraploid cytotypes. The North American Calamus is known as Acorus calamus var. americanus or more recently as simply Acorus americanus. Like the diploid strains of A. calamus in parts of the Himalayas, Mongolia, and C Siberia, the North American diploid strain does not contain the carcinogenic β-asarone. Research has consistently demonstrated that "β-asarone was not detectable in the North American spontaneous diploid Acorus [calamus var. americanus]". Uses The parallel-veined leaves of some species contain ethereal oils that give a sweet scent when dried. Fine-cut leaves used to be strewn across the floor in the Middle Ages, both for the scent, and for presumed efficacy against pests.
Biology and health sciences
Acorales
Plants
371462
https://en.wikipedia.org/wiki/Fraunhofer%20lines
Fraunhofer lines
The Fraunhofer lines are a set of spectral absorption lines. They are dark absorption lines, seen in the optical spectrum of the Sun, and are formed when atoms in the solar atmosphere absorb light being emitted by the solar photosphere. The lines are named after German physicist Joseph von Fraunhofer, who observed them in 1814. Discovery In 1802, English chemist William Hyde Wollaston was the first person to note the appearance of a number of dark features in the solar spectrum. In 1814, Joseph von Fraunhofer independently rediscovered the lines and began to systematically study and measure their wavelengths. He mapped over 570 lines, designating the most prominent with the letters A through K and weaker lines with other letters. Modern observations of sunlight can detect many thousands of lines. About 45 years later, Gustav Kirchhoff and Robert Bunsen noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated chemical elements. They inferred that dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. Some of the other observed features were instead identified as telluric lines originating from absorption by oxygen molecules in the Earth's atmosphere. Sources The Fraunhofer lines are typical spectral absorption lines. Absorption lines are narrow regions of decreased intensity in a spectrum, which are the result of photons being absorbed as light passes from the source to the detector. In the Sun, Fraunhofer lines are a result of gas in the Sun's atmosphere and outer photosphere. These regions have lower temperatures than gas in the inner photosphere, and absorb some of the light emitted by it. Naming The major Fraunhofer lines, and the elements they are associated with, are shown in the following table: The Fraunhofer C, F, G′, and h lines correspond to the alpha, beta, gamma, and delta lines of the Balmer series of emission lines of the hydrogen atom. The Fraunhofer letters are now rarely used for those lines. The D1 and D2 lines form a pair known as the "sodium doublet", the centre wavelength of which (589.29 nm) is given the designation letter "D". This historical designation has stuck and is given to all the transitions between the ground state and the first excited state of the other alkali atoms as well. The D1 and D2 lines correspond to the fine-structure splitting of the excited states. The Fraunhofer H and K letters are also still used for the calcium doublet in the violet part of the spectrum, important in astronomical spectroscopy. There is disagreement in the literature for some line designations; for example, the Fraunhofer d line may refer to the cyan iron line at 466.814 nm, or alternatively to the yellow helium line (also labeled D3) at 587.5618 nm. Similarly, there is ambiguity regarding the e line, since it can refer to the spectral lines of both iron (Fe) and mercury (Hg). In order to resolve ambiguities that arise in usage, ambiguous Fraunhofer line designations are preceded by the element with which they are associated (e.g., Mercury e line and Helium d line). Because of their well-defined wavelengths, Fraunhofer lines are often used to specify standard wavelengths for characterising the refractive index and dispersion properties of optical materials.
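One common use of the Fraunhofer wavelengths in optics is the Abbe number, which quantifies a glass's dispersion from its refractive indices at the C, d and F lines. The short sketch below is illustrative only: the formula is standard, but the index values are approximate figures typical of a borosilicate crown glass rather than data drawn from this article.

    def abbe_number(n_d, n_F, n_C):
        # V_d = (n_d - 1) / (n_F - n_C); a higher value means lower dispersion.
        return (n_d - 1.0) / (n_F - n_C)

    # Approximate refractive indices at the Fraunhofer C (656.3 nm), d (587.6 nm)
    # and F (486.1 nm) lines for a typical borosilicate crown glass (illustrative).
    n_C, n_d, n_F = 1.5143, 1.5168, 1.5224
    print(round(abbe_number(n_d, n_F, n_C), 1))  # ~63.8, i.e. a low-dispersion glass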
Physical sciences
Basics
Astronomy
371540
https://en.wikipedia.org/wiki/Acorus%20calamus
Acorus calamus
Acorus calamus (also called sweet flag, sway or muskrat root, among many other common names) is a species of flowering plant with psychoactive chemicals. It is a tall wetland monocot of the family Acoraceae, in the genus Acorus. Although used in traditional medicine over centuries to treat digestive disorders and pain, it has no clinical evidence of safety or efficacy and may be toxic if ingested, and so has been commercially banned in the United States. Description Sweet flag is a herbaceous perennial, tall. Its leaves resemble those of the iris family. Sweet flag consists of tufts of basal leaves that rise from a spreading rhizome. The leaves are erect yellowish-brown, radical, with pink sheathing at their bases, sword-shaped, flat and narrow, tapering into a long, acute point, and have parallel veins. The leaves have smooth edges, which can be wavy or crimped. The sweet flag can be distinguished from iris and other similar plants by the crimped edges of the leaves, the fragrant odor it emits when crushed, and the presence of a spadix. Only plants that grow in water bear flowers. The fruit is a berry filled with mucus, which when ripe falls into the water and disperses by floating. The solid, triangular flower-stems rise from the axils of the outer leaves. A semi-erect spadix emerges from one side of the flower stem. The spadix is solid, cylindrical, tapers at each end, and is 5 to 10 cm in length. A covering spathe, as is usual with Araceae, is absent. The spadix is densely crowded with tiny greenish-yellow flowers. Each flower contains six petals and stamens enclosed in a perianth with six divisions, surrounding a three-celled, oblong ovary with a sessile stigma. The flowers are sweetly fragrant. In Europe, it flowers for about a month in late spring or early summer, but does not bear fruit. In Asia, it also fruits sparingly, and propagates itself mainly by growth of its rhizome, forming colonies. The branched, cylindrical, knobby rhizome is the thickness of a human finger and has numerous coarse fibrous roots below it. The exterior is brown and the interior white. Range and habitat Sweet flag grows in India, Nepal, central Asia, southern Russia and Siberia, Europe and North America. Habitats include edges of small lakes, ponds and rivers, marshes, swamps, and other wetlands. Names and etymology In addition to "sweet flag" and "calamus" other common names include beewort, bitter pepper root, calamus root, flag root, gladdon, myrtle flag, myrtle grass, myrtle root, myrtle sedge, pine root, sea sedge, sweet cane, sweet cinnamon, sweet grass, sweet myrtle, sweet root, sweet rush, sweet sedge and wada kaha. The generic name is the Latin word acorus, which is derived from the Greek άχόρου (áchórou) of Dioscorides (note different versions of the text have different spellings). The word άχόρου itself is thought to have been derived from the word κόρη (kóri), which means pupil (of an eye), because of the juice from the root of the plant being used as a remedy in diseases of the eye ('darkening of the pupil'). The specific name calamus is derived from Greek κάλαμος (kálamos, meaning "reed"), cognate to Latin culmus ("stalk") and Old English healm ("straw"), Arabic قَلَم (qálam, "pen"), in turn from Proto-Indo European *kole-mo- (thought to mean "grass" or "reed"). The name "sweet flag" refers to its sweet scent and its similarity to Iris species, which have been commonly known as flags in English since at least the late fourteenth century. 
History The plant was already mentioned in the Chester Beatty papyrus VI dating to approximately 1300 BC. The ancient Egyptians rarely mentioned the plant in medicinal contexts, but it was certainly used to make perfumes. Initially, Europeans confused the identity and medicinal uses of the Acorus calamus of the Romans and Greeks with their native Iris pseudacorus. Thus the Herbarius zu Teutsch, published at Mainz in 1485, describes and includes a woodcut of this iris under the name Acorus. This German book is one of three possible sources for the same error in the French Le Grant Herbier, written in 1486, 1488, 1498 or 1508, which was also published in an English translation as the Grete Herball by Peter Treveris in 1526; all of these contain the false identification printed in the Herbarius zu Teutsch. William Turner, writing in 1538, describes 'acorum' as "gladon or a flag, a yelowe floure delyce". The plant was introduced to Britain in the late 16th century. By at least 1596, true Acorus calamus was grown in Britain, as it is listed in The Catalogue, a list of plants John Gerard grew in his garden at Holborn. Gerard notes, "It prospereth exceeding well in my garden, but as yet beareth neither flowers nor stalke". Gerard lists the Latin name as Acorus verus, but it is evident there was still doubt about its veracity: in his 1597 herbal he lists the English common name as 'bastard calamus'. Carl O. Sauer reported that the tuber was already being used by North American Indians at the time of European contact. Taxonomy There are three cytotypic forms distinguished by chromosome number: a diploid form (2n=24), an infertile triploid form (2n=36), and a tetraploid form (see below). The triploid form is the most common and is thought to have arisen relatively recently in the Himalayan region through hybridisation of the diploid with the tetraploid. Probably indigenous to most of Asia, the triploid form Acorus calamus var. calamus (also known as var. vulgaris or var. verus) has now been introduced across Europe, Australia, New Guinea, South Africa, Réunion and North America. The tetraploid form Acorus calamus var. angustatus is native throughout Asia, from India to Japan and the Philippines and from Indonesia to Siberia. The diploid form Acorus americanus or Acorus calamus var. americanus is found in northern subarctic North America and scattered disjunct areas throughout the Mississippi Valley. It may not be native to some of these areas, Pre-Columbian populations are thought to have dispersed it across parts of the United States. Other diploids are found in Mongolia, central Siberia (Buryatia), Gilgit–Baltistan in Pakistan (claimed by India) and northern Himachal Pradesh in India. Currently the taxonomic position of the different forms is contested. The comprehensive taxonomic analysis in Plants of the World Online from 2023 considers all three forms to be distinct varieties of a single species. The Flora of North America publication considers the diploid form to be a distinct species; its analysis differentiates North American diploid forms from triploid and tetraploid varieties, and does not take into account the morphology of Asian forms of the diploid variety. Also, in older literature, the name Acorus americanus may be used indiscriminately for all forms of Acorus calamus occurring in North America, irrespective of cytological diversity (i.e. both the diploid and triploid forms). 
The treatment in the Flora of China from 2010 considers all varieties to be synonyms of a single taxonomically undifferentiated species, since characteristics that are treated as distinctive in the Flora of North America are subject to morphological overlap in Asian specimens. The primary morphological distinction between the triploid and the North American forms of the diploid is made by the number of prominent leaf veins, the diploid having a prominent midvein with equally raised secondary veins on both sides, the triploid having a single prominent midvein with the secondary veins barely distinct. According to the Flora of China, there is clear overlap in these characteristics and the different cytotypes are impossible to distinguish morphologically. Triploid plants are infertile and show an abortive ovary with a shrivelled appearance. This form will never form fruit (let alone seeds) and can only spread asexually. The tetraploid variety is usually known as Acorus calamus var. angustatus Besser. A number of synonyms are known, but some of those are contested as to which variety they belong. It is morphologically diverse, with some forms having very broad and some narrow leaves. It is also cytotypically diverse, with an array of different karyotypes. Chemistry Calamus leaves and rhizomes contain a volatile oil that gives a characteristic odor and flavor. Major components of the oil are beta-asarone (as much as 75%), methyl isoeugenol (as much as 40%) and alpha-asarone, saponins, lectins, sesquiterpenoids, lignans, and steroids. Phytochemicals in the plant vary according to geographic location, plant age, climate, species variety, and plant component extracted. Diploids do not contain beta-asarone. Safety and regulations A. calamus and products derived from A. calamus (such as its oil) were banned from use as human food or as a food additive in 1968 by the United States Food and Drug Administration. Although limits on consumption in food or alcoholic beverages (115 micrograms per day) were recommended in a 2001 ruling by the European Commission, the degree of safe exposure remained undefined. Toxicity Although calamus has been used for its fragrance and ingested, it has not been studied by rigorous clinical research. Individual medical reports of toxicity mention severe nausea and prolonged vomiting over many hours following oral uses. Laboratory studies of its extracts indicate other forms of toxicity, due mainly to the emetic compound β-asarone. Allegedly, the plant is psychoactive (hallucinogenic), but for example all experiments with American calamus have been completely unsuccessful, even those involving very high dosages (up to 300 g of rhizomes). Uses A. calamus has been an item of trade in many cultures for centuries. It has been used medicinally for a wide variety of ailments, such as gastrointestinal diseases and treating pain, and its aroma makes calamus essential oil valued in the perfume industry. The essence from the rhizome is used as a flavor for foods, alcoholic beverages, and bitters in Europe. It was also once used to make candy. Food The young stalks can be pulled when under ; the inner stems can be eaten raw. The roots can be washed, peeled, cut into small pieces, boiled, and simmered in syrup to make candy. In herbal medicine Sweet flag has a long history of use in Chinese, Nepalese, and Indian herbal traditions. Sweet flag was and is used as an herbal medicine by the Chipewyan people. Horticulture This plant is sometimes used as a pond plant in horticulture. 
There is at least one tetraploid ornamental cultivar known; it is usually called 'Variegatus', but the RHS recommends calling it 'Argenteostriatus'. Insecticide and antifungal The asarone from A. calamus, found most abundantly in the dried and pulverized roots, has been identified as having insecticidal properties. β-asarone also exhibits anti-fungal activity by inhibiting ergosterol biosynthesis in Aspergillus niger. However, asarone's toxicity and carcinogenicity in mammals (including humans) means that it may be difficult to develop any practical medications or insecticides based on it.
Biology and health sciences
Acorales
Plants
371549
https://en.wikipedia.org/wiki/Plethodontidae
Plethodontidae
Plethodontidae, or lungless salamanders, are a family of salamanders. With over 500 species, lungless salamanders are by far the largest family of salamanders in terms of their diversity. Most species are native to the Western Hemisphere, from British Columbia to Brazil. Only two extant genera occur in the Eastern Hemisphere: Speleomantes (native to Sardinia and mainland Europe south of the Alps) and Karsenia (native to South Korea). Biology Adult lungless salamanders have four limbs, with four toes on the fore limbs, and usually with five on the hind limbs. Within many species, mating and reproduction occur solely on land. Accordingly, many species also lack an aquatic larval stage, a phenomenon known as direct development in which the offspring hatch as fully-formed, miniature adults. Direct development is correlated with changes in the developmental characteristics of plethodontids compared to other families of salamanders including increases in egg size and duration of embryonic development. Additionally, the evolutionary loss of the aquatic larval stage is related to a diminishing dependence on aquatic habitats for reproduction. The lift of this constraint allowed widespread colonization and diversification within a broad number of terrestrial habitats which is a testament to the high success and proliferation of Plethodontidae. Despite the absence of lungs, some can grow rather large. The largest species of lungless salamanders, Bell's false brook salamander, can reach lengths of . Many species have a projectile tongue and hyoid apparatus, which they can fire almost a body length at high speed to capture prey. Measured in individual numbers, they are very successful animals where they occur. In some places, they make up the dominant biomass of vertebrates. An estimated 1.88 billion individuals of the southern redback salamander inhabit just one district of Mark Twain National Forest alone, about 1,400 tons of biomass. Due to their modest size and low metabolism, they are able to feed on prey such as springtails, which are usually too small for other terrestrial vertebrates. This gives them access to a whole ecological niche with minimal competition from other groups. Courtship and mating Plethodontids exhibit highly stereotyped and complex mating behaviors and courtship rituals that are not present in any other salamander family. Mating behavior tends to be uniform among all plethodontids and typically involves a tail-straddle walk in which the female orients her head at the base of the male's tail while also straddling the tail with her body. The male will twist his body around and deposit a sperm capsule, known as the spermatophore, on the substrate in front of the female's snout. As the male leads the female over the spermatophore with his tail, the female lowers her cloaca onto the spermatophore and lodges the sperm mass inside while leaving the base of the spermatophore on the ground. Within many species of plethodontidae, the courtship ritual is often accompanied by transfer of male pheromones during the tail-straddling walk. During the breeding period, males will grow enlarged anterior teeth used to scratch the female's skin on her head as a part of the courtship ritual. Subsequently, the male will rub pheromones onto the abraded spot which are secreted from a pad of tissue called the mental gland located underneath the male's chin. Courtship pheromones greatly increase male mating success for a variety of reasons. 
Overall, the pheromone secretions increase female receptivity to courtship and sperm transfer. This not only increases the likelihood of successful mating with a specific female, but also shortens the duration of courtship, which is important because it minimizes the chance of the male being interrupted by other competing males. In scientific literature discussing the variations between the mental glands of plethodontid salamanders, it was discovered that male plethodontids had minor variations in height and diameter of the simple tubular glands, and major variation was found in the diameter of the secretory granules. This is attributed to the fact that males can mate throughout all months of the year, while females oviposit seasonally. Respiration A number of features distinguish the plethodontids from other salamanders. Most significantly, they lack lungs, conducting respiration through their skin and the tissues lining their mouths. Some species of cave salamanders are neotenic, and keep their larval gills even as adults. Gills are absent in all other adult plethodontids. Plethodontids possess costal grooves on the trunk of their bodies. These help keep the skin moist via water transport over the surface of the body. Plethodontid salamanders are almost entirely reliant on cutaneous respiration. Approximately 83%–93% of oxygen uptake is through this method. Plethodontid respiration rates are constrained by their surface-area-to-volume ratio (SA:V), and higher SA:Vs are correlated with warmer, wetter climates. Plethodontids are constantly exposed to air or water, which allows for constant gas exchange that is not limited by ventilation. Oxygen uptake is identical in water and air, assuming the partial pressure of oxygen is the same. Oxygenated and non-oxygenated blood are mixed together in the venous system, which causes the partial pressure of oxygen within cardiac blood to typically be low. Plethodontids can tolerate hypoxia for prolonged periods by reducing their metabolic rate instead of by relying on anaerobic cutaneous respiration, as initially theorized. Plethodontids have been observed to develop rudimentary lungs as embryos. The lung rudiment develops similarly to that of non-plethodontid salamanders for the first three weeks of development and then begins to regress through apoptosis. A paralogue of the SFTPC gene, which is expressed exclusively in the lungs in other vertebrates, is in lungless salamanders expressed in the larval integument instead. When going through metamorphosis, it disappears from the integument and appears in the buccopharynx in adults. It is suggested that the gene facilitates extrapulmonary respiration through the production of pulmonary surfactant-like secretions. Chemoreception Another distinctive feature is the presence of a vertical slit between the nostril and upper lip, known as the "nasolabial groove". The groove is lined with glands, and enhances the salamander's chemoreception, which is correlated with a higher degree of olfactory lobe and nasal mucous membrane development in plethodontids. The presence of this specialized structure is likely related to the absence of lungs in these salamanders. Though some lunged salamanders do exhibit similar structures, they are reduced in size and are not arranged near the nostrils (i.e. nares) in the same fashion as plethodontids. 
Because plethodontids cannot generate air pressure by expelling air from the lungs through the nares, they face the challenge of removing water and debris from the nasal passages, which otherwise has the potential to significantly limit olfaction. As such, the nasolabial grooves are structured in a way that maximizes drainage from the nose. The groove is deeper and narrower directly around the nares, and the orifices of the glands are slightly elevated, both of which aid the gravitational flow of fluid from the nares and nasal depression. Additionally, the nasolabial glands around the margins of the nares secrete a fatty film, which further encourages the removal of water from the nasal passages due to differences in polarity between water and the lipid secretions. Evolutionary history Plethodontidae are estimated to have split from their sister group Amphiumidae around the K-Pg boundary, and to have diversified during the Paleogene. The origin region of the family is North America, with the oldest European members of the family known from the Middle Miocene of Slovakia. Subfamilies and genera The family Plethodontidae consists of two extant subfamilies and about 516 to 520 species divided among 29 genera, making up the majority of known salamander species: Following a major revision in 2006, the genus Haideotriton was found to be a synonym of Eurycea, while the genus Lineatriton was made a synonym of Pseudoeurycea. A single hemidactyliine (Palaeoplethodon) is known from Miocene fossil remains preserved in Dominican amber, marking the only record of salamanders in the Caribbean. Conservation status
Biology and health sciences
Salamanders and newts
Animals
371585
https://en.wikipedia.org/wiki/Pine%20squirrel
Pine squirrel
Pine squirrels are squirrels of the genus Tamiasciurus, in the Sciurini tribe, of the large family Sciuridae. Species This genus includes three species: Tamiasciurus douglasii — Douglas squirrel T. d. mearnsi — Mearns's squirrel Tamiasciurus fremonti — southwestern red squirrel T. f. grahamensis — Mount Graham red squirrel Tamiasciurus hudsonicus — American red squirrel All three species are native to North America. Pine squirrels can be found in the northern and western United States, most of Canada, Alaska, and northwestern Mexico. Description Pine squirrels, Tamiasciurus species, are small tree squirrels with bushy tails. Along with members of the genus Sciurus, they are members of the Sciurini tribe. The name Tamiasciurus comes from the Greek ταμίας tamías, 'steward, dispenser', and σκίουρος skíouros, 'squirrel'. The American red squirrel should not be confused with the Eurasian red squirrel (Sciurus vulgaris) — both are usually just referred to as the "red squirrel" in their home continents. Pine squirrels rely on a variety of food sources including fungi, plants, arthropods and tree seeds.
Biology and health sciences
Rodents
Animals
371656
https://en.wikipedia.org/wiki/Bornite
Bornite
Bornite, also known as peacock ore, is a sulfide mineral with the chemical composition Cu5FeS4 that crystallizes in the orthorhombic system (pseudo-cubic). It is an important copper ore. Appearance Bornite has a brown to copper-red color on fresh surfaces that tarnishes to various iridescent shades of blue to purple in places. Its striking iridescence gives it the nickname peacock copper or peacock ore. Mineralogy Bornite is an important copper ore mineral and occurs widely in porphyry copper deposits along with the more common chalcopyrite. Chalcopyrite and bornite are both typically replaced by chalcocite and covellite in the supergene enrichment zone of copper deposits. Bornite is also found as disseminations in mafic igneous rocks, in contact metamorphic skarn deposits, in pegmatites and in sedimentary cupriferous shales. It is important as an ore for its copper content of about 63 percent by mass. Structure At temperatures above , the structure is isometric with a unit cell that is about 5.50 Å on an edge. This structure is based on cubic close-packed sulfur atoms, with copper and iron atoms randomly distributed into six of the eight tetrahedral sites located in the octants of the cube. With cooling, the Fe and Cu become ordered, so that 5.5 Å subcells in which all eight tetrahedral sites are filled alternate with subcells in which only four of the tetrahedral sites are filled; symmetry is reduced to orthorhombic. Composition Substantial variation in the relative amounts of copper and iron is possible and solid solution extends towards chalcopyrite (CuFeS2) and digenite (Cu9S5). Exsolution of blebs and lamellae of chalcopyrite, digenite, and chalcocite is common. Form and twinning Rare crystals are approximately cubic, dodecahedral, or octahedral. Usually massive. Penetration twinning occurs on the {111} crystallographic direction. Occurrence It occurs globally in copper ores with notable crystal localities in Butte, Montana and at Bristol, Connecticut in the U.S. It is also collected from the Carn Brea mine, Illogan, and elsewhere in Cornwall, England. Large crystals are found in the Frossnitz Alps, eastern Tirol, Austria; the Mangula mine, Lomagundi district, Zimbabwe; the N'ouva mine, Talate, Morocco; the West Coast of Tasmania; and in Dzhezkazgan, Kazakhstan. There are also traces of it found amongst the hematite in the Pilbara region of Western Australia. History and etymology It was first described in 1725 for an occurrence in the Ore Mountains, Bohemia, in what is now the Karlovy Vary Region of the Czech Republic. It was named in 1845 for Austrian mineralogist Ignaz von Born.
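The quoted copper content of about 63 percent by mass follows directly from the Cu5FeS4 formula. The snippet below is only a back-of-the-envelope check using rounded standard atomic masses.

    # Back-of-the-envelope check of bornite's copper content from the formula Cu5FeS4.
    ATOMIC_MASS = {"Cu": 63.546, "Fe": 55.845, "S": 32.06}  # g/mol, rounded
    FORMULA = {"Cu": 5, "Fe": 1, "S": 4}

    molar_mass = sum(ATOMIC_MASS[el] * count for el, count in FORMULA.items())
    cu_fraction = ATOMIC_MASS["Cu"] * FORMULA["Cu"] / molar_mass
    print(f"{cu_fraction:.1%}")  # roughly 63.3% copper by mass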
Physical sciences
Minerals
Earth science
371786
https://en.wikipedia.org/wiki/Iridescence
Iridescence
Iridescence (also known as goniochromism) is the phenomenon of certain surfaces that appear gradually to change colour as the angle of view or the angle of illumination changes. Iridescence is caused by wave interference of light in microstructures or thin films. Examples of iridescence include soap bubbles, feathers, butterfly wings and seashell nacre, and minerals such as opal. Pearlescence is a related effect where some or most of the reflected light is white. The term pearlescent is used to describe certain paint finishes, usually in the automotive industry, which actually produce iridescent effects. Etymology The word iridescence is derived in part from the Greek word ἶρις îris (gen. ἴριδος íridos), meaning rainbow, and is combined with the Latin suffix -escent, meaning "having a tendency toward". Iris in turn derives from the goddess Iris of Greek mythology, who is the personification of the rainbow and acted as a messenger of the gods. Goniochromism is derived from the Greek words gonia, meaning "angle", and chroma, meaning "colour". Mechanisms Iridescence is an optical phenomenon of surfaces in which hue changes with the angle of observation and the angle of illumination. It is often caused by multiple reflections from two or more semi-transparent surfaces in which phase shift and interference of the reflections modulate the incident light, amplifying or attenuating some frequencies more than others. The thickness of the layers of the material determines the interference pattern. Iridescence can for example be due to thin-film interference, the functional analogue of selective wavelength attenuation as seen with the Fabry–Pérot interferometer, and can be seen in oil films on water and soap bubbles. Iridescence is also found in plants, animals and many other items. The range of colours of natural iridescent objects can be narrow, for example shifting between two or three colours as the viewing angle changes. Iridescence can also be created by diffraction. This is found in items like CDs, DVDs, some types of prisms, or cloud iridescence. In the case of diffraction, the entire rainbow of colours will typically be observed as the viewing angle changes. In biology, this type of iridescence results from the formation of diffraction gratings on the surface, such as the long rows of cells in striated muscle, or the specialized abdominal scales of the peacock spiders Maratus robinsoni and M. chrysomelas. Some types of flower petals can also generate a diffraction grating, but the iridescence is not visible to humans and flower-visiting insects as the diffraction signal is masked by the colouration due to plant pigments. In biological (and biomimetic) uses, colours produced other than with pigments or dyes are called structural colouration. Microstructures, often multi-layered, are used to produce bright but sometimes non-iridescent colours: quite elaborate arrangements are needed to avoid reflecting different colours in different directions. Structural colouration has been understood in general terms since Robert Hooke's 1665 book Micrographia, where Hooke correctly noted that since the iridescence of a peacock's feather was lost when it was plunged into water but reappeared when it was returned to the air, pigments could not be responsible. It was later found that iridescence in the peacock is due to a complex photonic crystal. Pearlescence Pearlescence is an effect related to iridescence and has a similar cause. 
Structures within a surface cause light to be reflected back, but in the case of pearlescence some or most of the light is white, giving the object a pearl-like luster. Artificial pigments and paints showing an iridescent effect are often described as pearlescent, for example when used for car paints. Examples Life Invertebrates Eledone moschata has a bluish iridescence running along its body and tentacles. Vertebrates The feathers of birds such as kingfishers, birds-of-paradise, hummingbirds, parrots, starlings, grackles, ducks, and peacocks are iridescent. The lateral line on the neon tetra is also iridescent. A single iridescent species of gecko, Cnemaspis kolhapurensis, was identified in India in 2009. The tapetum lucidum, present in the eyes of many vertebrates, is also iridescent. Iridescence is known to be present among prehistoric non-avian and avian dinosaurs such as dromaeosaurids, enantiornithes, and lithornithids. Muscle tissues can display iridescence. Plants Many groups of plants have developed iridescence as an adaptation to use more light in dark environments such as the lower levels of tropical forests. The leaves of Southeast Asia's Begonia pavonina, or peacock begonia, appear iridescent azure to human observers due to each leaf's thinly layered photosynthetic structures called iridoplasts that absorb and bend light much like a film of oil over water. Iridescence based on multiple layers of cells is also found in the lycophyte Selaginella and several species of ferns. Non-biological Minerals Meteorological Human-made Nanocellulose is sometimes iridescent, as are thin films of petrol and some other hydrocarbons and alcohols when floating on water.
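The thin-film mechanism described in the Mechanisms section can be summarised with the textbook interference condition; the relations below are a general sketch, not material taken from this article. For a film of thickness t and refractive index n, with the refracted ray at angle \theta_t to the normal, the optical path difference between light reflected at the two surfaces is 2 n t \cos\theta_t, and reflected colours are reinforced when

    2 n t \cos\theta_t = m\lambda                              (no net phase shift between the two reflections)
    2 n t \cos\theta_t = \left(m + \tfrac{1}{2}\right)\lambda   (a \pi phase shift at one surface only, as for a soap film in air)

where \lambda is the wavelength and m = 0, 1, 2, \ldots. Because \theta_t varies with the viewing angle, different wavelengths meet the condition at different angles, which produces the colour shift characteristic of iridescence.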
Physical sciences
Optics
Physics
371830
https://en.wikipedia.org/wiki/Durum%20wheat
Durum wheat
Durum wheat (), also called pasta wheat or macaroni wheat (Triticum durum or Triticum turgidum subsp. durum), is a tetraploid species of wheat. It is the second most cultivated species of wheat after common wheat, although it represents only 5% to 8% of global wheat production. It was developed by artificial selection of the domesticated emmer wheat strains formerly grown in Central Europe and the Near East around 7000 BC, which developed a naked, free-threshing form. Like emmer, durum wheat is awned (with bristles). It is the predominant wheat that grows in the Middle East. Durum in Latin means 'hard', and the species is the hardest of all wheats. This refers to the resistance of the grain to milling, in particular of the starchy endosperm, causing dough made from its flour to be weak or "soft". This makes durum favorable for semolina and pasta and less practical for flour, which requires more work than with hexaploid wheats such as common bread wheats. Despite its high protein content, durum is not a strong wheat in the sense of giving strength to dough through the formation of a gluten network. Durum contains 27% extractable wet gluten, about 3% higher than common wheat (T. aestivum L.). Taxonomy Some authorities synonymize "durum" and Triticum turgidum. Some reserve "durum" for Triticum turgidum subsp. durum. Genetics Durum wheat is a tetraploid wheat, having four sets of chromosomes for a total of 28, unlike hard red winter and hard red spring wheats, which are hexaploid (six sets of chromosomes) for a total of 42. Durum wheat originated through intergeneric hybridization and polyploidization involving two diploid (having two sets of chromosomes) grass species: T. urartu (2n=2x=14, AA genome) and a B-genome diploid related to Aegilops speltoides (2n=2x=14, SS genome) and is thus an allotetraploid (having four sets of chromosomes, from unlike parents) species. Durum—and indeed all tetraploids—lack alleles. The only exception is found by Buerstmayr et al., 2012 on the . One of the predominant production areas of durum—Italy—has domesticated varieties with lower genetic diversity than wild types, but ssp. turanicum, ssp. polonicum and ssp. carthlicum have a level of diversity intermediate between those groups. There is evidence of an increase in the intensity of breeding after 1990. Uses Commercially produced dry pasta, or , is made almost exclusively from durum semolina. Most home-made fresh pastas also use durum wheat or a combination of soft and hard wheats. Husked but unground, or coarsely ground, it is used to produce the semolina in the couscous of North Africa and the Levant. It is also used for Levantine dishes such as tabbouleh, kashk, kibbeh, bitfun and the bulgur for pilafs. In North African cuisine and Levantine cuisine, it forms the basis of many soups, gruels, stuffings, puddings and pastries. When ground as fine as flour, it is used for making bread. In the Middle East, it is used for flat round breads, and in Europe and elsewhere, it can be used for pizza or torte. The use of wheat to produce pasta was described as early as the 10th century by Ibn Wahshīya of Cairo. The North Africans called the product itrīya, from which Italian sources derived the term tria (or aletría in the case of Spanish sources) during the 15th century. Production Durum wheat (Triticum turgidum ssp. durum) is the 10th most cultivated cereal worldwide, with a total production of about 38 million tons. 
Most of the durum grown today is amber durum, the grains of which are amber-colored due to the extra carotenoid pigments and are larger than those of other types of wheat. Durum has a yellow endosperm, which gives pasta its color. When durum is milled, the endosperm is ground into a granular product called semolina. Semolina made from durum is used for premium pastas and breads. Notably, semolina is also one of the few flours that is purposely oxidized for flavor and color. There is also a red durum, used mostly for livestock feed. The cultivation of durum generates greater yield than other wheats in areas of low precipitation. Good yields can be obtained by irrigation, but this is rarely done. In the first half of the 20th century, the crop was widely grown in Russia. Durum is one of the most important food crops in West Asia. Although the varieties of the wheat grown there are diverse, it is not extensively cultivated there, and thus must be imported. Western amber durum produced in Canada is used mostly as semolina/pasta, but some is also exported to Italy for bread production. In the Middle East and North Africa, local bread-making accounts for half the consumption of durum. Some flour is even imported. On the other hand, many countries in Europe produce durum in commercially significant quantities. In India, durum accounts for roughly 5% of the country's total wheat production, and is used to make products such as rava and sooji. Processing and protein content Durum wheat is subject to four processes: cleaning, tempering, milling and purifying. First, durum wheat is cleaned to remove foreign material and shrunken and broken kernels. Then it is tempered, raising its moisture content to toughen the seed coat for efficient separation of bran and endosperm. Durum milling is a complex procedure involving repetitive grinding and sieving. Proper purifying results in maximum semolina yield and the least amount of bran powder. To produce bread, durum wheat is ground into flour. The flour is mixed with water to produce dough. The quantities mixed vary, depending on the acidity of the mixture. To produce fluffy bread, the dough is mixed with yeast and lukewarm water, heavily kneaded to form a gas-retaining gluten network, and then fermented for hours, producing bubbles. The quality of the bread produced depends on the viscoelastic properties of gluten, the protein content and protein composition. Containing about 12% total protein in defatted flour compared to 11% in common wheat, durum wheat yields 27% extractable wet gluten compared to 24% in common wheat. Health concerns Because durum wheat contains gluten, it is unsuitable for people with gluten-related disorders such as celiac disease, non-celiac gluten sensitivity and wheat allergy.
Biology and health sciences
Grains
Plants
372305
https://en.wikipedia.org/wiki/Pilot%20whale
Pilot whale
Pilot whales are cetaceans belonging to the genus Globicephala. The two extant species are the long-finned pilot whale (G. melas) and the short-finned pilot whale (G. macrorhynchus). The two are not readily distinguishable at sea, and analysis of the skulls is the best way to distinguish between the species. Between them, the two species range nearly worldwide, with long-finned pilot whales living in colder waters and short-finned pilot whales living in tropical and subtropical waters. Pilot whales are among the largest of the oceanic dolphins, exceeded in size only by the orca. They and other large members of the dolphin family are also known as blackfish. Pilot whales feed primarily on squid, but will also hunt large demersal fish such as cod and turbot. They are highly social and may remain with their birth pod throughout their lifetime. Short-finned pilot whales are one of the few non-primate mammal species in which females go through menopause, and postreproductive females continue to contribute to their pod. Pilot whales are notorious for stranding themselves on beaches, but the reason behind this is not fully understood. Marine biologists have shed some light on the matter, suggesting that it is due to the mammals' inner ear (their principal navigational sonar) being damaged by noise pollution in the ocean, such as from cargo ships or military exercises. The conservation status of short-finned and long-finned pilot whales has been determined to be least concern. Naming The animals were named "pilot whales" because pods were believed to be "piloted" by a leader. They are also called "pothead whales" and "blackfish". The genus name is a combination of the Latin word globus ("round ball" or "globe") and the Greek word kephale ("head"). Taxonomy and evolution Pilot whales are classified into two species: the long-finned pilot whale (Globicephala melas) and the short-finned pilot whale (G. macrorhynchus). The short-finned pilot whale was described, from skeletal materials only, by John Edward Gray in 1846. He presumed from the skeleton that the whale had a large beak. The long-finned pilot whale was first classified by Thomas Stewart Traill in 1809 as Delphinus melas. Its scientific name was eventually changed to Globicephala melaena. In 1986, the specific name of the long-finned pilot whale was changed back to its original form, melas. Other species classifications have been proposed but only two have been accepted. There exist geographic forms of short-finned pilot whales off the east coast of Japan, which comprise genetically isolated stocks. Fossils of an extinct relative, Globicephala baereckeii, have been found in Pleistocene deposits in Florida. Another Globicephala dolphin was discovered in Pliocene strata in Tuscany, Italy, and was named G. etruriae. The Tappanaga, an endemic, larger form of short-finned pilot whale found in northern Japan with characteristics similar to the whales found along Vancouver Island and the northern coasts of the USA, may owe its origin to the extinction of long-finned pilot whales in the North Pacific in the 12th century: the Magondou, the smaller southern type, possibly filled the former niche of the long-finned pilots by adapting to and colonizing colder waters. Description Pilot whales are mostly dark grey, brown, or black, but have some light areas such as a grey saddle patch behind the dorsal fin.
Other light areas are an anchor-shaped patch under the chin, a faint blaze marking behind the eye, a large marking on the belly, and a genital patch. The dorsal fin is set forward on the back and sweeps backwards. A pilot whale is more robust than most dolphins and has a distinctive large, bulbous melon. Pilot whales' long, sickle-shaped flippers and tail stocks are flattened from side to side. Male long-finned pilot whales develop more circular melons than females, although this does not seem to be the case for short-finned pilot whales off the Pacific coast of Japan. Long-finned and short-finned pilot whales are so similar, it is difficult to tell the two species apart. They were traditionally differentiated by the length of the pectoral flippers relative to total body length and the number of teeth. The long-finned pilot whale was thought to have 9–12 teeth in each row and flippers one-fifth of total body length, compared to the short-finned pilot whale with its 7–9 teeth in each row and flippers one-sixth of total body length. Studies of whales in the Atlantic showed much overlap in these characteristics between the species, making them clines instead of distinctive features. Thus, biologists have since used skull differences to distinguish the two species. The size and weight depend on the species, as long-finned pilot whales are generally larger than short-finned pilot whales. Their lifespans are about 45 years in males and 60 years in females for both species. Both species exhibit sexual dimorphism. Adult long-finned pilot whales reach a body length of approximately 6.5 m, with males being 1 m longer than females. Their body mass reaches up to 1,300 kg in females and up to 2,300 kg in males. For short-finned pilot whales, adult females reach a body length of about 5.5 m, while males reach 7.2 m and may weigh up to 3,200 kg. Distribution and habitat Pilot whales can be found in oceans nearly worldwide, but data about current population sizes is deficient. The long-finned pilot whale prefers slightly cooler waters than the short-finned and is divided into two populations. The smaller group is found in a circumpolar band in the Southern Ocean from about 20 to 65°S. It may be sighted off the coasts of Chile, Argentina, South Africa, Australia, and New Zealand. An estimated more than 200,000 individuals were in this population in 2006. The second, much larger, population inhabits the North Atlantic Ocean, in a band from South Carolina in the United States across to the Azores and Morocco at its southern edge and from Newfoundland to Greenland, Iceland, and northern Norway at its northern limit. This population was estimated at 778,000 individuals in 1989. It is also present in the western half of the Mediterranean Sea. The short-finned pilot whale is less populous. It is found in temperate and tropical waters of the Indian, Atlantic and Pacific Oceans. Its population overlaps slightly with the long-finned pilot whale in the temperate waters of the North Atlantic and Southern Oceans. About 150,000 individuals are found in the eastern tropical Pacific Ocean. More than 30,000 animals are estimated in the western Pacific, off the coast of Japan. Pilot whales are generally nomadic, but some populations stay year-round in places such as Hawaii and parts of California. They prefer the waters of the shelf break and slope. 
Once commonly seen off Southern California, short-finned pilot whales disappeared from the area after a strong El Niño year in the early 1980s, according to the National Oceanic and Atmospheric Administration. In October 2014, crew and passengers on several boats spotted a pod of 50–200 off Dana Point, California. Behavior and life history Foraging and parasites Although pilot whales are not known to have many predators, possible threats come from humans and killer whales. Both species eat primarily squid. The whales make seasonal inshore and offshore movements in response to the dispersal of their prey. Fish that are consumed include Atlantic cod, Greenland turbot, Atlantic mackerel, Atlantic herring, hake, and spiny dogfish in the northwest Atlantic. In the Faroe Islands, whales mostly eat squid, but will also eat fish species such as greater argentine and blue whiting. However, Faroe whales do not seem to feed on cod, herring, or mackerel, even when they are abundant. Pilot whales generally take several breaths before diving for a few minutes. Feeding dives may last over ten minutes. They are capable of diving to depths of 600 meters, but most dives are to a depth of 30–60 m. Shallow dives tend to take place during the day, while deeper ones take place at night. When making deep dives, pilot whales often make fast sprints to catch fast-moving prey such as squid. Compared to sperm whales and beaked whales, foraging short-finned pilot whales are more energetic at the same depth. When they reach the end of their dives, pilot whales will sprint, possibly to catch prey, and then make a few buzzes. This is unusual, considering that deep-diving, breath-holding animals would be expected to swim slowly to conserve oxygen. The animal's high metabolism possibly allows it to sprint at deep depths, which would also give it shorter diving periods than some other marine mammals. This may also be the case for long-finned pilot whales. In 2024, a GPS-fitted long-finned pilot whale recorded a diving depth of over 1,100 meters. Pilot whales are often infested with whale lice, cestodes, and nematodes. They also can be hosts to various pathogenic bacteria and viruses, such as Streptococcus, Pseudomonas, Escherichia, Staphylococcus, and influenza. One sample of Newfoundland pilot whales found that the most common illness was an upper respiratory tract infection. Social structure Both species live in groups of 10–30, but some groups may number 100 or more. Data suggest the social structures of pilot whale pods are similar to those of "resident" killer whales. The pods are highly stable and the members have close matrilineal relationships. Pod members are of various age and sex classes, although adult females tend to outnumber adult males. They have been observed engaging in various kin-directed behaviors, such as providing food. Numerous pods will temporarily gather, perhaps to allow individuals from different pods to interact and mate, as well as to provide protection. Both species are loosely polygynous. Data suggest both males and females remain in their mother's pod for life; despite this, inbreeding within a pod does not seem to occur. During aggregations, males will temporarily leave their pods to mate with females from other pods. Male reproductive dominance or competition for mates does not seem to exist. After mating, a male pilot whale usually spends only a few months with a female, and an individual may sire several offspring in the same pod.
Males return to their own pods when the aggregations disband, and their presence may contribute to the survival of the other pod members. No evidence of "bachelor" groups has been found. Pilot whale pods off southern California have been observed in three different groups: traveling/hunting groups, feeding groups and loafing groups. In traveling/hunting groups, individuals position themselves in chorus lines stretching two miles, with only a few whales underneath. Sexual and age-class segregation apparently occurs in these groups. In feeding groups, individuals are very loosely associated, but may move in the same direction. Loafing groups consist of between 12 and 30 resting individuals. Mating and other behaviors may take place. Reproduction and lifecycle Pilot whales have one of the longest birth intervals of the cetaceans, calving once every three to five years. Most matings and calvings occur during the summer for long-finned pilot whales. For short-finned pilot whales of the Southern Hemisphere, births are at their highest in spring and autumn, while in the Northern Hemisphere, the time at which calving peaks can vary by population. For long-finned pilot whales, gestation lasts 12–16 months, and short-finned pilot whales have a 15-month gestation period. The calf nurses for 36–42 months, allowing for extensive mother-calf bonds. Young pilot whales may continue to take milk until they are 13–15 years old. Short-finned pilot whale females will go through menopause, but this is not as common in females of long-finned pilot whales. Postreproductive females possibly play important roles in the survival of the young. Postreproductive females will continue to lactate and nurse young. Since they can no longer bear young of their own, these females invest in the current young, allowing them to feed even though they are not their own. Short-finned pilot whales grow more slowly than long-finned pilot whales. For the short-finned pilot whale, females become sexually mature at 9 years old and males at about 13–16 years. For the long-finned pilot whale, females reach maturity at around eight years and males at around 12 years. Vocalizations Pilot whales emit echolocation clicks for foraging and whistles and burst pulses as social signals (e.g. to keep contact with members of their pod). With active behavior, vocalizations are more complex, while less-active behavior is accompanied by simple vocalizations. Differences have been found in the calls of the two species. Compared with short-finned pilot whales, long-finned pilot whales have relatively low-frequency calls with narrow frequency ranges. In one study of North Atlantic long-finned pilot whales, certain vocalizations were heard to accompany certain behaviors. When resting or "milling", simple whistles are emitted. Surfacing behavior is accompanied by more complex whistles and pulsed sounds. The number of whistles made increases with the number of subgroups and the distance over which the whales are spread. A study of short-finned pilot whales off the southwest coast of Tenerife in the Canary Islands found that the members of a pod maintained contact with each other through call repertoires unique to their pod. A later study found that, when foraging at around 800 m deep, short-finned pilot whales make tonal calls. The number and length of the calls seem to decrease with depth despite the whales being farther away from conspecifics at the surface.
As such, the surrounding water pressure affects the energy of the calls, but it does not appear to affect the frequency levels. When in stressful situations, pilot whales produce "shrills" or "plaintive cries", which are variations of their whistles. To elude predators, long-finned pilot whales off the southern coast of Australia have been observed to mimic the calls of orcas while scavenging for food. This behaviour is thought to deter orca pods from approaching the pilot whales. Antagonistic interactions with other species Pilot whales have occasionally been observed mobbing or chasing other species of cetaceans. In several parts of the world, including off Iceland, long-finned pilot whales have been frequently documented chasing killer whales. The reasons for these chases are unknown, but it has been proposed that they might be due to either competition for prey or an anti-predation strategy. In 2021, an adult female killer whale with a newborn pilot whale travelling alongside her was observed off Western Iceland, leading scientists to question whether the relationship between these species might be far more complex than previously suggested. It is not known whether the newborn was adopted or abducted, but this same female killer whale was seen a year later interacting with a larger group of pilot whales. Based on playback experiments using the familiar sounds of fish-eating orcas and the unfamiliar vocalizations of mammal-hunting killer whales, one study suggests that long-finned pilot whales can distinguish between familiar and unfamiliar types of orca, noting behavioral differences such as the cessation of feeding when the sounds of mammal-hunting orcas were played. The study suggests that antagonistic interactions against fish-eating killer whales could be either an anti-predatory behavior or an attempt to maintain territory, while actions taken in response to mammal-hunting killer whales could be a response to a more dangerous threat. Stranding Of the cetaceans, pilot whales are among the most common stranders. Because of their strong social bonds, whole groups of pilot whales will strand. Single stranders have been recorded, and these are usually diseased animals. Group strandings tend to be of mostly healthy individuals. Several hypotheses have been proposed to explain group strandings. The whales, which may use magnetic fields for navigation, have been suggested to become confused by geomagnetic anomalies, or they may be following a sick member of their group that has become stranded. The pod may also be following a member of high importance that has become stranded, and a secondary social response makes them keep returning. Researchers from New Zealand have successfully used secondary social responses to keep a stranding pod of long-finned pilot whales from returning to the beach. In addition, the young members of the pod were taken offshore to buoys, and their distress calls lured the older whales back out to sea. In September 2022, nearly 200 pilot whales died after becoming stranded on Ocean Beach, part of Tasmania's west coast. Authorities said only about 35 of the 230 stranded whales survived. Human interaction The IUCN lists long-finned pilot whales as "least concern" in the Red List of Threatened Species. Long-finned pilot whales in the North and Baltic Seas are listed in Appendix II of the Convention on the Conservation of Migratory Species of Wild Animals (CMS). Those from the northwest and northeast Atlantic may also need to be included in Appendix II of CMS.
The short-finned pilot whale is listed on Appendix II of CITES. Hunting The long-finned pilot whale has traditionally been hunted by "driving", which involves many hunters and boats gathering in a semicircle behind a pod of whales close to shore, and slowly driving them towards a bay, where they become stranded and are then slaughtered. This practice was common in both the 19th and 20th centuries. The whales were hunted for bone, meat, oil, and fertilizer. In the Faroe Islands, pilot whale hunting started at least in the 16th century, and continued into modern times, with thousands being killed during the 1970s and 1980s. In other parts of the North Atlantic, such as Norway, West Greenland, Ireland and Cape Cod, pilot whales have also been hunted, but to a lesser extent. One fishery at Cape Cod harvested 2,000–3,000 whales per year during the late 19th and early 20th centuries. Newfoundland's long-finned pilot whale fishery was at its highest in 1956, but declined shortly after and is now defunct. In the Southern Hemisphere, exploitation of long-finned pilot whales has been sporadic and low. Currently, long-finned pilot whales are only hunted at the Faroe Islands and Greenland. According to the IUCN the harvesting of this species for food in the Faroe Islands and Greenland has not resulted in any detectable declines in abundance. The short-finned pilot whale has also been hunted for many centuries, particularly by Japanese whalers. Between 1948 and 1980, hundreds of whales were exploited at Hokkaido and Sanriku in the north and Taiji, Izu, and Okinawa in the south. These fisheries were at their highest in the late 1940s and early 1950s; 2,326 short-finned pilot whales were harvested in the mid- to late 1980s. This had decreased to about 400 per year by the 1990s. Pilot whales have also fallen victim to bycatches. In one year, around 30 short-finned pilot whales were caught by the squid round-haul fishery in southern California. Likewise, California's drift gill net fishery took around 20 whales a year in the mid-1990s. In 1988, 141 whales caught on the east coast of the U.S. were taken by the foreign Atlantic mackerel fishery, which forced it to be shut down. Pollution As with other marine mammals, pilot whales are susceptible to certain pollutants. Off the Faroes, France, the UK, and the eastern US, pilot whales were found to have been contaminated with high amounts of DDT and PCB. Pollutants such as DDT and mercury can be passed from mothers to their babies during gestation and lactation. The Faroes whales have also been contaminated with cadmium and mercury. However, pilot whales from Newfoundland and Tasmania were found to have had very low levels of DDT. Short-finned pilot whales off the west coast of the US have had high amounts of DDT and PCB in contrast to the low amounts found in whales from Japan and the Antilles. Cuisine Pilot whale meat is available for consumption in very few areas of Japan, mainly along the central Pacific coast, and also in other areas of the world, such as the Faroe Islands. The meat is high in protein (higher than beef) and low in fat. Because a whale's fat is contained in the layer of blubber beneath the skin, and the muscle is high in myoglobin, the meat is a dark shade of red. In Japan, where pilot whale meat can be found in certain restaurants and izakayas, the meat is sometimes served raw, as sashimi, but just as often pilot whale steaks are marinated, cut into small chunks, and grilled. 
When grilled, the meat is slightly flaky and quite flavorful, somewhat gamey, though similar to a quality cut of beef, with distinct yet subtle undertones recalling its marine origin. In both Japan and the Faroe Islands, the meat is contaminated with mercury and cadmium, causing a health risk for those who frequently eat it, especially children and pregnant women. In November 2008, an article in New Scientist reported that research done on the Faroe Islands resulted in two chief medical officers recommending against the consumption of pilot whale meat, considering it to be too toxic. In 2008, the local authorities recommended that pilot whale meat should no longer be eaten due to the contamination. This has resulted in reduced consumption, according to a senior Faroese health official. Captivity Pilot whales, mostly short-finned pilot whales, have been kept in captivity in various marine parks, arguably starting in the late 1940s. Since 1973, some long-finned pilot whales from New England waters were taken and temporarily kept in captivity. Short-finned pilot whales off southern California, Hawaii and Japan have been kept in aquariums and oceanariums. Several pilot whales from southern California and Hawaii were taken into captivity during the 1960s and early 1970s, two of which were placed at SeaWorld San Diego. During the 1970s and early 1980s, six pilot whales were captured alive by drive hunts and taken for public display. Pilot whales have historically had low survival rates in captivity, with the average annual survival being 0.51 years during the mid-1960s to early 1970s. There have been a few exceptions to the rule. Bubbles, a female short-finned pilot whale who was displayed at Marineland of the Pacific and eventually at Sea World California, lived to be somewhere in her 50s when she died on 12 June 2016. In 1968, a pilot whale was captured, given the name Morgan, and trained by the U.S. Navy's Deep Ops to retrieve deeper-attached objects from the ocean floor. He dove to a record depth of 1,654 feet and was used for training until 1971. Films There are two documentaries entirely dedicated to pilot whales. The full-length Cheetahs of the deep (49’, 2014, directed by Rafa Herrero Massieu) depicts the way of life, social interactions, hunting, play and breeding of a group of non-migrating short-finned pilot whales living between the islands of Tenerife and La Gomera in the Canary archipelago. A notable feature of the film is that all of the marine mammals were filmed while freediving. The short film My Pilot, Whale (28’, 2014, directed by Alexander and Nicole Gratovsky) demonstrates the possibility of interaction between humans and free-living pilot whales, and raises a number of philosophical questions related to cetaceans: how they relate to the world, what we have in common, and what we humans can learn from them. The film has received a number of awards at international film festivals.
Biology and health sciences
Toothed whale
Animals
372399
https://en.wikipedia.org/wiki/Opposite%20category
Opposite category
In category theory, a branch of mathematics, the opposite category or dual category Cop of a given category C is formed by reversing the morphisms, i.e. interchanging the source and target of each morphism. Doing the reversal twice yields the original category, so the opposite of an opposite category is the original category itself. In symbols, (Cop)op = C. Examples An example comes from reversing the direction of inequalities in a partial order. So if X is a set and ≤ a partial order relation, we can define a new partial order relation ≤op by x ≤op y if and only if y ≤ x. The new order is commonly called the dual order of ≤, and is mostly denoted by ≥. Therefore, duality plays an important role in order theory and every purely order theoretic concept has a dual. For example, there are opposite pairs child/parent, descendant/ancestor, infimum/supremum, down-set/up-set, ideal/filter etc. This order theoretic duality is in turn a special case of the construction of opposite categories, as every ordered set can be understood as a category. Given a semigroup (S, ·), one usually defines the opposite semigroup as (S, ·)op = (S, *) where x*y ≔ y·x for all x,y in S. So also for semigroups there is a strong duality principle. Clearly, the same construction works for groups, as well, and is known in ring theory, too, where it is applied to the multiplicative semigroup of the ring to give the opposite ring. Again this process can be described by completing a semigroup to a monoid, taking the corresponding opposite category, and then possibly removing the unit from that monoid. The category of Boolean algebras and Boolean homomorphisms is equivalent to the opposite of the category of Stone spaces and continuous functions. The category of affine schemes is equivalent to the opposite of the category of commutative rings. The Pontryagin duality restricts to an equivalence between the category of compact Hausdorff abelian topological groups and the opposite of the category of (discrete) abelian groups. By the Gelfand–Naimark theorem, the category of localizable measurable spaces (with measurable maps) is equivalent to the category of commutative von Neumann algebras (with normal unital homomorphisms of *-algebras). Properties Opposite preserves products: (C × D)op = Cop × Dop (see product category). Opposite preserves functors: Funct(C, D)op ≅ Funct(Cop, Dop) (see functor category, opposite functor). Opposite preserves slices: (F ↓ G)op ≅ (Gop ↓ Fop) (see comma category).
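The dual order and the opposite semigroup described above can be illustrated with a short Python sketch (the helper names below are purely illustrative and not taken from any library): reversing a relation or a binary operation once gives the dual, and reversing twice gives back the original, mirroring (Cop)op = C.

    # Illustrative sketch: dualizing a partial order and a semigroup operation.

    def dual_order(leq):
        """Given a partial order leq(x, y), return the dual order:
        x <=op y if and only if y <= x."""
        return lambda x, y: leq(y, x)

    def opposite_operation(mul):
        """Given a semigroup operation mul(x, y), return the opposite
        operation defined by x * y := y . x."""
        return lambda x, y: mul(y, x)

    # The dual of the usual order on integers is the reversed order.
    leq = lambda x, y: x <= y
    geq = dual_order(leq)
    assert geq(3, 2) and not geq(2, 3)

    # Dualizing twice gives back the original order.
    assert dual_order(dual_order(leq))(2, 5) == leq(2, 5)

    # The opposite of string concatenation reverses the order of the factors.
    concat = lambda a, b: a + b
    opposite_concat = opposite_operation(concat)
    assert opposite_concat("ab", "cd") == "cdab"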
Mathematics
Category theory
null
372608
https://en.wikipedia.org/wiki/Milankovitch%20cycles
Milankovitch cycles
Milankovitch cycles describe the collective effects of changes in the Earth's movements on its climate over thousands of years. The term was coined and named after the Serbian geophysicist and astronomer Milutin Milanković. In the 1920s, he provided a more definitive and quantitative analysis than James Croll's earlier hypothesis that variations in eccentricity, axial tilt, and precession combined to result in cyclical variations in the intra-annual and latitudinal distribution of solar radiation at the Earth's surface, and that this orbital forcing strongly influenced the Earth's climatic patterns. Earth movements The Earth's rotation around its axis, and revolution around the Sun, evolve over time due to gravitational interactions with other bodies in the Solar System. The variations are complex, but a few cycles are dominant. The Earth's orbit varies between nearly circular and mildly elliptical (its eccentricity varies). When the orbit is more elongated, there is more variation in the distance between the Earth and the Sun, and in the amount of solar radiation, at different times in the year. In addition, the rotational tilt of the Earth (its obliquity) changes slightly. A greater tilt makes the seasons more extreme. Finally, the direction in the fixed stars pointed to by the Earth's axis changes (axial precession), while the Earth's elliptical orbit around the Sun rotates (apsidal precession). The combined effect of precession with eccentricity is that proximity to the Sun occurs during different astronomical seasons. Milankovitch studied changes in these movements of the Earth, which alter the amount and location of solar radiation reaching the Earth. This is known as solar forcing (an example of radiative forcing). Milankovitch emphasized the changes experienced at 65° north due to the great amount of land at that latitude. Land masses change surface temperature more quickly than oceans, mainly because convective mixing between shallow and deeper waters keeps the ocean surface relatively cooler. Similarly, the very large thermal inertia of the global ocean delays changes to Earth's average surface temperature when gradually driven by other forcing factors. Orbital eccentricity The Earth's orbit approximates an ellipse. Eccentricity measures the departure of this ellipse from circularity. The shape of the Earth's orbit varies between nearly circular (theoretically the eccentricity can hit zero) and mildly elliptical (highest eccentricity was 0.0679 in the last 250 million years). Its geometric or logarithmic mean is 0.0019. The major component of these variations occurs with a period of 405,000 years (eccentricity variation of ±0.012). Other components have 95,000-year and 124,000-year cycles (with a beat period of 400,000 years). They loosely combine into a 100,000-year cycle (variation of −0.03 to +0.02). The present eccentricity is 0.0167 and decreasing. Eccentricity varies primarily due to the gravitational pull of Jupiter and Saturn. The semi-major axis of the orbital ellipse, however, remains unchanged; according to perturbation theory, which computes the evolution of the orbit, the semi-major axis is invariant. The orbital period (the length of a sidereal year) is also invariant, because according to Kepler's third law, it is determined by the semi-major axis. Longer-term variations are caused by interactions involving the perihelia and nodes of the planets Mercury, Venus, Earth, Mars, and Jupiter. Effect on temperature The semi-major axis is a constant. 
Therefore, when Earth's orbit becomes more eccentric, the semi-minor axis shortens. This increases the magnitude of seasonal changes. The relative increase in solar irradiation at closest approach to the Sun (perihelion) compared to the irradiation at the furthest distance (aphelion) is slightly larger than four times the eccentricity. For Earth's current orbital eccentricity, incoming solar radiation varies by about 6.8%, while the distance from the Sun currently varies by only 3.4% (). Perihelion presently occurs around 3 January, while aphelion is around 4 July. When the orbit is at its most eccentric, the amount of solar radiation at perihelion will be about 23% more than at aphelion. However, the Earth's eccentricity is so small (at least at present) that the variation in solar irradiation is a minor factor in seasonal climate variation, compared to axial tilt and even compared to the relative ease of heating the larger land masses of the northern hemisphere. Effect on lengths of seasons The seasons are quadrants of the Earth's orbit, marked by the two solstices and the two equinoxes. Kepler's second law states that a body in orbit traces equal areas over equal times; its orbital velocity is highest around perihelion and lowest around aphelion. The Earth spends less time near perihelion and more time near aphelion. This means that the lengths of the seasons vary. Perihelion currently occurs around 3 January, so the Earth's greater velocity shortens winter and autumn in the northern hemisphere, and summer and spring in the southern hemisphere. Summer in the northern hemisphere is 4.66 days longer than winter, and spring is 2.9 days longer than autumn. In the southern hemisphere this is the reverse, winter is 4.66 days longer than summer, and autumn is 2.9 days longer than spring. Greater eccentricity increases the variation in the Earth's orbital velocity. Currently, however, the Earth's orbit is becoming less eccentric (more nearly circular). This will make the seasons in the immediate future more similar in length. Axial tilt (obliquity) The angle of the Earth's axial tilt with respect to the orbital plane (the obliquity of the ecliptic) varies between 22.1° and 24.5°, over a cycle of about 41,000 years. The current tilt is 23.44°, roughly halfway between its extreme values. The tilt last reached its maximum in 8,700 BCE, which correlates with the beginning of the Holocene, the current geological epoch. It is now in the decreasing phase of its cycle, and will reach its minimum around the year 11,800 CE. Increased tilt increases the amplitude of the seasonal cycle in insolation, providing more solar radiation in each hemisphere's summer and less in winter. However, these effects are not uniform everywhere on the Earth's surface. Increased tilt increases the total annual solar radiation at higher latitudes, and decreases the total closer to the equator. The current trend of decreasing tilt, by itself, will promote milder seasons (warmer winters and colder summers), as well as an overall cooling trend. Because most of the planet's snow and ice lies at high latitude, decreasing tilt may encourage the termination of an interglacial period (and lead to an overall cooler climate) and the onset of a glacial period for two reasons: 1) there is less overall summer insolation, and 2) there is less insolation at higher latitudes (which melts less of the previous winter's snow and ice). 
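The irradiation figures quoted above can be checked with a short back-of-the-envelope calculation. The sketch below assumes only the inverse-square law and the present-day eccentricity of 0.0167 quoted earlier; it reproduces the roughly 3.4% variation in distance and the roughly 6.8% variation in incoming solar radiation, and confirms that the latter is slightly larger than four times the eccentricity.

    # Rough check of the eccentricity and irradiation figures quoted in the text.
    # Solar irradiance falls off with the inverse square of distance, and the
    # aphelion-to-perihelion distance ratio of an ellipse is (1 + e) / (1 - e).

    def irradiance_ratio(e):
        """Perihelion-to-aphelion irradiance ratio for orbital eccentricity e."""
        return ((1 + e) / (1 - e)) ** 2

    e_now = 0.0167  # present-day eccentricity, as quoted in the text

    distance_variation = (1 + e_now) / (1 - e_now) - 1   # about 0.034, i.e. ~3.4%
    irradiance_variation = irradiance_ratio(e_now) - 1   # about 0.069, i.e. ~6.8-6.9%
    four_e = 4 * e_now                                   # "slightly larger than 4e"

    print(f"distance varies by about {distance_variation:.1%}")
    print(f"irradiance varies by about {irradiance_variation:.1%} (4e = {four_e:.1%})")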
Axial precession Axial precession is the trend in the direction of the Earth's axis of rotation relative to the fixed stars, with a period of about 25,700 years. Also known as the precession of the equinoxes, this motion means that eventually Polaris will no longer be the north pole star. This precession is caused by the tidal forces exerted by the Sun and the Moon on the rotating Earth; both contribute roughly equally to this effect. Currently, perihelion occurs during the southern hemisphere's summer. This means that solar radiation due to both the axial tilt inclining the southern hemisphere toward the Sun, and the Earth's proximity to the Sun, will reach maximum during the southern summer and reach minimum during the southern winter. These effects on heating are thus additive, which means that seasonal variation in irradiation of the southern hemisphere is more extreme. In the northern hemisphere, these two factors reach maximum at opposite times of the year: the north is tilted toward the Sun when the Earth is furthest from the Sun. The two effects work in opposite directions, resulting in less extreme variations in insolation. In about 10,000 years, the north pole will be tilted toward the Sun when the Earth is at perihelion. Axial tilt and orbital eccentricity will both contribute their maximum increase in solar radiation during the northern hemisphere's summer. Axial precession will promote more extreme variation in irradiation of the northern hemisphere and less extreme variation in the south. When the Earth's axis is aligned such that aphelion and perihelion occur near the equinoxes, axial tilt will not be aligned with or against eccentricity. Apsidal precession The orbital ellipse itself precesses in space, in an irregular fashion, completing a full cycle in about 112,000 years relative to the fixed stars. Apsidal precession occurs in the plane of the ecliptic and alters the orientation of the Earth's orbit relative to the ecliptic. This happens primarily as a result of interactions with Jupiter and Saturn. Smaller contributions are also made by the sun's oblateness and by the effects of general relativity that are well known for Mercury. Apsidal precession combines with the 25,700-year cycle of axial precession (see above) to vary the position in the year that the Earth reaches perihelion. Apsidal precession shortens this period to about 21,000 years, at present. According to a relatively old source (1965), the average value over the last 300,000 years was 23,000 years, varying between 20,800 and 29,000 years. As the orientation of Earth's orbit changes, each season will gradually start earlier in the year. Precession means the Earth's nonuniform motion (see above) will affect different seasons. Winter, for instance, will be in a different section of the orbit. When the Earth's apsides (extremes of distance from the sun) are aligned with the equinoxes, the length of spring and summer combined will equal that of autumn and winter. When they are aligned with the solstices, the difference in the length of these seasons will be greatest. Orbital inclination The inclination of Earth's orbit drifts up and down relative to its present orbit. This three-dimensional movement is known as "precession of the ecliptic" or "planetary precession". Earth's current inclination relative to the invariable plane (the plane that represents the angular momentum of the Solar System—approximately the orbital plane of Jupiter) is 1.57°. Milankovitch did not study planetary precession. 
It was discovered more recently and measured, relative to Earth's orbit, to have a period of about 70,000 years. When measured independently of Earth's orbit, but relative to the invariable plane, however, precession has a period of about 100,000 years. This period is very similar to the 100,000-year eccentricity period. Both periods closely match the 100,000-year pattern of glacial events. Theory constraints Materials taken from the Earth have been studied to infer the cycles of past climate. Antarctic ice cores contain trapped air bubbles whose ratios of different oxygen isotopes are a reliable proxy for global temperatures around the time the ice was formed. Study of this data concluded that the climatic response documented in the ice cores was driven by northern hemisphere insolation as proposed by the Milankovitch hypothesis. Similar astronomical hypotheses had been advanced in the 19th century by Joseph Adhemar, James Croll, and others. Analysis of deep-ocean cores and of lake depths, and a seminal paper by Hays, Imbrie, and Shackleton provide additional validation through physical evidence. Climate records contained in a core of rock drilled in Arizona show a pattern synchronized with Earth's eccentricity, and cores drilled in New England match it, going back 215 million years. 100,000-year issue Of all the orbital cycles, Milankovitch believed that obliquity had the greatest effect on climate, and that it did so by varying the summer insolation in northern high latitudes. Therefore, he deduced a 41,000-year period for ice ages. However, subsequent research has shown that ice age cycles of the Quaternary glaciation over the last million years have been at a period of 100,000 years, which matches the eccentricity cycle. Various explanations for this discrepancy have been proposed, including frequency modulation or various feedbacks (from carbon dioxide, or ice sheet dynamics). Some models can reproduce the 100,000-year cycles as a result of non-linear interactions between small changes in the Earth's orbit and internal oscillations of the climate system. In particular, the mechanism of the stochastic resonance was originally proposed in order to describe this interaction. Jung-Eun Lee of Brown University proposes that precession changes the amount of energy that Earth absorbs, because the southern hemisphere's greater ability to grow sea ice reflects more energy away from Earth. Moreover, Lee says, "Precession only matters when eccentricity is large. That's why we see a stronger 100,000-year pace than a 21,000-year pace." Some others have argued that the length of the climate record is insufficient to establish a statistically significant relationship between climate and eccentricity variations. Transition changes From 1–3 million years ago, climate cycles matched the 41,000-year cycle in obliquity. After one million years ago, the Mid-Pleistocene Transition (MPT) occurred with a switch to the 100,000-year cycle matching eccentricity. The transition problem refers to the need to explain what changed one million years ago. The MPT can now be reproduced in numerical simulations that include a decreasing trend in carbon dioxide and glacially induced removal of regolith. Interpretation of unsplit peak variances Even the well-dated climate records of the last million years do not exactly match the shape of the eccentricity curve. Eccentricity has component cycles of 95,000 and 125,000 years. 
Some researchers, however, say the records do not show these peaks, but only indicate a single cycle of 100,000 years. The split between the two eccentricity components, however, is observed at least once in a drill core from the 500-million-year-old Scandinavian Alum Shale. Unsynced stage five observation Deep-sea core samples show that the interglacial interval known as marine isotope stage 5 began 130,000 years ago. This is 10,000 years before the solar forcing that the Milankovitch hypothesis predicts. (This is also known as the causality problem because the effect precedes the putative cause.) Present and future conditions Since orbital variations are predictable, any model that relates orbital variations to climate can be run forward to predict future climate, with two caveats: the mechanism by which orbital forcing influences climate is not definitive; and non-orbital effects can be important (for example, the human impact on the environment principally increases greenhouse gases resulting in a warmer climate). An often-cited 1980 orbital model by Imbrie predicted "the long-term cooling trend that began some 6,000 years ago will continue for the next 23,000 years." Another work suggests that solar insolation at 65° N will reach a peak of 460 W·m−2 in around 6,500 years, before decreasing back to current levels (450 W·m−2) in around 16,000 years. Earth's orbit will become less eccentric for about the next 100,000 years, so changes in this insolation will be dominated by changes in obliquity, and should not decline enough to permit a new glacial period in the next 50,000 years. Other celestial bodies Mars Since 1972, speculation has sought a relationship between the formation of Mars' alternating bright and dark layers in the polar layered deposits and the planet's orbital climate forcing. In 2002, Laskar, Levrard, and Mustard showed that ice-layer radiance, as a function of depth, correlates with the insolation variations in summer at the Martian north pole, similar to palaeoclimate variations on Earth. They also showed Mars' precession had a period of about 51 kyr, obliquity had a period of about 120 kyr, and eccentricity had a period ranging between 95 and 99 kyr. In 2003, Head, Mustard, Kreslavsky, Milliken, and Marchant proposed Mars was in an interglacial period for the past 400 kyr, and in a glacial period between 400 and 2100 kyr, due to Mars' obliquity exceeding 30°. At this extreme obliquity, insolation is dominated by the regular periodicity of Mars' obliquity variation. Fourier analysis of Mars' orbital elements shows an obliquity period of 128 kyr and a precession index period of 73 kyr. Mars has no moon large enough to stabilize its obliquity, which has varied from 10 to 70 degrees. This would explain recent observations of its surface compared to evidence of different conditions in its past, such as the extent of its polar caps. Outer Solar System Saturn's moon Titan has a cycle of approximately 60,000 years that could change the location of the methane lakes. Neptune's moon Triton has a variation similar to Titan's, which could cause its solid nitrogen deposits to migrate over long time scales. Exoplanets Scientists using computer models to study extreme axial tilts have concluded that high obliquity could cause extreme climate variations, and while that would probably not render a planet uninhabitable, it could pose difficulty for land-based life in affected areas. Most such planets would nevertheless allow development of both simple and more complex lifeforms.
Although the obliquity they studied is more extreme than Earth ever experiences, there are scenarios 1.5 to 4.5 billion years from now, as the Moon's stabilizing effect lessens, where obliquity could leave its current range and the poles could eventually point almost directly at the Sun.
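The roughly 21,000-year figure quoted in the apsidal precession section follows from the two periods given there: the axial and apsidal motions run in opposite senses, so their rates (cycles per year) add. A minimal sketch of that arithmetic, using the approximate periods from the text:

    # Combining axial precession (~25,700 yr) with apsidal precession (~112,000 yr).
    # The two motions run in opposite senses, so their frequencies (1/period) add,
    # giving the shorter period with which perihelion drifts through the calendar.

    axial_period = 25_700     # years, precession of the equinoxes
    apsidal_period = 112_000  # years, precession of the orbital ellipse itself

    combined_period = 1 / (1 / axial_period + 1 / apsidal_period)
    print(f"perihelion returns to the same date roughly every {combined_period:,.0f} years")
    # Prints about 20,900 years, consistent with the ~21,000-year figure in the text.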
Physical sciences
Earth science basics: General
Earth science
17703619
https://en.wikipedia.org/wiki/Human%E2%80%93wildlife%20conflict
Human–wildlife conflict
Human–wildlife conflict (HWC) refers to the negative interactions between humans and wild animals, with undesirable consequences both for people and their resources on the one hand, and wildlife and their habitats on the other. HWC, caused by competition for natural resources between humans and wildlife, influences human food security and the well-being of both humans and other animals. In many regions, the number of these conflicts has increased in recent decades as a result of human population growth and the transformation of land use. HWC is a serious global threat to sustainable development, food security and conservation in urban and rural landscapes alike. In general, the consequences of HWC include: crop destruction, reduced agricultural productivity, competition for grazing lands and water supply, livestock predation, injury and death to humans, damage to infrastructure, and increased risk of disease transmission among wildlife and livestock. As of 2020, conflict mitigation strategies utilized lethal control, translocation, population size regulation and endangered species preservation. Recent management uses an interdisciplinary set of approaches to solving conflicts. These include applying scientific research, sociological studies and the arts to reducing conflicts. As human-wildlife conflict inflicts direct and indirect consequences on people and animals, its mitigation is an important priority for the management of biodiversity and protected areas. Resolving human-wildlife conflicts and fostering coexistence requires well-informed, holistic and collaborative processes that take into account underlying social, cultural and economic contexts. In 2023, the IUCN SSC Human-Wildlife Conflict & Coexistence Specialist Group published the IUCN SSC Guidelines on human-wildlife conflict and coexistence, which aim to provide foundations and principles for good practice, with clear, practical guidance on how best to tackle conflicts and enable coexistence with wildlife. As of 2013, many countries have started to explicitly include human-wildlife conflict in national policies and strategies for wildlife management, development and poverty alleviation. At the national level, collaboration between forestry, wildlife, agriculture, livestock and other relevant sectors is key. Meaning Human–wildlife conflict was defined by the World Wide Fund for Nature (WWF) in 2004 as "any interaction between humans and wildlife that results in negative impacts of human social, economic or cultural life, on the conservation of wildlife populations, or on the environment". The Creating Co-existence workshop at the 5th World Parks Congress (8–17 September 2003, Montreal) defined human-wildlife conflict in the context of human goals and animal needs as follows: "Human-wildlife conflict occurs when the needs and behavior of wildlife impact negatively on the goals of humans or when the goals of humans negatively impact the needs of wildlife." A 2007 review by the United States Geological Survey defined human-wildlife conflict in two contexts: firstly, actions by wildlife that conflict with human goals (i.e. life, livelihood and lifestyle), and secondly, human activities that threaten the safety and survival of wildlife. However, in both cases outcomes are decided by human responses to the interactions. The Government of Yukon defined human-wildlife conflict simply, but through the lens of damage to property, i.e.
"any interaction between wildlife and humans which causes harm, whether it’s to the human, the wild animal, or property." Here, property includes buildings, equipment and camps, livestock and pets, but does not include crops, fields or fences. In 2020, the IUCN SSC Human-Wildlife Conflict Task Force described human-wildlife conflict as "struggles that emerge when the presence or behaviour of wildlife poses actual or perceived, direct and recurring threat to human interests or needs, leading to disagreements between groups of people and negative impacts on people and/or wildlife". History Human-wildlife interactions have occurred throughout man's prehistory and recorded history. An early form of human-wildlife conflict is the depredation of the ancestors of prehistoric man by a number of predators of the Miocene such as saber-toothed cats, leopards, and spotted hyenas. Fossil remains of early hominids show evidence of depredation; the Taung Child, the fossilized skull of a young Australopithecus africanus, is thought to have been killed by an eagle from the distinct marks on its skull and the fossil having been found among egg shells and remains of small animals. A Plio-Pleistocene horned crocodile, Crocodylus anthropophagus, whose fossil remains have been recorded from Olduvai Gorge, was the largest predator encountered by prehistoric man, as indicated by hominid specimens preserving crocodile bite marks from these sites. Another 12,000 year old example is the buffalo jump cliff sites found in the western United States. These sites occurred as a result of humans exploiting an animal's herding behavior and predator-flight instincts. The extinction of the passenger pigeon is another example. In 2023 alone, over 1.8 million distinct human-wildlife conflicts occurred as animal involved auto accidents on roadways, seen as roadkill. Understanding bird strike frequency is important to Aircraft safety engineers. Reducing the frequent animal collisions (strikes) from automobiles on roadways are shared concerns of biologists, civil engineers, and automobile safety designers. As of 2020, with specific reference to forests, a high density of large ungulates such as deer, can cause severe damage to the vegetation and can threaten regeneration by trampling or browsing small trees, rubbing themselves on trees or stripping tree bark. This behavior can have important economic implications and can lead to polarization between forest and wildlife managers. Examples Africa As a tropical continent with substantial anthropogenic development, Africa is a hotspot for biodiversity and therefore, for human-wildlife conflict. Two of the primary examples of conflict in Africa are human-predator (lions, leopards, cheetahs, etc.) and human-elephant conflict. Depredation of livestock by African predators is well documented in Kenya, Namibia, Botswana, and more. African elephants frequently clash with humans, as their long-distance migrations often intersect with farms. The resulting damage to crops, infrastructure, and at times, people, can lead to the retaliatory killing of elephants by locals. In 2017, more than 8 000 human-wildlife conflict incidents were reported in Namibia alone (World Bank, 2019). Hyenas killed more than 600 cattle in the Zambezi Region of Namibia between 2011 and 2016 and there were more than 4 000 incidents of crop damage, mostly caused by elephants moving through the region (NACSO, 2017a). 
Asia With a rapidly increasing human population and high biodiversity, interactions between people and wild animals are becoming more and more prevalent. As with human-predator conflict in Africa, encounters between tigers, people, and their livestock are a prominent issue on the Asian continent. Attacks on humans and livestock have exacerbated major threats to tiger conservation such as mortality, removal of individuals from the wild, and negative perceptions of the animals among locals. Even non-predator conflicts are common, with crop-raiding by elephants and macaques persisting in rural and urban environments, respectively. Poor disposal of hotel waste in tourism-dominated towns has altered the behaviour of carnivores such as sloth bears that usually avoid human habitation and human-generated garbage. For example, as a result of human-elephant conflict in Sri Lanka, each year as many as 80 people are killed by elephants and more than 230 elephants are killed by farmers. The Sri Lankan elephant is listed as endangered, and only 2,500–4,000 individuals remain in the wild. As of 2021, in India the conflict is exceedingly acute because of the country's Wildlife Protection Act. Moreover, in Asia, wildlife is considered sacred as a messenger of God, and in some cases, religious and political protections are implemented, which can cause conflicts. For example, in Nara City, Japan, the sacred Japanese sika deer (Cervus nippon), protected for over a millennium, has recently seen a population surge around Nara Park. Genetic analysis reveals mixing between sacred deer from the sanctuary and common lineage deer, posing a risk to the sacred deer's unique genetics. This situation presents a complex challenge where excluding surrounding deer populations is necessary to maintain the genetic uniqueness of a sacred deer population that humans have protected for a long time. Antarctica In Antarctica, the first known instance of death due to human-wildlife conflict occurred in 2003, when a leopard seal dragged a snorkelling British marine biologist underwater, where she drowned. Europe Human–wildlife conflict in Europe includes interactions between people and both carnivores and herbivores. A variety of non-predators such as deer, wild boar, rodents, and starlings have been shown to damage crops and forests. Carnivores like raptors and bears create conflict with humans by eating both farmed and wild fish, while others like lynxes and wolves prey upon livestock. Even less apparent cases of human-wildlife conflict can cause substantial losses; 500,000 deer-vehicle collisions in Europe (and 1–1.5 million in North America) led to 30,000 injuries and 200 deaths. North America Instances of human-wildlife conflict are widespread in North America. In Wisconsin, United States, wolf depredation of livestock is a prominent issue that has resulted in the injury or death of 377 domestic animals over a 24-year span. Similar incidents were reported in the Greater Yellowstone ecosystem, with reports of wolves killing pets and livestock. Expanding urban centers have created increasing human-wildlife conflicts, with interactions between humans and coyotes and between humans and mountain lions documented in cities in Colorado and California, respectively, among others. Big cats are a similar source of conflict in Central Mexico, where reports of livestock depredation are widespread, while interactions between humans and coyotes have been observed in Canadian cities as well.
Oceania On K'gari-Fraser Island in Australia, attacks by wild dingoes on humans (including the well-publicized death of a child) created a human-wildlife crisis that required scientific intervention to manage. In New Zealand, distrust and dislike of introducing predatory birds (such as the New Zealand falcon) to vineyard landscapes led to tensions between people and the surrounding wildlife. In extreme cases large birds have been reported to attack people who approach their nests, with human-magpie conflict in Australia a well-known example. Even conflict in urban environments has been documented, with development increasing the frequency of human-possum interactions in Sydney. South America As with most continents, the depredation of livestock by wild animals is a primary source of human-wildlife conflict in South America. The killings of guanacos by predators in Patagonia, Chile – which possess both economic and cultural value in the region – have created tensions between ranchers and wildlife. South America's only species of bear, the Andean Bear, faces population declines due to similar conflict with livestock owners in countries like Ecuador. Marine ecosystems While many of the causes of human-wildlife conflict are the same between terrestrial and marine ecosystems (depredation, competition, human injury, etc.), as of 2019, ocean environments have been less studied and management approaches often differ.   As with terrestrial conflict, human-wildlife conflict in aquatic environments is diverse and extends across the globe. In Hawaii, for example, an increase in monk seals around the islands has created a conflict between locals who believe that seals “belong” and those who do not. Marine predators such as killer whales and fur seals compete with fisheries for food and resources, while others like great white sharks have a history of injuring humans. In the summer of 2022, a 1,300-pound walrus appeared in Oslo harbor and moved in highly populated areas. Norwegian authorities declared her a threat to human safety as she had moved onto boats, threatened to sink them and she was euthanized. In April 2023, a life sized bronze sculpture of her was installed at Kongen Marina to "create a historic document about the case". Mitigation strategies Mitigation strategies for managing human-wildlife conflict vary significantly depending on location and type of conflict. The preference is always for passive, non-intrusive prevention measures but often active intervention is required to be carried out in conjunction. Regardless of approach, the most successful solutions are those that include local communities in the planning, implementation, and maintenance. Resolving conflicts, therefore, often requires a regional plan of attack with a response tailored to the specific crisis. Still, there are a variety of management techniques that are frequently employed to mitigate conflicts. Examples include: Translocation of problematic animals: Relocating so-called "problem" animals from a site of conflict to a new place is a mitigation technique used in the past, although recent research has shown that this approach can have detrimental impacts on species and is largely ineffective. Translocation can decrease survival rates and lead to extreme dispersal movements for a species, and often "problem" animals will resume conflict behaviors in their new location. 
Erection of fences or other barriers: Building barriers around cattle bomas, creating distinct wildlife corridors, and erecting beehive fences around farms to deter elephants have all demonstrated the ability to be successful and cost-effective strategies for mitigating human-wildlife conflict. Improving community education and perception of animals: Various cultures have myriad views and values associated with the natural world, and how wildlife is perceived can play a role in exacerbating or alleviating human-wildlife conflict. In one Masaai community where young men once obtained status by killing lions, conservationists worked with community leaders to shift perceptions and allow those young men to achieve the same social status by protecting lions instead. Effective land use planning: altering land use practices can help mitigate conflict between humans and crop-raiding animals. For example, in Mozambique, communities started to grow more chili pepper plants after making the discovery that elephants dislike and avoid plants containing capsaicin. This creative and effective method discourages elephants from trampling community farmers' fields as well as protects the species. Compensation: in some cases, governmental systems have been established to offer monetary compensation for losses sustained due to human-wildlife conflict. These systems hope to deter the need for retaliatory killings of animals, and to financially incentivize the co-existing of humans and wildlife. Compensation strategies have been employed in India, Italy, and South Africa, to name a few. The success of compensation in managing human-wildlife conflict has varied greatly due to under-compensation, a lack of local participation, or a failure by the government to provide timely payments. Spatial analyses and mapping conflict hotspots: mapping interactions and creating spatial models has been successful in mitigating human-carnivore conflict and human-elephant conflict, among others. In Kenya, for example, using grid-based geographical information systems in collaboration with simple statistical analyses allowed conservationists to establish an effective predictor for human-elephant conflict. Predator-deterring guard dogs: The use of guard dogs to protect livestock from depredation has been effective in mitigating human-carnivore conflict around the globe. A recent review found that 15.4% of study cases researching human-carnivore conflict used livestock-guarding dogs as a management technique, with animal losses on average 60 times lower than the norm. Managing garbage and artificial feeding to prevent attraction of wildlife: Many wildlife species are attracted to garbage, especially including food wastes, leading to negative interactions with people. Poor disposal of garbage such as hotel waste is rapidly emerging as an important aspect that heightens human-carnivore conflicts in countries such as India. Urgent research to increase knowledge of the impact of easily available garbage is needed, and improving management of garbage in areas where carnivores reside is essential. Managing garbage disposal and artificial feeding of primates can also reduce conflicts and opportunities for disease transmission. One study found that prohibiting tourists from feeding Japanese macaques reduced aggressive interactions between macaques and people. Use of technology: Rapid technology development (especially Information Technology) can play a vital role in the prevention of Human–wildlife conflict. 
Drones and mobile applications can be used to detect the movements of animals and to warn highway and railway authorities, helping to prevent collisions between animals and vehicles or trains. SMS and WhatsApp messaging systems have also been used to alert people to the presence of animals in nearby areas. Early-warning wireless systems have been used successfully in both undulating and flat terrain to mitigate human-elephant conflict in Tamil Nadu, India. Hidden dimensions of the conflict Human-wildlife conflict also has a range of hidden dimensions that are not typically considered when the focus is on visible consequences. These can include health impacts, opportunity costs, and transaction costs. As of 2013, case studies have included work on elephants in Uttarakhand, India, where human-elephant interactions were correlated with increased alcohol consumption by crop guardians, resulting in higher mortality during encounters, and work on gender-related issues in northern India. In addition, research has shown that the fear caused by the presence of predators can aggravate human-wildlife conflict more than the actual damage produced by encounters.
Physical sciences
Earth science basics: General
Earth science
17704946
https://en.wikipedia.org/wiki/Epigenomics
Epigenomics
Epigenomics is the study of the complete set of epigenetic modifications on the genetic material of a cell, known as the epigenome. The field is analogous to genomics and proteomics, which are the study of the genome and proteome of a cell. Epigenetic modifications are reversible modifications on a cell's DNA or histones that affect gene expression without altering the DNA sequence. Epigenomic maintenance is a continuous process and plays an important role in the stability of eukaryotic genomes by taking part in crucial biological mechanisms such as DNA repair. Plant flavones are said to inhibit epigenomic marks that cause cancers. Two of the most characterized epigenetic modifications are DNA methylation and histone modification. Epigenetic modifications play an important role in gene expression and regulation, and are involved in numerous cellular processes such as differentiation, development and tumorigenesis. The study of epigenetics on a global level has been made possible only recently through the adaptation of genomic high-throughput assays. Epigenetics Genomic modifications that alter gene expression, cannot be attributed to modification of the primary DNA sequence, and are heritable mitotically and meiotically are classified as epigenetic modifications. DNA methylation and histone modification are among the best characterized epigenetic processes. DNA methylation The first epigenetic modification to be characterized in depth was DNA methylation. As its name implies, DNA methylation is the process by which a methyl group is added to DNA. The enzymes responsible for catalyzing this reaction are the DNA methyltransferases (DNMTs). While DNA methylation is stable and heritable, it can be reversed by an antagonistic group of enzymes known as DNA de-methylases. In eukaryotes, methylation is most commonly found on the carbon 5 position of cytosine residues (5mC) adjacent to guanine, termed CpG dinucleotides. DNA methylation patterns vary greatly between species and even within the same organism. The usage of methylation among animals is quite different, with vertebrates exhibiting the highest levels of 5mC and invertebrates more moderate levels of 5mC. Some organisms, such as Caenorhabditis elegans, have not been demonstrated to have 5mC or a conventional DNA methyltransferase, which suggests that mechanisms other than DNA methylation are also involved. Within an organism, DNA methylation levels can also vary throughout development and by region. For example, in mouse primordial germ cells, a genome-wide de-methylation event occurs; by the implantation stage, methylation levels return to their previous somatic values. When DNA methylation occurs at promoter regions, the sites of transcription initiation, it has the effect of repressing gene expression. This is in contrast to unmethylated promoter regions, which are associated with actively expressed genes. The mechanism by which DNA methylation represses gene expression is a multi-step process. The distinction between methylated and unmethylated cytosine residues is carried out by specific DNA-binding proteins. Binding of these proteins recruits histone deacetylase (HDAC) enzymes, which initiate chromatin remodeling such that the DNA becomes less accessible to transcriptional machinery, such as RNA polymerase, effectively repressing gene expression. Histone modification In eukaryotes, genomic DNA is coiled into protein-DNA complexes called chromatin.
Histones, which are the most prevalent type of protein found in chromatin, function to condense the DNA; the net positive charge on histones facilitates their bonding with DNA, which is negatively charged. The basic and repeating units of chromatin, nucleosomes, consist of an octamer of histone proteins (H2A, H2B, H3 and H4) and a 146 bp length of DNA wrapped around it. Nucleosomes and the DNA connecting form a 10 nm diameter chromatin fiber, which can be further condensed. Chromatin packaging of DNA varies depending on the cell cycle stage and by local DNA region. The degree to which chromatin is condensed is associated with a certain transcriptional state. Unpackaged or loose chromatin is more transcriptionally active than tightly packaged chromatin because it is more accessible to transcriptional machinery. By remodeling chromatin structure and changing the density of DNA packaging, gene expression can thus be modulated. Chromatin remodeling occurs via post-translational modifications of the N-terminal tails of core histone proteins. The collective set of histone modifications in a given cell is known as the histone code. Many different types of histone modification are known, including: acetylation, methylation, phosphorylation, ubiquitination, SUMOylation, ADP-ribosylation, deamination and proline isomerization; acetylation, methylation, phosphorylation and ubiquitination have been implicated in gene activation whereas methylation, ubiquitination, SUMOylation, deimination and proline isomerization have been implicated in gene repression. Note that several modification types including methylation, phosphorylation and ubiquitination can be associated with different transcriptional states depending on the specific amino acid on the histone being modified. Furthermore, the DNA region where histone modification occurs can also elicit different effects; an example being methylation of the 3rd core histone at lysine residue 36 (H3K36). When H3K36 occurs in the coding sections of a gene, it is associated with gene activation but the opposite is found when it is within the promoter region. Histone modifications regulate gene expression by two mechanisms: by disruption of the contact between nucleosomes and by recruiting chromatin remodeling ATPases. An example of the first mechanism occurs during the acetylation of lysine terminal tail amino acids, which is catalyzed by histone acetyltransferases (HATs). HATs are part of a multiprotein complex that is recruited to chromatin when activators bind to DNA binding sites. Acetylation effectively neutralizes the basic charge on lysine, which was involved in stabilizing chromatin through its affinity for negatively charged DNA. Acetylated histones therefore favor the dissociation of nucleosomes and thus unwinding of chromatin can occur. Under a loose chromatin state, DNA is more accessible to transcriptional machinery and thus expression is activated. The process can be reversed through removal of histone acetyl groups by deacetylases. The second process involves the recruitment of chromatin remodeling complexes by the binding of activator molecules to corresponding enhancer regions. The nucleosome remodeling complexes reposition nucleosomes by several mechanisms, enabling or disabling accessibility of transcriptional machinery to DNA. The SWI/SNF protein complex in yeast is one example of a chromatin remodeling complex that regulates the expression of many genes through chromatin remodeling. 
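The DNA methylation passage above notes that eukaryotic methylation is concentrated at CpG dinucleotides and that the methylation status of promoters matters for expression. As a small illustration of what "CpG-rich" means in practice, the Python sketch below locates CpG sites in a sequence and computes a simple observed/expected CpG ratio; the sequence is a hypothetical promoter fragment, not an example taken from the text.

```python
# Locate CpG dinucleotides in a DNA sequence and compute a simple
# observed/expected CpG ratio, a common way to flag CpG-rich promoter regions.
def cpg_sites(seq):
    seq = seq.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

def cpg_obs_exp(seq):
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    observed = len(cpg_sites(seq))
    expected = (c * g) / n if n else 0.0   # expected CpG count if C and G occurred independently
    return observed / expected if expected else 0.0

promoter = "TTCGCGGCGATCGCGGGCGCTAGCGCGTACGCG"   # hypothetical promoter fragment
print("CpG positions:", cpg_sites(promoter))
print(f"observed/expected CpG ratio: {cpg_obs_exp(promoter):.2f}")
```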
Relation to other genomic fields Epigenomics shares many commonalities with other genomics fields, in both methodology and abstract purpose. Epigenomics seeks to identify and characterize epigenetic modifications on a global level, similar to the study of the complete set of DNA in genomics or the complete set of proteins in a cell in proteomics. The logic behind performing epigenetic analysis on a global level is that inferences can be made about epigenetic modifications which might not otherwise be possible through analysis of specific loci. As in the other genomics fields, epigenomics relies heavily on bioinformatics, which combines the disciplines of biology, mathematics and computer science. However, while epigenetic modifications had been known and studied for decades, it is these advancements in bioinformatics technology that have allowed analyses on a global scale. Many current techniques still draw on older methods, often adapting them to genomic assays as described in the next section. Methods Histone modification assays The cellular processes of transcription, DNA replication and DNA repair involve the interaction between genomic DNA and nuclear proteins. It had been known that certain regions within chromatin were extremely susceptible to DNase I digestion, which cleaves DNA with low sequence specificity. Such hypersensitive sites were thought to be transcriptionally active regions, as evidenced by their association with RNA polymerase and topoisomerases I and II. It is now known that DNase I-hypersensitive regions correspond to regions of chromatin with loose DNA-histone association. Hypersensitive sites most often represent promoter regions, which require DNA to be accessible for DNA-binding transcriptional machinery to function. ChIP-Chip and ChIP-Seq Histone modification was first detected on a genome-wide level through the coupling of chromatin immunoprecipitation (ChIP) technology with DNA microarrays, termed ChIP-chip. However, instead of isolating a DNA-binding transcription factor or enhancer protein through chromatin immunoprecipitation, the proteins of interest are the modified histones themselves. First, histones are cross-linked to DNA in vivo through light chemical treatment (e.g., formaldehyde). The cells are next lysed, allowing the chromatin to be extracted and fragmented, either by sonication or by treatment with a non-specific nuclease (e.g., micrococcal nuclease). Modification-specific antibodies, in turn, are used to immunoprecipitate the DNA-histone complexes. Following immunoprecipitation, the DNA is purified from the histones, amplified via PCR and labeled with a fluorescent tag (e.g., Cy5, Cy3). The final step involves hybridization of the labeled DNA, both immunoprecipitated and non-immunoprecipitated, onto a microarray containing immobilized genomic DNA. Analysis of the relative signal intensity allows the sites of histone modification to be determined. ChIP-chip was used extensively to characterize the global histone modification patterns of yeast. From these studies, inferences on the function of histone modifications were made: that transcriptional activation or repression was associated with certain histone modifications and varied by region. While this method was effective, providing near-full coverage of the yeast epigenome, its use in larger genomes such as that of humans is limited.
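The final ChIP-chip step described above, analysing the relative signal intensity between immunoprecipitated and non-immunoprecipitated DNA, amounts to computing an enrichment ratio per probe. The sketch below shows that calculation on made-up two-channel intensities; the probe coordinates, intensity values and log2 cutoff are illustrative assumptions.

```python
import math

# Hypothetical two-channel intensities per array probe:
# (immunoprecipitated signal, non-immunoprecipitated/input signal)
probes = {
    "chr1:1000-1060": (5200.0, 900.0),
    "chr1:1060-1120": (1100.0, 1000.0),
    "chr1:1120-1180": (4300.0, 850.0),
}

def enriched_probes(intensities, log2_cutoff=1.0):
    """Return probes whose IP/input log2 ratio exceeds the cutoff,
    i.e. candidate sites carrying the immunoprecipitated histone mark."""
    hits = {}
    for probe, (ip, inp) in intensities.items():
        log_ratio = math.log2(ip / inp)
        if log_ratio >= log2_cutoff:
            hits[probe] = round(log_ratio, 2)
    return hits

print(enriched_probes(probes))
# e.g. {'chr1:1000-1060': 2.53, 'chr1:1120-1180': 2.34}
```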
In order to study histone modifications on a truly genome-wide level, other high-throughput methods were coupled with chromatin immunoprecipitation, namely serial analysis of gene expression (ChIP-SAGE), paired-end ditag sequencing (ChIP-PET) and, more recently, next-generation sequencing (ChIP-Seq). ChIP-seq follows the same chromatin immunoprecipitation protocol, but instead of amplifying the purified DNA and hybridizing it to a microarray, the DNA fragments are sequenced directly using next-generation massively parallel sequencing. It has proven to be an effective method for analyzing global histone modification patterns and protein target sites, providing higher resolution than previous methods. DNA methylation assays Techniques for characterizing primary DNA sequences could not be directly applied to methylation assays. For example, when DNA was amplified by PCR or in bacterial cloning techniques, the methylation pattern was not copied and thus the information was lost. The DNA hybridization technique used in DNA assays, in which radioactive probes were used to map and identify DNA sequences, could not be used to distinguish between methylated and non-methylated DNA. Restriction endonuclease based methods Non genome-wide approaches The earliest methylation detection assays used methylation-sensitive restriction endonucleases. Genomic DNA was digested with both methylation-sensitive and methylation-insensitive restriction enzymes recognizing the same restriction site. The idea was that whenever the site was methylated, only the methylation-insensitive enzyme could cleave at that position. By comparing the restriction fragment sizes generated by the methylation-sensitive enzyme to those of the methylation-insensitive enzyme, it was possible to determine the methylation pattern of the region. This analysis step was done by amplifying the restriction fragments via PCR, separating them through gel electrophoresis and analyzing them via Southern blot with probes for the region of interest. This technique was used to compare the DNA methylation patterns in the human fetal and adult hemoglobin gene loci. Different regions of the gene cluster (gamma, delta and beta globin) were known to be expressed at different stages of development. Consistent with a role of DNA methylation in gene repression, regions that were associated with high levels of DNA methylation were not actively expressed. This method was limited, however, and not suitable for studies of the global methylation pattern, or 'methylome'. Even within specific loci it was not fully representative of the true methylation pattern, as only those restriction sites with corresponding methylation-sensitive and -insensitive restriction enzymes could provide useful information. Further complications could arise when incomplete digestion of DNA by restriction enzymes generated false-negative results. Genome-wide approaches DNA methylation profiling on a large scale was first made possible through the Restriction Landmark Genome Scanning (RLGS) technique. Like the locus-specific DNA methylation assay, the technique identified methylated DNA via its digestion with methylation-sensitive enzymes. However, it was the use of two-dimensional gel electrophoresis that allowed methylation to be characterized on a broader scale. It was not until the advent of microarray and next-generation sequencing technology that truly high-resolution, genome-wide DNA methylation profiling became possible.
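The comparison of methylation-sensitive and methylation-insensitive digests described above can be mimicked in silico. The sketch below uses an HpaII/MspI-like isoschizomer pair, which both recognize CCGG but differ in sensitivity to CpG methylation; the sequence and methylated position are hypothetical, and the fragment-size comparison stands in for what the Southern blot would show.

```python
# In-silico sketch of the classic methylation assay: compare a methylation-sensitive
# digest (HpaII-like, blocked by CpG methylation at its CCGG site) with a
# methylation-insensitive digest (MspI-like). Sequence and methylated site are hypothetical.
SITE = "CCGG"

def digest(seq, methylated_sites, methylation_sensitive):
    """Return fragment lengths after cutting at each CCGG occurrence,
    skipping methylated sites when the enzyme is methylation sensitive."""
    cuts = []
    for i in range(len(seq) - len(SITE) + 1):
        if seq[i:i + len(SITE)] == SITE:
            if methylation_sensitive and i in methylated_sites:
                continue  # a sensitive enzyme cannot cleave its methylated site
            cuts.append(i + 1)  # cut after the first C (C^CGG)
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

seq = "ATATCCGGTTTTAACCGGAAATTCCGGAT"   # hypothetical locus with three CCGG sites
methylated = {14}                       # assume the middle CCGG site is methylated

print("insensitive (MspI-like): ", digest(seq, methylated, methylation_sensitive=False))
print("sensitive   (HpaII-like):", digest(seq, methylated, methylation_sensitive=True))
# A fragment present only in the insensitive digest points to a methylated site.
```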
As with RLGS, the endonuclease component is retained in the method but it is coupled to new technologies. One such approach is differential methylation hybridization (DMH), in which one set of genomic DNA is digested with methylation-sensitive restriction enzymes and a parallel set of DNA is not digested. Both sets of DNA are subsequently amplified, each labelled with a fluorescent dye, and used in two-colour array hybridization. The level of DNA methylation at a given locus is determined by the relative intensity ratio of the two dyes. Adaptation of next-generation sequencing to DNA methylation assays provides several advantages over array hybridization. Sequence-based technology provides higher resolution of allele-specific DNA methylation, can be performed on larger genomes, and does not require the creation of DNA microarrays, which need adjustments based on CpG density to function properly. Bisulfite sequencing Bisulfite sequencing relies on the chemical conversion of unmethylated cytosines exclusively, such that they can be identified through standard DNA sequencing techniques. Sodium bisulfite and alkaline treatment does this by converting unmethylated cytosine residues into uracil while leaving methylated cytosine unaltered. Subsequent amplification and sequencing of untreated DNA and sodium bisulfite-treated DNA allows methylated sites to be identified. Bisulfite sequencing, like the traditional restriction-based methods, was historically limited to the methylation patterns of specific gene loci, until whole-genome sequencing technologies became available. However, unlike traditional restriction-based methods, bisulfite sequencing provided resolution at the nucleotide level. Limitations of the bisulfite technique include the incomplete conversion of cytosine to uracil, which is a source of false positives. Further, bisulfite treatment also causes DNA degradation and requires an additional purification step to remove the sodium bisulfite. Next-generation sequencing is well suited to complementing bisulfite sequencing in genome-wide methylation analysis. While this now allows the methylation pattern to be determined at the highest resolution possible, the single-nucleotide level, challenges still remain in the assembly step because of the reduced sequence complexity of bisulfite-treated DNA. Increases in read length seek to address this challenge, allowing whole-genome shotgun bisulfite sequencing (WGBS) to be performed. The WGBS approach, using an Illumina Genome Analyzer platform, has already been implemented in Arabidopsis thaliana. Reduced representation genomic methods based on bisulfite sequencing exist as well, and they are particularly suitable for species with large genome sizes. Chromatin accessibility assays Chromatin accessibility is the measure of how "accessible" or "open" a region of the genome is to transcription or to the binding of transcription factors. Regions that are inaccessible (e.g. because they are bound by nucleosomes) are not actively transcribed by the cell, while open and accessible regions are actively transcribed. Changes in chromatin accessibility are important epigenetic regulatory processes that govern cell- or context-specific expression of genes. Assays such as MNase-seq, DNase-seq, ATAC-seq or FAIRE-seq are routinely used to understand the accessible chromatin landscape of cells. The main feature of all these methods is that they are able to selectively isolate either the DNA sequences that are bound to histones or those that are not.
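The bisulfite chemistry described above can be mimicked in silico to see how it encodes methylation: an unmethylated cytosine reads out as T after conversion and amplification, while a methylated cytosine still reads as C, so comparing a converted read against the untreated reference reveals which cytosines were methylated. The sequence and methylated positions below are hypothetical.

```python
# Toy simulation of bisulfite conversion and per-cytosine methylation calling.
def bisulfite_convert(seq, methylated):
    """Unmethylated C -> U (read as T after PCR); methylated C is left unchanged."""
    return "".join(
        "T" if base == "C" and i not in methylated else base
        for i, base in enumerate(seq)
    )

def call_methylation(reference, converted_read):
    """For every C in the reference, report True if it still reads C (methylated)."""
    return {
        i: converted_read[i] == "C"
        for i, base in enumerate(reference)
        if base == "C"
    }

reference = "ACGTCGATCCGTA"        # hypothetical genomic reference
methylated_truth = {1, 9}          # assume the cytosines at positions 1 and 9 are methylated

read = bisulfite_convert(reference, methylated_truth)
print(read)                                  # ACGTTGATTCGTA
print(call_methylation(reference, read))     # {1: True, 4: False, 8: False, 9: True}
```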
These sequences are then compared to a reference genome, which allows their relative positions to be identified. MNase-seq and DNase-seq both follow the same principle: they employ lytic enzymes that target nucleic acids to cut the DNA strands not bound by nucleosomes or other protein factors, while the bound pieces are sheltered and can be retrieved and analysed. Since active, unbound regions are destroyed, their detection can only be indirect, by sequencing with a next-generation sequencing technique and comparison with a reference. MNase-seq utilises a micrococcal nuclease that produces a single-strand cleavage on the opposite strand of the target sequence. DNase-seq employs DNase I, a non-specific double-strand-cleaving endonuclease. This technique has been used to such an extent that nucleosome-free regions have been labelled DHSs (DNase I hypersensitive sites), and it has been the ENCODE consortium's method of choice for genome-wide chromatin accessibility analyses. The main issue with this technique is that the cleavage distribution can be biased, lowering the quality of the results. FAIRE-seq (formaldehyde-assisted isolation of regulatory elements) requires as its first step the crosslinking of DNA with nucleosomes, followed by DNA shearing by sonication. The free and protein-linked fragments are separated with a traditional phenol-chloroform extraction, since the protein fraction is trapped in the interphase while the unlinked DNA shifts to the aqueous phase and can be analysed with various methods. Sonication produces random breaks and is therefore not subject to any kind of bias, and the greater length of the fragments (200-700 nt) makes this technique suitable for wider regions, although it is unable to resolve single nucleosomes. Unlike the nuclease-based methods, FAIRE-seq allows the direct identification of transcriptionally active sites and a less laborious sample preparation. ATAC-seq is based on the activity of Tn5 transposase. The transposase is used to insert tags into the genome, with higher frequency in regions not covered by protein factors. The tags are then used as adapters for PCR or other analytical tools. Direct detection Polymerase sensitivity in single-molecule real-time sequencing made it possible for scientists to directly detect epigenetic marks such as methylation as the polymerase moves along the DNA molecule being sequenced. Several projects have demonstrated the ability to collect genome-wide epigenetic data in bacteria. Nanopore sequencing is based on changes in electrolytic current signals according to base modifications (e.g. methylation). A polymerase mediates the entrance of ssDNA into the pore: the ion-current variation is modulated by a section of the pore, and the resulting difference is recorded, revealing the position of CpG sites. Discrimination between hydroxymethylation and methylation is possible thanks to solid-state nanopores, even though the current passing through the high-field region of the pore may be slightly influenced by the modification. Amplified DNA is used as a reference, since it no longer carries the copied methylation sites after the PCR process. The Oxford Nanopore Technologies MinION sequencer is a technology with which, using a hidden Markov model, it is possible to distinguish unmethylated cytosine from methylated cytosine even without a chemical treatment to enhance the signal of that modification. The data are commonly recorded in picoamperes over an established time.
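Following the comparison just described, in which PCR-amplified DNA serves as an unmethylated reference, a minimal analysis is to compare the current level observed at each position in native reads against the amplified control and flag large shifts. The positions, current values and threshold below are entirely hypothetical, and real base-callers use far richer models (such as the hidden Markov model mentioned above); this is only a sketch of the underlying comparison.

```python
# Toy comparison of nanopore current levels (pA) between native DNA reads and a
# PCR-amplified control, which has lost the methylation marks. Positions where the
# native signal deviates strongly from the control are candidate modified sites.
from statistics import mean

native_pA = {              # position -> current samples from native reads (hypothetical)
    100: [52.1, 51.8, 52.4],
    101: [47.0, 46.5, 47.3],   # shifted current, suggesting a modified base
    102: [60.2, 59.9, 60.4],
}
control_pA = {             # same positions in the amplified (unmethylated) control
    100: [52.0, 52.3, 51.9],
    101: [50.9, 51.2, 50.8],
    102: [60.1, 60.3, 59.8],
}

def candidate_modified_positions(native, control, min_shift=2.0):
    """Flag positions whose mean native current differs from the control by more than min_shift pA."""
    flagged = {}
    for pos in native:
        shift = abs(mean(native[pos]) - mean(control[pos]))
        if shift > min_shift:
            flagged[pos] = round(shift, 2)
    return flagged

print(candidate_modified_positions(native_pA, control_pA))   # {101: 4.03}
```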
Other tools are Nanopolish and SignalAlign: the former expresses the frequency of methylation in a read, while the latter gives the probability of methylation derived from the sum of all the reads. Single-molecule real-time sequencing (SMRT) is a single-molecule DNA sequencing method. Single-molecule real-time sequencing utilizes a zero-mode waveguide (ZMW). A single DNA polymerase enzyme is bound to the bottom of a ZMW with a single molecule of DNA as a template. Each of the four DNA bases is attached to one of four different fluorescent dyes. When a nucleotide is incorporated by the DNA polymerase, the fluorescent tag is cleaved off and the detector detects the fluorescent signal of the nucleotide incorporation. As the sequencing occurs, the polymerase enzyme kinetics shift when it encounters a region of methylation or any other base modification. When the enzyme encounters chemically modified bases, it will slow down or speed up in a uniquely identifiable way. Fluorescence pulses in SMRT sequencing are characterized not only by their emission spectra but also by their duration and by the interval between successive pulses. These metrics, defined as pulse width and interpulse duration (IPD), add valuable information about DNA polymerase kinetics. Pulse width is a function of all kinetic steps after nucleotide binding and up to fluorophore release, and IPD is determined by the kinetics of nucleotide binding and polymerase translocation. In 2010 a team of scientists demonstrated the use of single-molecule real-time sequencing for direct detection of modified nucleotides in the DNA template, including N6-methyladenine, 5-methylcytosine and 5-hydroxymethylcytosine. These various modifications affect polymerase kinetics differently, allowing discrimination between them. In 2017, another team proposed combining bisulfite conversion with third-generation single-molecule real-time sequencing; this is called single-molecule real-time bisulfite sequencing (SMRT-BS), an accurate targeted CpG methylation analysis method capable of a high degree of multiplexing and long read lengths (1.5 kb) without the need for PCR amplicon sub-cloning. Theoretical modeling approaches The first mathematical models for different nucleosome states affecting gene expression were introduced in the 1980s. Later, this idea was almost forgotten, until experimental evidence indicated a possible role of covalent histone modifications as an epigenetic code. In the following years, high-throughput data indeed uncovered the abundance of epigenetic modifications and their relation to chromatin functioning, which motivated new theoretical models for the appearance, maintenance and changing of these patterns. These models are usually formulated in the framework of one-dimensional lattice approaches.
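To make the last point concrete, here is a minimal one-dimensional lattice simulation in the spirit described: each nucleosome carries a binary state, modified neighbours promote conversion, and random turnover erodes the mark. The lattice size, rates and update rule are illustrative assumptions, not a model taken from any specific publication.

```python
import random

# Minimal 1-D lattice model of a spreading histone mark:
# 1 = modified nucleosome, 0 = unmodified.
def simulate(n_sites=60, steps=20000, spread=0.6, turnover=0.05, seed=1):
    random.seed(seed)
    lattice = [0] * n_sites
    lattice[n_sites // 2] = 1                      # nucleation site for the mark
    for _ in range(steps):
        i = random.randrange(n_sites)
        left = lattice[i - 1] if i > 0 else 0
        right = lattice[i + 1] if i < n_sites - 1 else 0
        if lattice[i] == 0 and (left or right) and random.random() < spread:
            lattice[i] = 1                         # neighbour-stimulated modification
        elif lattice[i] == 1 and random.random() < turnover:
            lattice[i] = 0                         # random loss of the mark
    return lattice

final = simulate()
print("".join("M" if s else "." for s in final))
print("fraction modified:", sum(final) / len(final))
```

Varying the spread rate against the turnover rate gives a feel for how local feedback can maintain or erase a modified domain, which is the kind of question these lattice models are used to explore.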
Biology and health sciences
Genetics
Biology
3172714
https://en.wikipedia.org/wiki/Cornu%20aspersum
Cornu aspersum
Cornu aspersum (syn. Helix aspersa, Cryptomphalus aspersus), known by the common name garden snail, is a species of land snail in the family Helicidae, which includes some of the most familiar land snails. Of all terrestrial molluscs, this species may well be the most widely known. It was classified under the name Helix aspersa for over two centuries, but the prevailing classification now places it in the genus Cornu. The Garden Snail is relished as a food item in some areas, but it is also widely regarded as a pest in gardens and in agriculture, especially in regions where it has been introduced accidentally, and where snails are not usually considered to be a menu item. Description The adult bears a hard, thin calcareous shell in diameter and high, with four or five whorls. The shell is variable in coloring and shade of color, but generally it has a reticulated pattern of dark brown, brownish-golden, or chestnut with yellow stripes, flecks, or streaks (characteristically interrupted brown colour bands). The aperture is large and characteristically oblique, its margin in adults is whitish and reflected. The body is soft and slimy, brownish-grey, and able to be retracted entirely into the shell, which the animal does when inactive or threatened. When injured or badly irritated the snail produces a defensive froth of mucus that might repel some enemies or overwhelm aggressive small ants and the like. It has no operculum; during dry or cold weather it seals the aperture of the shell with a thin membrane of dried mucus; the term for such a membrane is epiphragm. The epiphragm helps the snail retain moisture and protects it from small predators such as some ants. The snail's quiescent periods during heat and drought are known as aestivation; its quiescence during winter is known as overwintering. When overwintering, Cornu aspersum avoids the formation of ice in its tissues by altering the osmotic components of its blood (or haemolymph); this permits it to survive temperatures as low as . During aestivation, the mantle collar has the ability to change its permeability to water. The snail also has an osmoregulatory mechanism that prevents excessive absorption of water during hibernation. These mechanisms allow Cornu aspersum to avoid either fatal desiccation or hydration during months of either kind of quiescence. During times of activity the snail's head and "foot" emerge. The head bears four tentacles; the upper two are larger and bear eye-like light sensors, and the lower two are tactile and olfactory sense organs. The snail extends the tentacles by internal pressure of body fluids, and retracts all four tentacles into the head by invagination when threatened or otherwise retreating into its shell. The mouth is located beneath the tentacles, and contains a chitinous radula with which the snail scrapes and manipulates food particles. The shell of Cornu aspersum is almost always right-coiled, but exceptional left-coiled specimens are also known; see Jeremy (snail) for an example. Taxonomy The accepted name of the species was long considered to be Helix aspersa, a member of the genus Helix, like the Roman snail Helix pomatia. However, in a number of publications since 1990, it has instead been placed in various genera previously considered as subgenera of Helix. One such genus is Cornu, which is appropriate if the species is considered as congeneric with the species previously known as Helix aperta. Then the name would be Cornu aspersum. 
Previously there was debate whether Cornu was a valid generic name (because it was first applied to teratological specimens), but a 2015 ruling has confirmed that it is so. Until this was established, Italian research teams and others used the generic name Cantareus instead. Other workers, including Ukrainian and Russian research teams, who regard H. aspersa and H. aperta as being in different genera, call the former Cryptomphalus aspersus. Analyses based on DNA sequences have now established that C. aspersum and C. aperta share a clade with snails in the genera Otala and Eobania, distinct from the clade containing Helix, so it is no longer tenable to consider them as species of Helix. Many subspecific varieties have been described on the basis of shell characters (e.g.). The most prominent example nowadays is the subspecies Cornu aspersum maximum (Taylor, 1883), originally described as a large shelled form from Algeria (but perhaps including similar forms from elsewhere). In the recent scientific literature the name has been applied both to large Algerian snails and to a large form found in snail farms. Some Algerian forms are indeed genetically quite distant from the usual, most widespread form, but the large form in snail farms is different again. It is also problematic that there was a prior use of the name Helix aspersa maxima unassociated with Algeria. The subspecies maximum is formally considered by some authorities as a junior synonym of Cornu aspersum. Life cycle Like other Pulmonata, individuals are hermaphrodites, producing both male and female gametes. Reproduction is predominantly, and probably exclusively, by outcrossing. During a mating session of several hours, two snails exchange sperm reciprocally. H. aspersa snails stab a calcite spine, known as a love dart, into their partner. The mucus coating the love dart contains a chemical that diverts sperm away from being digested. This is important for sperm competition because individuals mate repeatedly and the donated sperm can remain viable for 4 years. About 10 days after fertilisation, the snail lays a batch of on average 50 spherical, pearly-white eggs into crevices in the topsoil, or sheltered under stones. In a year it may lay approximately six batches of eggs. The size of the egg is 3 mm. After snails hatch from the egg, they mature in one or more years. Maturity takes two years in Southern California, while it takes only 10 months in South Africa. In captivity snails can become sexually mature within 3.5 months of hatching, before they stop growing. The lifespan of snails in the wild is typically 2–3 years. Distribution Cornu aspersum is native to the Mediterranean region and its present range stretches from northwest Africa and Iberia, eastwards to Asia Minor and Egypt,<ref>Commonwealth of Australia. 2002 (April) [http://www.daff.gov.au/__data/assets/pdf_file/0015/24702/fin_egyptian_citrus.pdf Citrus Imports from the Arab Republic of Egypt. A Review Under Existing Import Conditions for Citrus from Israel] . Agriculture, Fisheries and Forestry, Australia. Caption: Gastropods, page 12 and Appendix 2.</ref> and northwards to Britain.Cornu aspersum is a typically anthropochorous species; it has been spread to many geographical regions by humans, either deliberately or accidentally. Nowadays, it is cosmopolitan in temperate zones, and has become naturalised in regions with climates that differ from the mediterranean climate in which it evolved.Pfleger, V. & Chatfield, J. (1983). A guide to snails of Britain and Europe. 
Hamlyn, London. Its passive anthropochory is the likeliest explanation for genetic resemblances between allopatric populations. Its anthropochorous spread may have started as early as the Neolithic Revolution, some 8500 BP. Such anthropochory continues, sometimes resulting in locally catastrophic destruction of habitat or crops. Its increasing non-native distribution includes parts of Europe, such as Bohemia in the Czech Republic since 2008. It is present in Australia, New Zealand, North America, Costa Rica and southern South America. It was introduced to Southern Africa as a food animal by Huguenots in the 18th century, and into California as a food animal in the 1850s; it is now a notorious agricultural pest in both regions, especially in citrus groves and vineyards. Many jurisdictions impose quarantines to prevent the importation of the snail in plant matter. A number of North African endemic forms and subspecies have been described on the basis of shell characters. Cornu aspersum aspersum, in French commonly called the "petit gris", is native to the Mediterranean area and Western Europe, but has been spread widely elsewhere. The name Cornu aspersum maximum has been applied to a large form kept in heliciculture (in French commonly called the "gros gris"), but this is genetically distinct from large Algerian forms earlier given this name. Ecology Cornu aspersum is primarily a herbivore. It feeds on numerous types of fruit trees, vegetable crops, rose bushes, garden flowers, and cereals. It is also an omnivorous scavenger that will feed on rotting plant material and on occasion scavenge animal matter, such as crushed snails and worms. Cornu aspersum can obtain the calcium required to build its shell by consuming soil. In turn it is a food source for many other animals, including small mammals, some bird species, lizards, frogs, centipedes, predatory insects such as glowworms in the family Lampyridae, and predatory terrestrial snails. The species may be of use as an indicator of environmental pollution, because it deposits heavy metals, such as lead, in its shell. Parasites Parasites of Cornu aspersum include a number of nematodes. Metacercariae of various species of the digenean genus Brachylaima have also been reported, and these are potentially harmful to people because the adults can infect humans. However, the snails are capable of trapping cercariae (trematode larvae) in their shells, thus possibly reducing the intensity of infestation by parasites. Behavior The snail secretes a thixotropic adhesive mucus that permits locomotion by rhythmic waves of contraction passing forward within its muscular foot. Starting from the rear, the contraction of the longitudinal muscle fibres above a small area of the film of mucus causes shear that liquefies the mucus, permitting the tip of the tail to move forward. The contracted muscle relaxes while the band of longitudinal fibres immediately anterior to it contracts in its turn, repeating the process, which continues forward until it reaches the head. At that point the whole animal has moved forward by the length of the contraction of one band. However, depending on the length of the animal, several bands of contraction can be in progress simultaneously, so that the resultant speed amounts to the speed imparted by a single wave multiplied by the number of individual waves passing along simultaneously.
A separate type of wave motion, which may be visible from the side, enables the snail to conserve mucus when moving over a dry surface. It lifts its belly skin clear of the ground in arches, contacting only one to two thirds of the area it passes over. With suitable lighting the lifting may be seen from the side as illustrated, and the percentage of mucus saved may be estimated from the area of the wet mucus trail dabs it leaves behind. This type of wave passes backwards at the speed of the snail's forward motion, therefore having zero velocity with respect to the ground. An estimate from 1974 for a top speed of 0.03 mph (1.3 cm/s) has become popular. However, this estimate has been questioned, since in competitions between snails only speeds of 2.4 mm/s have been achieved. Cornu aspersum has a strong homing instinct, readily returning to a regular hibernation site. Human relevance The species is known as an agricultural and garden pest, an edible delicacy, and occasionally a household pet. In French cuisine, it is known as petit gris, and is served for instance in escargot à la Bordelaise. In Lleida, a city in Catalonia (Spain), there is a gastronomic festival called L'Aplec del Caragol dedicated to this type of snail, known locally as bover, which attracts over 200,000 guests every year. From Crete comes a dish called "chochloi mpoumpouristoi" (snails turned upside down), in which the snails are cooked alive in a hot pan on a thick layer of sea salt; other snail dishes include snails with rosemary, etc. The practice of rearing snails for food is known as heliciculture. For purposes of cultivation, the snails are kept in a dark place in a wire cage with dry straw or dry wood. Coppiced wine-grape vines are often used for this purpose. During the rainy period the snails come out of hibernation and release most of their mucus onto the dry wood and straw. The snails are then prepared for cooking. Their texture when cooked is slightly chewy. Approaches to snail pest control There are a variety of snail-control measures that gardeners and farmers use in an attempt to reduce damage to valuable plants. Traditional pesticides are still used, as are many less toxic control options such as concentrated garlic or wormwood solutions. Copper metal is also a snail repellent, and thus a copper band around the trunk of a tree will prevent snails from climbing up and reaching the foliage and fruit. Caffeine has proven surprisingly toxic to snails, to the extent that spent coffee grounds (not decaffeinated) make a safe and immediately effective snail-repellent and even molluscicidal mulch for pot plants, or for wherever else the supply is adequate. The decollate snail (Rumina decollata) will capture and eat garden snails, and because of this it has sometimes been introduced as a biological pest control agent. However, this is not without problems, as the decollate snail is just as likely to attack and devour other species of gastropods that may represent a valuable part of the native fauna of the region. Pharmacological studies Cornu aspersum has gained some popularity as the chief ingredient in skin creams and gels (crema/gel de caracol) sold in the US. These creams are promoted as being suitable for use on wrinkles, scars, dry skin, and acne, to reduce pigmentation, scarring, and wrinkles. Secretions of Cornu aspersum produced under stress have skin-regenerative properties because of antioxidant superoxide dismutase and glutathione S-transferase (GST) activities.
The secretions can stimulate fibroblast proliferation, rearrange the actin cytoskeleton, stimulate extracellular matrix assembly, and regulate metalloproteinase activities for the regeneration of wounded tissue. The mucus of Cornu aspersum is a rich source of substances that may be used to treat biotic human diseases. Nine fractions of compounds with varying molecular weights were purified from the mucus and were tested against gram-positive and gram-negative bacterial strains. Three of the fractions exhibited predominant antibacterial activity against the gram-positive strain. While further confirmatory research is still needed, potential benefits of the snail extracts or secretion filtrates have also been demonstrated in other disease models in mice, including protective effects against ethanol-induced gastric ulcers and against the progression of Alzheimer's-type dementia.
Biology and health sciences
Gastropods
Animals
3172719
https://en.wikipedia.org/wiki/Helix%20pomatia
Helix pomatia
Helix pomatia, known as the Roman snail, Burgundy snail, or escargot, is a species of large, air-breathing stylommatophoran land snail native to Europe. It is characterized by a globular brown shell. It is an edible species which commonly occurs synanthropically throughout its range. Distribution The present distribution of Helix pomatia is considerably affected by the dispersion by human and synanthropic occurrences. The northern limits of their natural distribution run presumably through central Germany and southern Poland with the eastern range limits running through western-most Ukraine and Moldova/Romania to Bulgaria. In the south, the species reaches northern Bulgaria, central Serbia, Bosnia and Hezegovina and Croatia. It occurs in northern Italy southwards to the Po and the Ligurian Apennines. Westerly the native range extends to eastern France. Currently, H. pomatia is distributed up to western Russia (broadly distributed in and around Moskva), to the south of Finland, Sweden and Norway, in Denmark and the Benelux. Scattered introduced populations occur westwards up to northern Spain. In Great Britain, it lives on chalk soils in the south and west of England. In the east, isolated populations live as far as south of Novosibirsk. Introduced populations also exist in the eastern United States and Canada. Description The shell is creamy white to light brownish, often with indistinct brown colour bands although sometimes the banding is well developed and conspicuous. The shell has five to six whorls. The aperture is large. The apertural margin is slightly reflected in adult snails. The umbilicus is narrow and partly covered by the reflected columellar margin. The width of the shell is . The height of the shell is . Ecology Habitat In Central Europe, it occurs in forests and shrubland, as well as in various synanthropic habitats. It lives up to above sea level in the Alps, but usually below . In the south of England, it is restricted to undisturbed grassy or bushy wastelands, usually not in gardens. Lifecycle This snail is hermaphroditic. Reproduction in Central Europe begins at the end of May. Eggs are laid in June and July, in clutches of 40–65 eggs. The size of the egg is 5.5–6.5 mm or 8.6 × 7.2 mm. Juveniles hatch after three to four weeks, and may consume their siblings under unfavourable climate conditions. Maturity is reached after two to five years. The life span is up to 20 years, but they often die sooner due to drying in summer and freezing in winter. Ten-year-old individuals are probably not uncommon in natural populations. The maximum lifespan is 35 years. During estivation or hibernation, H. pomatia is one of the few species that is capable of creating a calcareous epiphragm to seal the opening of its shell. Preference for feeding on the nettle Urtica dioica was found in H. pomatia juveniles in Germany. Conservation This species is listed in IUCN Red List, and in European Red List of Non-marine Molluscs as of least concern. H. pomatia is threatened by continuous habitat destructions and drainage, usually less threatened by commercial collections. Many unsuccessful attempts have been made to establish the species in various parts of England, Scotland, and Ireland; it only survived in natural habitats in southern England, and is threatened by intensive farming and habitat destruction. It is of lower concern in Switzerland and Austria, but many regions restrict commercial collecting. Within its native range, Helix pomatia is mostly a common species. 
It is also considered Least Concern by the IUCN Red List. However, it is listed in Annex V of the EU's Habitats Directive and protected by law in several countries to regulate harvesting from free-living populations. Germany: listed as a specially protected species in annex 1 of the Bundesartenschutzverordnung. Austria: the protection is up to , and the species is protected in some federal states (e.g. Burgenland). Great Britain: protected in England under the Wildlife and Countryside Act 1981, making it illegal to kill, injure, collect or sell these snails. France: collecting is prohibited for individuals with a shell diameter under 3 cm and during the period from 1 April to 30 June. Denmark: commercial collecting is prohibited. Uses The intestinal juice of H. pomatia contains large amounts of aryl, steroid, and glucosinolate sulfatase activities. These sulfatases have broad specificity, so they are commonly used as hydrolyzing agents in analytical procedures such as chromatography, where they are used to prepare samples for analysis. Culinary use and history Roman snails were eaten by the Ancient Romans. Nowadays, these snails are especially popular in French cuisine. In English, the species is called by the French name escargot when used in cooking (escargot simply means 'snail'). Although this species is highly prized as a food, it is difficult to cultivate and is rarely farmed commercially.
Biology and health sciences
Gastropods
Animals
3173663
https://en.wikipedia.org/wiki/Plug%20flow%20reactor%20model
Plug flow reactor model
The plug flow reactor model (PFR, sometimes called continuous tubular reactor, CTR, or piston flow reactors) is a model used to describe chemical reactions in continuous, flowing systems of cylindrical geometry. The PFR model is used to predict the behavior of chemical reactors of such design, so that key reactor variables, such as the dimensions of the reactor, can be estimated. Fluid going through a PFR may be modeled as flowing through the reactor as a series of infinitely thin coherent "plugs", each with a uniform composition, traveling in the axial direction of the reactor, with each plug having a different composition from the ones before and after it. The key assumption is that as a plug flows through a PFR, the fluid is perfectly mixed in the radial direction but not in the axial direction (forwards or backwards). Each plug of differential volume is considered as a separate entity, effectively an infinitesimally small continuous stirred tank reactor, limiting to zero volume. As it flows down the tubular PFR, the residence time () of the plug is a function of its position in the reactor. In the ideal PFR, the residence time distribution is therefore a Dirac delta function with a value equal to . PFR modeling The stationary PFR is governed by ordinary differential equations, the solution for which can be calculated providing that appropriate boundary conditions are known. The PFR model works well for many fluids: liquids, gases, and slurries. Although turbulent flow and axial diffusion cause a degree of mixing in the axial direction in real reactors, the PFR model is appropriate when these effects are sufficiently small that they can be ignored. In the simplest case of a PFR model, several key assumptions must be made in order to simplify the problem, some of which are outlined below. Note that not all of these assumptions are necessary, however the removal of these assumptions does increase the complexity of the problem. The PFR model can be used to model multiple reactions as well as reactions involving changing temperatures, pressures and densities of the flow. Although these complications are ignored in what follows, they are often relevant to industrial processes. Assumptions: Plug flow Steady state Constant density (reasonable for some liquids but a 20% error for polymerizations; valid for gases only if there is no pressure drop, no net change in the number of moles, nor any large temperature change) Single reaction occurring in the bulk of the fluid (homogeneously). A material balance on the differential volume of a fluid element, or plug, on species i of axial length dx between x and x + dx gives: [accumulation] = [in] - [out] + [generation] - [consumption] Accumulation is 0 under steady state; therefore, the above mass balance can be re-written as follows: 1. . where: x is the reactor tube axial position, m dx the differential thickness of fluid plug the index i refers to the species i Fi(x) is the molar flow rate of species i at the position x, mol/s D is the tube diameter, m At is the tube transverse cross sectional area, m2 ν is the stoichiometric coefficient, dimensionless r is the volumetric source/sink term (the reaction rate), mol/m3s. The flow linear velocity, u (m/s) and the concentration of species i, Ci (mol/m3) can be introduced as: and where is the volumetric flow rate. On application of the above to Equation 1, the mass balance on i becomes: 2. . When like terms are cancelled and the limit dx → 0 is applied to Equation 2 the mass balance on species i becomes 3. 
, The temperature dependence of the reaction rate, r, can be estimated using the Arrhenius equation. Generally, as the temperature increases so does the rate at which the reaction occurs. Residence time, , is the average amount of time a discrete quantity of reagent spends inside the tank. Assume: isothermal conditions, or constant temperature (k is constant) single, irreversible reaction (νA = -1) first-order reaction (r = k CA) After integration of Equation 3 using the above assumptions, solving for CA(x) we get an explicit equation for the concentration of species A as a function of position: 4. , where CA0 is the concentration of species A at the inlet to the reactor, appearing from the integration boundary condition. Operation and uses PFRs are used to model the chemical transformation of compounds as they are transported in systems resembling "pipes". The "pipe" can represent a variety of engineered or natural conduits through which liquids or gases flow. (e.g. rivers, pipelines, regions between two mountains, etc.) An ideal plug flow reactor has a fixed residence time: Any fluid (plug) that enters the reactor at time will exit the reactor at time , where is the residence time of the reactor. The residence time distribution function is therefore a Dirac delta function at . A real plug flow reactor has a residence time distribution that is a narrow pulse around the mean residence time distribution. A typical plug flow reactor could be a tube packed with some solid material (frequently a catalyst). Typically these types of reactors are called packed bed reactors or PBR's. Sometimes the tube will be a tube in a shell and tube heat exchanger. When a plug flow model can not be applied, the dispersion model is usually employed. Residence-time distribution The residence-time distribution (RTD) of a reactor is a characteristic of the mixing that occurs in the chemical reactor. There is no axial mixing in a plug-flow reactor, and this omission is reflected in the RTD which is exhibited by this class of reactors. Real plug flow reactors do not satisfy the idealized flow patterns, back mix flow or plug flow deviation from ideal behavior can be due to channeling of fluid through the vessel, recycling of fluid within the vessel or due to the presence of stagnant region or dead zone of fluid in the vessel. Real plug flow reactors with non-ideal behavior have also been modelled. To predict the exact behavior of a vessel as a chemical reactor, RTD or stimulus response technique is used. The tracer technique, the most widely used method for the study of axial dispersion, is usually used in the form of: Pulse input Step input Cyclic input Random input The RTD is determined experimentally by injecting an inert chemical, molecule, or atom, called a tracer, into the reactor at some time t = 0 and then measuring the tracer concentration, C, in the effluent stream as a function of time. The RTD curve of fluid leaving a vessel is called the E-Curve. This curve is normalized in such a way that the area under it is unity: (1) The mean age of the exit stream or mean residence time is: (2) When a tracer is injected into a reactor at a location more than two or three particle diameters downstream from the entrance and measured some distance upstream from the exit, the system can be described by the dispersion model with combinations of open or close boundary conditions. 
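As a quick numerical illustration of the first-order plug-flow result (equation 4 above), the sketch below evaluates CA(x) = CA0·exp(−k·x/u) along the reactor, with the rate constant obtained from an Arrhenius expression as mentioned above; all parameter values are illustrative assumptions, not data from the text. (The dispersion-model treatment introduced in the previous paragraph continues below.)

```python
import math

# Illustrative parameters (assumptions, not values from the text)
k0 = 1.0e6        # Arrhenius pre-exponential factor, 1/s
Ea = 40_000.0     # activation energy, J/mol
R = 8.314         # gas constant, J/(mol*K)
T = 350.0         # temperature, K (isothermal assumption)
u = 0.5           # linear flow velocity, m/s
L = 2.0           # reactor length, m
CA0 = 100.0       # inlet concentration of species A, mol/m^3

k = k0 * math.exp(-Ea / (R * T))   # first-order rate constant from the Arrhenius equation, 1/s

def CA(x):
    """Equation 4: concentration profile for a single, irreversible, first-order reaction."""
    return CA0 * math.exp(-k * x / u)

for x in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"x = {x:.1f} m   CA = {CA(x):7.2f} mol/m^3   conversion = {1 - CA(x) / CA0:.1%}")

print(f"residence time tau = L/u = {L / u:.1f} s")
```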
For such a system, where there is no discontinuity in the type of flow at the point of tracer injection or at the point of tracer measurement, the variance for the open-open system is:

(3) \sigma_\theta^2 = \frac{\sigma^2}{\tau^2} = \frac{2}{Pe} + \frac{8}{Pe^2}

where

(4) Pe = \frac{u L}{D},

which represents the ratio of the rate of transport by convection to the rate of transport by diffusion or dispersion.

L = characteristic length (m)
D = effective dispersion coefficient (m2/s)
u = superficial velocity (m/s) based on empty cross-section

The vessel dispersion number is defined as:

\frac{D}{uL} = \frac{1}{Pe}

The variance of a continuous distribution measured at a finite number of equidistant locations is given by:

(5) \sigma^2 = \frac{\sum t_i^2 C_i}{\sum C_i} - \tau^2

where the mean residence time τ is given by:

(6) \tau = \frac{\sum t_i C_i}{\sum C_i}

(7) \sigma_\theta^2 = \frac{\sigma^2}{\tau^2}

Thus (σθ)2 can be evaluated from the experimental data on C vs. t, and for known values of (σθ)2 the dispersion number can be obtained from eq. (3) as:

(8) \frac{D}{uL} = \frac{\sqrt{1 + 8\sigma_\theta^2} - 1}{8}

Thus the axial dispersion coefficient DL can be estimated (L = packed height). As mentioned before, there are also other boundary conditions that can be applied to the dispersion model, giving different relationships for the dispersion number.

Advantages

From the safety-technical point of view the PFR has the advantages that:
It operates in a steady state
It is well controllable
Large heat transfer areas can be installed

Concerns

The main problems lie in difficult and sometimes critical start-up and shut-down operations.

Applications

Plug flow reactors are used for some of the following applications:
Large-scale production
Fast reactions
Homogeneous or heterogeneous reactions
Continuous production
High-temperature reactions
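To make the tracer relations concrete, the sketch below takes a hypothetical pulse-response series C(t) sampled at equal time intervals, evaluates τ, σ², and σθ² with the discrete forms of Equations 5–7, and inverts the open–open relation (Equation 3) for the vessel dispersion number, as in Equation 8. The data and function name are invented for illustration.

```python
def dispersion_number_from_tracer(times, conc):
    """Estimate the vessel dispersion number D/(u*L) from pulse-tracer data
    sampled at equal time intervals, using the open-open relation
    sigma_theta^2 = 2*Nd + 8*Nd^2 (Nd = D/uL)."""
    s0 = sum(conc)
    s1 = sum(t * c for t, c in zip(times, conc))
    s2 = sum(t * t * c for t, c in zip(times, conc))
    tau = s1 / s0                        # Eq. (6): mean residence time
    sigma2 = s2 / s0 - tau ** 2          # Eq. (5): variance of the RTD
    sigma_theta2 = sigma2 / tau ** 2     # Eq. (7): dimensionless variance
    # Eq. (8): positive root of 8*Nd^2 + 2*Nd - sigma_theta2 = 0
    nd = (-1.0 + (1.0 + 8.0 * sigma_theta2) ** 0.5) / 8.0
    return tau, sigma_theta2, nd

# Hypothetical tracer response at equal time intervals (s, arbitrary concentration units)
t = [0, 5, 10, 15, 20, 25, 30, 35]
c = [0, 1, 5, 8, 10, 8, 6, 4]
tau, var_theta, nd = dispersion_number_from_tracer(t, c)
print(f"tau = {tau:.2f} s, sigma_theta^2 = {var_theta:.3f}, D/uL = {nd:.3f}")
# The axial dispersion coefficient would then follow as D_L = nd * u * L
# for a known superficial velocity u and packed height L.
```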
Physical sciences
Chemical engineering
Chemistry
14861261
https://en.wikipedia.org/wiki/Blurred%20vision
Blurred vision
Blurred vision is an ocular symptom where vision becomes less precise and there is added difficulty in resolving fine details. Temporary blurred vision may involve dry eyes, eye infections, alcohol poisoning, hypoglycemia, or low blood pressure. Other medical conditions may include refractive errors such as myopia, high hypermetropia, and astigmatism, amblyopia, presbyopia, pseudomyopia, diabetes, cataract, pernicious anemia, vitamin B12 deficiency, thiamine deficiency, glaucoma, retinopathy, hypervitaminosis A, migraine, Sjögren's syndrome, floaters, macular degeneration, and can be a sign of stroke or brain tumor. Causes There are many causes of blurred vision: Refractive errors: Uncorrected refractive errors like myopia, high hypermetropia, and astigmatism will cause distance vision blurring. They are one of the leading causes of visual impairment worldwide. Unless there is associated amblyopia, visual blur due to refractive errors can be corrected to normal using corrective lenses or refractive surgeries. Presbyopia due to physiological insufficiency of accommodation (accommodation tends to decrease with age) is the main cause of defective near vision in the elderly. Other causes of defective near vision include accommodative insufficiency, paralysis of accommodation etc. Pseudomyopia due to accommodation anomalies like accommodative excess, accommodative spasm etc. causes distance vision blurring. Alcohol intoxication can cause blurred vision. Use of cycloplegic drugs like atropine or other anticholinergics causes visual blur due to paralysis of accommodation. Cataracts: Cloudiness over the eye's lens causes blurring of vision, halos around lights, and sensitivity to glare. Cataract is also the main cause of blindness worldwide. Glaucoma: Increased intraocular pressure (pressure in the eye) causes progressive optic neuropathy that leads to optic nerve damage, visual field defects and blindness. Glaucoma may also occur without increased intraocular pressure. Some glaucomas (e.g. open angle glaucoma) cause gradual loss of vision and some others (e.g. angle closure glaucoma) cause sudden loss of vision. It is one of the leading causes of blindness worldwide. Diabetes: Poorly controlled blood sugar can lead to temporary swelling of the lens of the eye, resulting in blurred vision. While it resolves if blood sugar control is reestablished, it is believed repeated occurrences promote the formation of cataracts (which are not temporary). Retinopathy: If left untreated, any type of retinopathy (including diabetic retinopathy, hypertensive retinopathy, sickle cell retinopathy, anemic retinopathy, etc.) can damage the retina and lead to visual field defects and blindness. Hypervitaminosis A: Excess consumption of vitamin A can cause blurred vision. Macular degeneration: Macular degeneration causes loss of central vision, blurred vision (especially while reading), metamorphopsia (seeing straight lines as wavy), and colors appearing faded. Macular degeneration is the third main cause of blindness worldwide, and is the main cause of blindness in industrialised countries. Eye infection, inflammation, or injury. Sjögren's syndrome, a chronic autoimmune inflammatory disease that destroys moisture-producing glands, including the lacrimal gland, leading to dry eye and visual blur. Floaters: Tiny particles drifting across the eye. Although often brief and harmless, they may be a sign of retinal detachment. 
Retinal detachment: Symptoms include floaters, flashes of light across the visual field, or a sensation of a shade or curtain hanging on one side of the visual field. Optic neuritis: Inflammation of the optic nerve from infection or multiple sclerosis may cause blurring of vision. There may be pain while moving the eye or touching it through the eyelid. Stroke or transient ischemic attack Brain tumor Toxocara: A parasitic roundworm that can cause blurred vision. Bleeding into the eye Temporal arteritis: Inflammation of an artery in the head that supplies blood to the optic nerve. Migraine headaches: Spots of light, halos, or zigzag patterns are common symptoms prior to the start of the headache. A retinal migraine involves visual symptoms alone, without a headache. Reduced blinking: Lid closure that occurs too infrequently often leads to irregularities of the tear film due to prolonged evaporation, thus resulting in disruptions in visual perception. Carbon monoxide poisoning: Reduced oxygen delivery can affect many areas of the body including vision. Other symptoms caused by CO include vertigo, hallucinations and sensitivity to light.
Biology and health sciences
Symptoms and signs
Health
9028799
https://en.wikipedia.org/wiki/Bacteria
Bacteria
Bacteria (singular: bacterium) are ubiquitous, mostly free-living organisms often consisting of one biological cell. They constitute a large domain of prokaryotic microorganisms. Typically a few micrometres in length, bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit the air, soil, water, acidic hot springs, radioactive waste, and the deep biosphere of Earth's crust. Bacteria play a vital role in many stages of the nutrient cycle by recycling nutrients and the fixation of nitrogen from the atmosphere. The nutrient cycle includes the decomposition of dead bodies; bacteria are responsible for the putrefaction stage in this process. In the biological communities surrounding hydrothermal vents and cold seeps, extremophile bacteria provide the nutrients needed to sustain life by converting dissolved compounds, such as hydrogen sulphide and methane, to energy. Bacteria also live in mutualistic, commensal and parasitic relationships with plants and animals. Most bacteria have not been characterised and there are many species that cannot be grown in the laboratory. The study of bacteria is known as bacteriology, a branch of microbiology. Like all animals, humans carry vast numbers (approximately 10¹³ to 10¹⁴) of bacteria. Most are in the gut, though there are many on the skin. Most of the bacteria in and on the body are harmless or rendered so by the protective effects of the immune system, and many are beneficial, particularly the ones in the gut. However, several species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax, leprosy, tuberculosis, tetanus and bubonic plague. The most common fatal bacterial diseases are respiratory infections. Antibiotics are used to treat bacterial infections and are also used in farming, making antibiotic resistance a growing problem. Bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese and yogurt through fermentation, the recovery of gold, palladium, copper and other metals in the mining sector (biomining, bioleaching), as well as in biotechnology, and the manufacture of antibiotics and other chemicals. Once regarded as plants constituting the class Schizomycetes ("fission fungi"), bacteria are now classified as prokaryotes. Unlike cells of animals and other eukaryotes, bacterial cells do not contain a nucleus and rarely harbour membrane-bound organelles. Although the term bacteria traditionally included all prokaryotes, the scientific classification changed after the discovery in the 1990s that prokaryotes consist of two very different groups of organisms that evolved from an ancient common ancestor. These evolutionary domains are called Bacteria and Archaea. Etymology The word bacteria is the plural of the Neo-Latin bacterium, which is the romanisation of the Ancient Greek βακτήριον (baktērion), the diminutive of βακτηρία (baktēria), meaning "staff, cane", because the first ones to be discovered were rod-shaped. Origin and early evolution The ancestors of bacteria were unicellular microorganisms that were the first forms of life to appear on Earth, about 4 billion years ago. For about 3 billion years, most organisms were microscopic, and bacteria and archaea were the dominant forms of life. Although bacterial fossils exist, such as stromatolites, their lack of distinctive morphology prevents them from being used to examine the history of bacterial evolution, or to date the time of origin of a particular bacterial species. 
However, gene sequences can be used to reconstruct the bacterial phylogeny, and these studies indicate that bacteria diverged first from the archaeal/eukaryotic lineage. The most recent common ancestor (MRCA) of bacteria and archaea was probably a hyperthermophile that lived about 2.5 billion–3.2 billion years ago. The earliest life on land may have been bacteria some 3.22 billion years ago. Bacteria were also involved in the second great evolutionary divergence, that of the archaea and eukaryotes. Here, eukaryotes resulted from the entering of ancient bacteria into endosymbiotic associations with the ancestors of eukaryotic cells, which were themselves possibly related to the Archaea. This involved the engulfment by proto-eukaryotic cells of alphaproteobacterial symbionts to form either mitochondria or hydrogenosomes, which are still found in all known Eukarya (sometimes in highly reduced form, e.g. in ancient "amitochondrial" protozoa). Later, some eukaryotes that already contained mitochondria also engulfed cyanobacteria-like organisms, leading to the formation of chloroplasts in algae and plants. This is known as primary endosymbiosis. Habitat Bacteria are ubiquitous, living in every possible habitat on the planet including soil, underwater, deep in Earth's crust and even such extreme environments as acidic hot springs and radioactive waste. There are thought to be approximately 2×10³⁰ bacteria on Earth, forming a biomass that is only exceeded by plants. They are abundant in lakes and oceans, in Arctic ice, and geothermal springs where they provide the nutrients needed to sustain life by converting dissolved compounds, such as hydrogen sulphide and methane, to energy. They live on and in plants and animals. Most do not cause diseases, are beneficial to their environments, and are essential for life. The soil is a rich source of bacteria and a few grams contain around a thousand million of them. They are all essential to soil ecology, breaking down toxic waste and recycling nutrients. They are even found in the atmosphere and one cubic metre of air holds around one hundred million bacterial cells. The oceans and seas harbour around 3×10²⁶ bacteria which provide up to 50% of the oxygen humans breathe. Only around 2% of bacterial species have been fully studied. Morphology Size. Bacteria display a wide diversity of shapes and sizes. Bacterial cells are about one-tenth the size of eukaryotic cells and are typically 0.5–5.0 micrometres in length. However, a few species are visible to the unaided eye—for example, Thiomargarita namibiensis is up to half a millimetre long, Epulopiscium fishelsoni reaches 0.7 mm, and Thiomargarita magnifica can even reach 2 cm in length, which is 50 times larger than other known bacteria. Among the smallest bacteria are members of the genus Mycoplasma, which measure only 0.3 micrometres, as small as the largest viruses. Some bacteria may be even smaller, but these ultramicrobacteria are not well-studied. Shape. Most bacterial species are either spherical, called cocci (singular coccus, from Greek kókkos, grain, seed), or rod-shaped, called bacilli (sing. bacillus, from Latin baculus, stick). Some bacteria, called vibrio, are shaped like slightly curved rods or comma-shaped; others can be spiral-shaped, called spirilla, or tightly coiled, called spirochaetes. A small number of other unusual shapes have been described, such as star-shaped bacteria. 
This wide variety of shapes is determined by the bacterial cell wall and cytoskeleton and is important because it can influence the ability of bacteria to acquire nutrients, attach to surfaces, swim through liquids and escape predators. Multicellularity. Most bacterial species exist as single cells; others associate in characteristic patterns: Neisseria forms diploids (pairs), streptococci form chains, and staphylococci group together in "bunch of grapes" clusters. Bacteria can also group to form larger multicellular structures, such as the elongated filaments of Actinomycetota species, the aggregates of Myxobacteria species, and the complex hyphae of Streptomyces species. These multicellular structures are often only seen in certain conditions. For example, when starved of amino acids, myxobacteria detect surrounding cells in a process known as quorum sensing, migrate towards each other, and aggregate to form fruiting bodies up to 500 micrometres long and containing approximately 100,000 bacterial cells. In these fruiting bodies, the bacteria perform separate tasks; for example, about one in ten cells migrate to the top of a fruiting body and differentiate into a specialised dormant state called a myxospore, which is more resistant to drying and other adverse environmental conditions. Biofilms. Bacteria often attach to surfaces and form dense aggregations called biofilms and larger formations known as microbial mats. These biofilms and mats can range from a few micrometres in thickness to up to half a metre in depth, and may contain multiple species of bacteria, protists and archaea. Bacteria living in biofilms display a complex arrangement of cells and extracellular components, forming secondary structures, such as microcolonies, through which there are networks of channels to enable better diffusion of nutrients. In natural environments, such as soil or the surfaces of plants, the majority of bacteria are bound to surfaces in biofilms. Biofilms are also important in medicine, as these structures are often present during chronic bacterial infections or in infections of implanted medical devices, and bacteria protected within biofilms are much harder to kill than individual isolated bacteria. Cellular structure Intracellular structures The bacterial cell is surrounded by a cell membrane, which is made primarily of phospholipids. This membrane encloses the contents of the cell and acts as a barrier to hold nutrients, proteins and other essential components of the cytoplasm within the cell. Unlike eukaryotic cells, bacteria usually lack large membrane-bound structures in their cytoplasm such as a nucleus, mitochondria, chloroplasts and the other organelles present in eukaryotic cells. However, some bacteria have protein-bound organelles in the cytoplasm which compartmentalise aspects of bacterial metabolism, such as the carboxysome. Additionally, bacteria have a multi-component cytoskeleton to control the localisation of proteins and nucleic acids within the cell, and to manage the process of cell division. Many important biochemical reactions, such as energy generation, occur due to concentration gradients across membranes, creating a potential difference analogous to a battery. The general lack of internal membranes in bacteria means these reactions, such as electron transport, occur across the cell membrane between the cytoplasm and the outside of the cell or periplasm. 
However, in many photosynthetic bacteria, the plasma membrane is highly folded and fills most of the cell with layers of light-gathering membrane. These light-gathering complexes may even form lipid-enclosed structures called chlorosomes in green sulfur bacteria. Bacteria do not have a membrane-bound nucleus, and their genetic material is typically a single circular bacterial chromosome of DNA located in the cytoplasm in an irregularly shaped body called the nucleoid. The nucleoid contains the chromosome with its associated proteins and RNA. Like all other organisms, bacteria contain ribosomes for the production of proteins, but the structure of the bacterial ribosome is different from that of eukaryotes and archaea. Some bacteria produce intracellular nutrient storage granules, such as glycogen, polyphosphate, sulfur or polyhydroxyalkanoates. Bacteria, such as the photosynthetic cyanobacteria, produce internal gas vacuoles, which they use to regulate their buoyancy, allowing them to move up or down into water layers with different light intensities and nutrient levels. Extracellular structures Around the outside of the cell membrane is the cell wall. Bacterial cell walls are made of peptidoglycan (also called murein), which is made from polysaccharide chains cross-linked by peptides containing D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi, which are made of cellulose and chitin, respectively. The cell wall of bacteria is also distinct from that of archaea, which do not contain peptidoglycan. The cell wall is essential to the survival of many bacteria, and the antibiotic penicillin (produced by a fungus called Penicillium) is able to kill bacteria by inhibiting a step in the synthesis of peptidoglycan. There are, broadly speaking, two different types of cell wall in bacteria, which classify bacteria into Gram-positive bacteria and Gram-negative bacteria. The names originate from the reaction of cells to the Gram stain, a long-standing test for the classification of bacterial species. Gram-positive bacteria possess a thick cell wall containing many layers of peptidoglycan and teichoic acids. In contrast, Gram-negative bacteria have a relatively thin cell wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins. Most bacteria have the Gram-negative cell wall, and only members of the Bacillota group and actinomycetota (previously known as the low G+C and high G+C Gram-positive bacteria, respectively) have the alternative Gram-positive arrangement. These differences in structure can produce differences in antibiotic susceptibility; for instance, vancomycin can kill only Gram-positive bacteria and is ineffective against Gram-negative pathogens, such as Haemophilus influenzae or Pseudomonas aeruginosa. Some bacteria have cell wall structures that are neither classically Gram-positive nor Gram-negative. This includes clinically important bacteria such as mycobacteria which have a thick peptidoglycan cell wall like a Gram-positive bacterium, but also a second outer layer of lipids. In many bacteria, an S-layer of rigidly arrayed protein molecules covers the outside of the cell. This layer provides chemical and physical protection for the cell surface and can act as a macromolecular diffusion barrier. S-layers have diverse functions and are known to act as virulence factors in Campylobacter species and contain surface enzymes in Bacillus stearothermophilus. 
Flagella are rigid protein structures, about 20 nanometres in diameter and up to 20 micrometres in length, that are used for motility. Flagella are driven by the energy released by the transfer of ions down an electrochemical gradient across the cell membrane. Fimbriae (sometimes called "attachment pili") are fine filaments of protein, usually 2–10 nanometres in diameter and up to several micrometres in length. They are distributed over the surface of the cell, and resemble fine hairs when seen under the electron microscope. Fimbriae are believed to be involved in attachment to solid surfaces or to other cells, and are essential for the virulence of some bacterial pathogens. Pili (sing. pilus) are cellular appendages, slightly larger than fimbriae, that can transfer genetic material between bacterial cells in a process called conjugation where they are called conjugation pili or sex pili (see bacterial genetics, below). They can also generate movement where they are called type IV pili. Glycocalyx is produced by many bacteria to surround their cells, and varies in structural complexity: ranging from a disorganised slime layer of extracellular polymeric substances to a highly structured capsule. These structures can protect cells from engulfment by eukaryotic cells such as macrophages (part of the human immune system). They can also act as antigens and be involved in cell recognition, as well as aiding attachment to surfaces and the formation of biofilms. The assembly of these extracellular structures is dependent on bacterial secretion systems. These transfer proteins from the cytoplasm into the periplasm or into the environment around the cell. Many types of secretion systems are known and these structures are often essential for the virulence of pathogens, so are intensively studied. Endospores Some genera of Gram-positive bacteria, such as Bacillus, Clostridium, Sporohalobacter, Anaerobacter, and Heliobacterium, can form highly resistant, dormant structures called endospores. Endospores develop within the cytoplasm of the cell; generally, a single endospore develops in each cell. Each endospore contains a core of DNA and ribosomes surrounded by a cortex layer and protected by a multilayer rigid coat composed of peptidoglycan and a variety of proteins. Endospores show no detectable metabolism and can survive extreme physical and chemical stresses, such as high levels of UV light, gamma radiation, detergents, disinfectants, heat, freezing, pressure, and desiccation. In this dormant state, these organisms may remain viable for millions of years. Endospores even allow bacteria to survive exposure to the vacuum and radiation of outer space, leading to the possibility that bacteria could be distributed throughout the universe by space dust, meteoroids, asteroids, comets, planetoids, or directed panspermia. Endospore-forming bacteria can cause disease; for example, anthrax can be contracted by the inhalation of Bacillus anthracis endospores, and contamination of deep puncture wounds with Clostridium tetani endospores causes tetanus, which, like botulism, is caused by a toxin released by the bacteria that grow from the spores. Clostridioides difficile infection, a common problem in healthcare settings, is caused by spore-forming bacteria. Metabolism Bacteria exhibit an extremely wide variety of metabolic types. 
The distribution of metabolic traits within a group of bacteria has traditionally been used to define their taxonomy, but these traits often do not correspond with modern genetic classifications. Bacterial metabolism is classified into nutritional groups on the basis of three major criteria: the source of energy, the electron donors used, and the source of carbon used for growth. Phototrophic bacteria derive energy from light using photosynthesis, while chemotrophic bacteria obtain energy by breaking down chemical compounds through oxidation, driving metabolism by transferring electrons from a given electron donor to a terminal electron acceptor in a redox reaction. Chemotrophs are further divided by the types of compounds they use to transfer electrons. Bacteria that derive electrons from inorganic compounds such as hydrogen, carbon monoxide, or ammonia are called lithotrophs, while those that use organic compounds are called organotrophs. More specifically, aerobic organisms use oxygen as the terminal electron acceptor, while anaerobic organisms use other compounds such as nitrate, sulfate, or carbon dioxide. Many bacteria, called heterotrophs, derive their carbon from other organic carbon sources. Others, such as cyanobacteria and some purple bacteria, are autotrophic, meaning they obtain cellular carbon by fixing carbon dioxide. In unusual circumstances, the gas methane can be used by methanotrophic bacteria as both a source of electrons and a substrate for carbon anabolism. In many ways, bacterial metabolism provides traits that are useful for ecological stability and for human society. For example, diazotrophs have the ability to fix nitrogen gas using the enzyme nitrogenase. This trait can be found in bacteria of most metabolic types listed above. The anaerobic use of nitrate, sulfate, and carbon dioxide as terminal electron acceptors leads to the ecologically important processes of denitrification, sulfate reduction, and acetogenesis, respectively. Bacterial metabolic processes are important drivers in biological responses to pollution; for example, sulfate-reducing bacteria are largely responsible for the production of the highly toxic forms of mercury (methyl- and dimethylmercury) in the environment. Nonrespiratory anaerobes use fermentation to generate energy and reducing power, secreting metabolic by-products (such as ethanol in brewing) as waste. Facultative anaerobes can switch between fermentation and different terminal electron acceptors depending on the environmental conditions in which they find themselves. Reproduction and growth Unlike in multicellular organisms, increases in cell size (cell growth) and reproduction by cell division are tightly linked in unicellular organisms. Bacteria grow to a fixed size and then reproduce through binary fission, a form of asexual reproduction. Under optimal conditions, bacteria can grow and divide extremely rapidly, and some bacterial populations can double as quickly as every 17 minutes. In cell division, two identical clone daughter cells are produced. Some bacteria, while still reproducing asexually, form more complex reproductive structures that help disperse the newly formed daughter cells. Examples include fruiting body formation by myxobacteria and aerial hyphae formation by Streptomyces species, or budding. Budding involves a cell forming a protrusion that breaks away and produces a daughter cell. In the laboratory, bacteria are usually grown using solid or liquid media. Solid growth media, such as agar plates, are used to isolate pure cultures of a bacterial strain. 
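The three criteria described above for classifying bacterial metabolism (energy source, electron donor, and carbon source) combine into the standard trophic names using the photo-/chemo-, litho-/organo-, and auto-/hetero- prefixes with the suffix -troph. The short Python sketch below only illustrates that naming convention; the function and its inputs are invented for the example and are not from any library.

```python
def trophic_term(energy, electron_donor, carbon):
    """Compose a trophic label from the three criteria in the text.
    energy: 'light' or 'chemical'; electron_donor: 'inorganic' or 'organic';
    carbon: 'CO2' or 'organic'."""
    parts = [
        {"light": "photo", "chemical": "chemo"}[energy],
        {"inorganic": "litho", "organic": "organo"}[electron_donor],
        {"CO2": "auto", "organic": "hetero"}[carbon],
    ]
    return "".join(parts) + "troph"

# Cyanobacteria fix CO2 using light energy and an inorganic electron donor (water):
print(trophic_term("light", "inorganic", "CO2"))       # photolithoautotroph
# A bacterium growing on glucose as energy, electron, and carbon source:
print(trophic_term("chemical", "organic", "organic"))  # chemoorganoheterotroph
```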
However, liquid growth media are used when the measurement of growth or large volumes of cells are required. Growth in stirred liquid media occurs as an even cell suspension, making the cultures easy to divide and transfer, although isolating single bacteria from liquid media is difficult. The use of selective media (media with specific nutrients added or deficient, or with antibiotics added) can help identify specific organisms. Most laboratory techniques for growing bacteria use high levels of nutrients to produce large amounts of cells cheaply and quickly. However, in natural environments, nutrients are limited, meaning that bacteria cannot continue to reproduce indefinitely. This nutrient limitation has led to the evolution of different growth strategies (see r/K selection theory). Some organisms can grow extremely rapidly when nutrients become available, such as the formation of algal and cyanobacterial blooms that often occur in lakes during the summer. Other organisms have adaptations to harsh environments, such as the production of multiple antibiotics by Streptomyces that inhibit the growth of competing microorganisms. In nature, many organisms live in communities (e.g., biofilms) that may allow for increased supply of nutrients and protection from environmental stresses. These relationships can be essential for growth of a particular organism or group of organisms (syntrophy). Bacterial growth follows four phases. When a population of bacteria first enters a high-nutrient environment that allows growth, the cells need to adapt to their new environment. The first phase of growth is the lag phase, a period of slow growth when the cells are adapting to the high-nutrient environment and preparing for fast growth. The lag phase has high biosynthesis rates, as proteins necessary for rapid growth are produced. The second phase of growth is the logarithmic phase, also known as the exponential phase. The log phase is marked by rapid exponential growth. The rate at which cells grow during this phase is known as the growth rate (k), and the time it takes the cells to double is known as the generation time (g). During log phase, nutrients are metabolised at maximum speed until one of the nutrients is depleted and starts limiting growth. The third phase of growth is the stationary phase and is caused by depleted nutrients. The cells reduce their metabolic activity and consume non-essential cellular proteins. The stationary phase is a transition from rapid growth to a stress response state and there is increased expression of genes involved in DNA repair, antioxidant metabolism and nutrient transport. The final phase is the death phase where the bacteria run out of nutrients and die. Genetics Most bacteria have a single circular chromosome that can range in size from only 160,000 base pairs in the endosymbiotic bacterium Carsonella ruddii, to 12,200,000 base pairs (12.2 Mbp) in the soil-dwelling bacterium Sorangium cellulosum. There are many exceptions to this; for example, some Streptomyces and Borrelia species contain a single linear chromosome, while some Vibrio species contain more than one chromosome. Some bacteria contain plasmids, small extra-chromosomal molecules of DNA that may contain genes for various useful functions such as antibiotic resistance, metabolic capabilities, or various virulence factors. Bacterial genomes usually encode a few hundred to a few thousand genes. The genes in bacterial genomes are usually a single continuous stretch of DNA. 
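Returning to the growth kinetics defined for the log phase above, the growth rate k and the generation time g are related by k = ln 2 / g, and unrestricted exponential growth follows N(t) = N0 · 2^(t/g). The sketch below simply evaluates that relation; the starting count is arbitrary, and the 17-minute doubling time is the fast-growth figure quoted earlier, used here only as an example.

```python
import math

def population(n0, generation_time_min, minutes):
    """Exponential (log-phase) growth: N(t) = N0 * 2**(t / g)."""
    return n0 * 2 ** (minutes / generation_time_min)

g = 17.0                      # minutes per doubling (fast-growing example from the text)
k = math.log(2) / g           # specific growth rate, 1/min
print(f"growth rate k = {k:.4f} per minute")
for t in (0, 60, 120, 360):   # elapsed time in minutes
    print(f"after {t:3d} min: {population(1, g, t):,.0f} cells from a single cell")
```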
Although several different types of introns do exist in bacteria, these are much rarer than in eukaryotes. Bacteria, as asexual organisms, inherit an identical copy of the parent's genome and are clonal. However, all bacteria can evolve by selection on changes to their genetic material DNA caused by genetic recombination or mutations. Mutations arise from errors made during the replication of DNA or from exposure to mutagens. Mutation rates vary widely among different species of bacteria and even among different clones of a single species of bacteria. Genetic changes in bacterial genomes emerge from either random mutation during replication or "stress-directed mutation", where genes involved in a particular growth-limiting process have an increased mutation rate. Some bacteria transfer genetic material between cells. This can occur in three main ways. First, bacteria can take up exogenous DNA from their environment in a process called transformation. Many bacteria can naturally take up DNA from the environment, while others must be chemically altered in order to induce them to take up DNA. The development of competence in nature is usually associated with stressful environmental conditions and seems to be an adaptation for facilitating repair of DNA damage in recipient cells. Second, bacteriophages can integrate into the bacterial chromosome, introducing foreign DNA in a process known as transduction. Many types of bacteriophage exist; some infect and lyse their host bacteria, while others insert into the bacterial chromosome. Bacteria resist phage infection through restriction modification systems that degrade foreign DNA and a system that uses CRISPR sequences to retain fragments of the genomes of phage that the bacteria have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Third, bacteria can transfer genetic material through direct cell contact via conjugation. In ordinary circumstances, transduction, conjugation, and transformation involve transfer of DNA between individual bacteria of the same species, but occasionally transfer may occur between individuals of different bacterial species, and this may have significant consequences, such as the transfer of antibiotic resistance. In such cases, gene acquisition from other bacteria or the environment is called horizontal gene transfer and may be common under natural conditions. Behaviour Movement Many bacteria are motile (able to move themselves) and do so using a variety of mechanisms. The best studied of these are flagella, long filaments that are turned by a motor at the base to generate propeller-like movement. The bacterial flagellum is made of about 20 proteins, with approximately another 30 proteins required for its regulation and assembly. The flagellum is a rotating structure driven by a reversible motor at the base that uses the electrochemical gradient across the membrane for power. Bacteria can use flagella in different ways to generate different kinds of movement. Many bacteria (such as E. coli) have two distinct modes of movement: forward movement (swimming) and tumbling. The tumbling allows them to reorient and makes their movement a three-dimensional random walk. 
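The run-and-tumble pattern described above can be illustrated with a toy simulation: straight runs at constant speed, interrupted by tumbles that choose a new random direction, produce the three-dimensional random walk mentioned in the text. All parameters (number of runs, run duration, swimming speed) are invented for the example and are not measurements.

```python
import math
import random

def run_and_tumble(n_runs=200, run_time=1.0, speed=20.0):
    """Toy 3-D run-and-tumble walk: each run heads in a uniformly random
    direction for run_time seconds at the given speed (micrometres/second),
    then the cell 'tumbles' and picks a new direction."""
    x = y = z = 0.0
    for _ in range(n_runs):
        # pick a uniformly random direction on the unit sphere
        theta = math.acos(random.uniform(-1.0, 1.0))
        phi = random.uniform(0.0, 2.0 * math.pi)
        step = speed * run_time
        x += step * math.sin(theta) * math.cos(phi)
        y += step * math.sin(theta) * math.sin(phi)
        z += step * math.cos(theta)
    return (x ** 2 + y ** 2 + z ** 2) ** 0.5  # net displacement, micrometres

random.seed(1)
distances = [run_and_tumble() for _ in range(100)]
print(f"mean net displacement after 200 runs: {sum(distances) / len(distances):.0f} um")
# Despite swimming 200 * 20 um = 4000 um of path, the net displacement is far
# smaller, which is the signature of a random walk.
```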
Bacterial species differ in the number and arrangement of flagella on their surface; some have a single flagellum (monotrichous), a flagellum at each end (amphitrichous), clusters of flagella at the poles of the cell (lophotrichous), while others have flagella distributed over the entire surface of the cell (peritrichous). The flagella of a group of bacteria, the spirochaetes, are found between two membranes in the periplasmic space. They have a distinctive helical body that twists about as it moves. Two other types of bacterial motion are called twitching motility that relies on a structure called the type IV pilus, and gliding motility, that uses other mechanisms. In twitching motility, the rod-like pilus extends out from the cell, binds some substrate, and then retracts, pulling the cell forward. Motile bacteria are attracted or repelled by certain stimuli in behaviours called taxes: these include chemotaxis, phototaxis, energy taxis, and magnetotaxis. In one peculiar group, the myxobacteria, individual bacteria move together to form waves of cells that then differentiate to form fruiting bodies containing spores. The myxobacteria move only when on solid surfaces, unlike E. coli, which is motile in liquid or solid media. Several Listeria and Shigella species move inside host cells by usurping the cytoskeleton, which is normally used to move organelles inside the cell. By promoting actin polymerisation at one pole of their cells, they can form a kind of tail that pushes them through the host cell's cytoplasm. Communication A few bacteria have chemical systems that generate light. This bioluminescence often occurs in bacteria that live in association with fish, and the light probably serves to attract fish or other large animals. Bacteria often function as multicellular aggregates known as biofilms, exchanging a variety of molecular signals for intercell communication and engaging in coordinated multicellular behaviour. The communal benefits of multicellular cooperation include a cellular division of labour, accessing resources that cannot effectively be used by single cells, collectively defending against antagonists, and optimising population survival by differentiating into distinct cell types. For example, bacteria in biofilms can have more than five hundred times increased resistance to antibacterial agents than individual "planktonic" bacteria of the same species. One type of intercellular communication by a molecular signal is called quorum sensing, which serves the purpose of determining whether the local population density is sufficient to support investment in processes that are only successful if large numbers of similar organisms behave similarly, such as excreting digestive enzymes or emitting light. Quorum sensing enables bacteria to coordinate gene expression and to produce, release, and detect autoinducers or pheromones that accumulate with the growth in cell population. Classification and identification Classification seeks to describe the diversity of bacterial species by naming and grouping organisms based on similarities. Bacteria can be classified on the basis of cell structure, cellular metabolism or on differences in cell components, such as DNA, fatty acids, pigments, antigens and quinones. While these schemes allowed the identification and classification of bacterial strains, it was unclear whether these differences represented variation between distinct species or between strains of the same species. 
This uncertainty was due to the lack of distinctive structures in most bacteria, as well as lateral gene transfer between unrelated species. Due to lateral gene transfer, some closely related bacteria can have very different morphologies and metabolisms. To overcome this uncertainty, modern bacterial classification emphasises molecular systematics, using genetic techniques such as guanine cytosine ratio determination, genome-genome hybridisation, as well as sequencing genes that have not undergone extensive lateral gene transfer, such as the rRNA gene. Classification of bacteria is determined by publication in the International Journal of Systematic Bacteriology, and Bergey's Manual of Systematic Bacteriology. The International Committee on Systematic Bacteriology (ICSB) maintains international rules for the naming of bacteria and taxonomic categories and for the ranking of them in the International Code of Nomenclature of Bacteria. Historically, bacteria were considered a part of the Plantae, the plant kingdom, and were called "Schizomycetes" (fission-fungi). For this reason, collective bacteria and other microorganisms in a host are often called "flora". The term "bacteria" was traditionally applied to all microscopic, single-cell prokaryotes. However, molecular systematics showed prokaryotic life to consist of two separate domains, originally called Eubacteria and Archaebacteria, but now called Bacteria and Archaea that evolved independently from an ancient common ancestor. The archaea and eukaryotes are more closely related to each other than either is to the bacteria. These two domains, along with Eukarya, are the basis of the three-domain system, which is currently the most widely used classification system in microbiology. However, due to the relatively recent introduction of molecular systematics and a rapid increase in the number of genome sequences that are available, bacterial classification remains a changing and expanding field. For example, Cavalier-Smith argued that the Archaea and Eukaryotes evolved from Gram-positive bacteria. The identification of bacteria in the laboratory is particularly relevant in medicine, where the correct treatment is determined by the bacterial species causing an infection. Consequently, the need to identify human pathogens was a major impetus for the development of techniques to identify bacteria. The Gram stain, developed in 1884 by Hans Christian Gram, characterises bacteria based on the structural characteristics of their cell walls. The thick layers of peptidoglycan in the "Gram-positive" cell wall stain purple, while the thin "Gram-negative" cell wall appears pink. By combining morphology and Gram-staining, most bacteria can be classified as belonging to one of four groups (Gram-positive cocci, Gram-positive bacilli, Gram-negative cocci and Gram-negative bacilli). Some organisms are best identified by stains other than the Gram stain, particularly mycobacteria or Nocardia, which show acid fastness on Ziehl–Neelsen or similar stains. Other organisms may need to be identified by their growth in special media, or by other techniques, such as serology. Culture techniques are designed to promote the growth and identify particular bacteria while restricting the growth of the other bacteria in the sample. 
Often these techniques are designed for specific specimens; for example, a sputum sample will be treated to identify organisms that cause pneumonia, while stool specimens are cultured on selective media to identify organisms that cause diarrhoea while preventing growth of non-pathogenic bacteria. Specimens that are normally sterile, such as blood, urine or spinal fluid, are cultured under conditions designed to grow all possible organisms. Once a pathogenic organism has been isolated, it can be further characterised by its morphology, growth patterns (such as aerobic or anaerobic growth), patterns of hemolysis, and staining. As with bacterial classification, identification of bacteria increasingly relies on molecular methods and mass spectrometry. Most bacteria have not been characterised and there are many species that cannot be grown in the laboratory. Diagnostics using DNA-based tools, such as polymerase chain reaction, are increasingly popular due to their specificity and speed, compared to culture-based methods. These methods also allow the detection and identification of "viable but nonculturable" cells that are metabolically active but non-dividing. However, even using these improved methods, the total number of bacterial species is not known and cannot even be estimated with any certainty. Following present classification, there are slightly fewer than 9,300 known species of prokaryotes, which includes bacteria and archaea; but attempts to estimate the true number of bacterial species have ranged from 10⁷ to 10⁹ total species—and even these diverse estimates may be off by many orders of magnitude. Phyla The following phyla have been validly published according to the Bacteriological Code: Acidobacteriota Actinomycetota Aquificota Armatimonadota Atribacterota Bacillota Bacteroidota Balneolota Bdellovibrionota Caldisericota Calditrichota Campylobacterota Chlamydiota Chlorobiota Chloroflexota Chrysiogenota Coprothermobacterota Cyanobacteriota Deferribacterota Deinococcota Dictyoglomota Elusimicrobiota Fibrobacterota Fusobacteriota Gemmatimonadota Ignavibacteriota Kiritimatiellota Lentisphaerota Mycoplasmatota Myxococcota Nitrososphaerota Nitrospinota Nitrospirota Planctomycetota Pseudomonadota Rhodothermota Spirochaetota Synergistota Thermodesulfobacteriota Thermomicrobiota Thermoproteota Thermotogota Verrucomicrobiota Interactions with other organisms Despite their apparent simplicity, bacteria can form complex associations with other organisms. These symbiotic associations can be divided into parasitism, mutualism and commensalism. Commensals The word "commensalism" is derived from the word "commensal", meaning "eating at the same table" and all plants and animals are colonised by commensal bacteria. In humans and other animals, millions of them live on the skin, the airways, the gut and other orifices. Referred to as "normal flora", or "commensals", these bacteria usually cause no harm but may occasionally invade other sites of the body and cause infection. Escherichia coli is a commensal in the human gut but can cause urinary tract infections. Similarly, streptococci, which are part of the normal flora of the human mouth, can cause heart disease. Predators Some species of bacteria kill and then consume other microorganisms; these species are called predatory bacteria. These include organisms such as Myxococcus xanthus, which forms swarms of cells that kill and digest any bacteria they encounter. 
Other bacterial predators either attach to their prey in order to digest them and absorb nutrients or invade another cell and multiply inside the cytosol. These predatory bacteria are thought to have evolved from saprophages that consumed dead microorganisms, through adaptations that allowed them to entrap and kill other organisms. Mutualists Certain bacteria form close spatial associations that are essential for their survival. One such mutualistic association, called interspecies hydrogen transfer, occurs between clusters of anaerobic bacteria that consume organic acids, such as butyric acid or propionic acid, and produce hydrogen, and methanogenic archaea that consume hydrogen. The bacteria in this association are unable to consume the organic acids as this reaction produces hydrogen that accumulates in their surroundings. Only the intimate association with the hydrogen-consuming archaea keeps the hydrogen concentration low enough to allow the bacteria to grow. In soil, microorganisms that reside in the rhizosphere (a zone that includes the root surface and the soil that adheres to the root after gentle shaking) carry out nitrogen fixation, converting nitrogen gas to nitrogenous compounds. This serves to provide an easily absorbable form of nitrogen for many plants, which cannot fix nitrogen themselves. Many other bacteria are found as symbionts in humans and other organisms. For example, the presence of over 1,000 bacterial species in the normal human gut flora of the intestines can contribute to gut immunity, synthesise vitamins, such as folic acid, vitamin K and biotin, convert sugars to lactic acid (see Lactobacillus), as well as fermenting complex undigestible carbohydrates. The presence of this gut flora also inhibits the growth of potentially pathogenic bacteria (usually through competitive exclusion) and these beneficial bacteria are consequently sold as probiotic dietary supplements. Nearly all animal life is dependent on bacteria for survival as only bacteria and some archaea possess the genes and enzymes necessary to synthesise vitamin B12, also known as cobalamin, and provide it through the food chain. Vitamin B12 is a water-soluble vitamin that is involved in the metabolism of every cell of the human body. It is a cofactor in DNA synthesis and in both fatty acid and amino acid metabolism. It is particularly important in the normal functioning of the nervous system via its role in the synthesis of myelin. Pathogens The body is continually exposed to many species of bacteria, including beneficial commensals, which grow on the skin and mucous membranes, and saprophytes, which grow mainly in the soil and in decaying matter. The blood and tissue fluids contain nutrients sufficient to sustain the growth of many bacteria. The body has defence mechanisms that enable it to resist microbial invasion of its tissues and give it a natural immunity or innate resistance against many microorganisms. Unlike some viruses, bacteria evolve relatively slowly so many bacterial diseases also occur in other animals. If bacteria form a parasitic association with other organisms, they are classed as pathogens. Pathogenic bacteria are a major cause of human death and disease and cause infections such as tetanus (caused by Clostridium tetani), typhoid fever, diphtheria, syphilis, cholera, foodborne illness, leprosy (caused by Mycobacterium leprae) and tuberculosis (caused by Mycobacterium tuberculosis). 
A pathogenic cause for a known medical disease may only be discovered many years later, as was the case with Helicobacter pylori and peptic ulcer disease. Bacterial diseases are also important in agriculture, and bacteria cause leaf spot, fire blight and wilts in plants, as well as Johne's disease, mastitis, salmonella and anthrax in farm animals. Each species of pathogen has a characteristic spectrum of interactions with its human hosts. Some organisms, such as Staphylococcus or Streptococcus, can cause skin infections, pneumonia, meningitis and sepsis, a systemic inflammatory response producing shock, massive vasodilation and death. Yet these organisms are also part of the normal human flora and usually exist on the skin or in the nose without causing any disease at all. Other organisms invariably cause disease in humans, such as Rickettsia, which are obligate intracellular parasites able to grow and reproduce only within the cells of other organisms. One species of Rickettsia causes typhus, while another causes Rocky Mountain spotted fever. Chlamydia, another phylum of obligate intracellular parasites, contains species that can cause pneumonia or urinary tract infection and may be involved in coronary heart disease. Some species, such as Pseudomonas aeruginosa, Burkholderia cenocepacia, and Mycobacterium avium, are opportunistic pathogens and cause disease mainly in people who are immunosuppressed or have cystic fibrosis. Some bacteria produce toxins, which cause diseases. These are endotoxins, which come from broken bacterial cells, and exotoxins, which are produced by bacteria and released into the environment. The bacterium Clostridium botulinum, for example, produces a powerful exotoxin that causes respiratory paralysis, and Salmonellae produce an endotoxin that causes gastroenteritis. Some exotoxins can be converted to toxoids, which are used as vaccines to prevent the disease. Bacterial infections may be treated with antibiotics, which are classified as bactericidal if they kill bacteria or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics, and each class inhibits a process that is different in the pathogen from that found in the host. Examples of how antibiotics produce selective toxicity are chloramphenicol and puromycin, which inhibit the bacterial ribosome, but not the structurally different eukaryotic ribosome. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth, where they may be contributing to the rapid development of antibiotic resistance in bacterial populations. Infections can be prevented by antiseptic measures such as sterilising the skin prior to piercing it with the needle of a syringe, and by proper care of indwelling catheters. Surgical and dental instruments are also sterilised to prevent contamination by bacteria. Disinfectants such as bleach are used to kill bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection. Significance in technology and industry Bacteria, often lactic acid bacteria, such as Lactobacillus species and Lactococcus species, in combination with yeasts and moulds, have been used for thousands of years in the preparation of fermented foods, such as cheese, pickles, soy sauce, sauerkraut, vinegar, wine, and yogurt. The ability of bacteria to degrade a variety of organic compounds is remarkable and has been used in waste processing and bioremediation. 
Bacteria capable of digesting the hydrocarbons in petroleum are often used to clean up oil spills. Fertiliser was added to some of the beaches in Prince William Sound in an attempt to promote the growth of these naturally occurring bacteria after the 1989 Exxon Valdez oil spill. These efforts were effective on beaches that were not too thickly covered in oil. Bacteria are also used for the bioremediation of industrial toxic wastes. In the chemical industry, bacteria are most important in the production of enantiomerically pure chemicals for use as pharmaceuticals or agrichemicals. Bacteria can also be used in place of pesticides in biological pest control. This commonly involves Bacillus thuringiensis (also called BT), a Gram-positive, soil-dwelling bacterium. Subspecies of this bacteria are used as Lepidopteran-specific insecticides under trade names such as Dipel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators, and most other beneficial insects. Because of their ability to quickly grow and the relative ease with which they can be manipulated, bacteria are the workhorses for the fields of molecular biology, genetics, and biochemistry. By making mutations in bacterial DNA and examining the resulting phenotypes, scientists can determine the function of genes, enzymes, and metabolic pathways in bacteria, then apply this knowledge to more complex organisms. This aim of understanding the biochemistry of a cell reaches its most complex expression in the synthesis of huge amounts of enzyme kinetic and gene expression data into mathematical models of entire organisms. This is achievable in some well-studied bacteria, with models of Escherichia coli metabolism now being produced and tested. This understanding of bacterial metabolism and genetics allows the use of biotechnology to bioengineer bacteria for the production of therapeutic proteins, such as insulin, growth factors, or antibodies. Because of their importance for research in general, samples of bacterial strains are isolated and preserved in Biological Resource Centres. This ensures the availability of the strain to scientists worldwide. History of bacteriology Bacteria were first observed by the Dutch microscopist Antonie van Leeuwenhoek in 1676, using a single-lens microscope of his own design. He then published his observations in a series of letters to the Royal Society of London. Bacteria were Leeuwenhoek's most remarkable microscopic discovery. Their size was just at the limit of what his simple lenses could resolve, and, in one of the most striking hiatuses in the history of science, no one else would see them again for over a century. His observations also included protozoans which he called animalcules, and his findings were looked at again in the light of the more recent findings of cell theory. Christian Gottfried Ehrenberg introduced the word "bacterium" in 1828. In fact, his Bacterium was a genus that contained non-spore-forming rod-shaped bacteria, as opposed to Bacillus, a genus of spore-forming rod-shaped bacteria defined by Ehrenberg in 1835. Louis Pasteur demonstrated in 1859 that the growth of microorganisms causes the fermentation process and that this growth is not due to spontaneous generation (yeasts and molds, commonly associated with fermentation, are not bacteria, but rather fungi). Along with his contemporary Robert Koch, Pasteur was an early advocate of the germ theory of disease. 
Before them, Ignaz Semmelweis and Joseph Lister had realised the importance of sanitised hands in medical work. Semmelweis, who in the 1840s formulated his rules for handwashing in the hospital, prior to the advent of germ theory, attributed disease to "decomposing animal organic matter". His ideas were rejected and his book on the topic condemned by the medical community. After Lister, however, doctors started sanitising their hands in the 1870s. Robert Koch, a pioneer in medical microbiology, worked on cholera, anthrax and tuberculosis. In his research into tuberculosis, Koch finally proved the germ theory, for which he received a Nobel Prize in 1905. In Koch's postulates, he set out criteria to test if an organism is the cause of a disease, and these postulates are still used today. Ferdinand Cohn is said to be a founder of bacteriology, studying bacteria from 1870. Cohn was the first to classify bacteria based on their morphology. Though it was known in the nineteenth century that bacteria are the cause of many diseases, no effective antibacterial treatments were available. In 1910, Paul Ehrlich developed the first antibiotic, by changing dyes that selectively stained Treponema pallidum—the spirochaete that causes syphilis—into compounds that selectively killed the pathogen. Ehrlich, who had been awarded a 1908 Nobel Prize for his work on immunology, pioneered the use of stains to detect and identify bacteria, with his work being the basis of the Gram stain and the Ziehl–Neelsen stain. A major step forward in the study of bacteria came in 1977 when Carl Woese recognised that archaea have a separate line of evolutionary descent from bacteria. This new phylogenetic taxonomy depended on the sequencing of 16S ribosomal RNA and divided prokaryotes into two evolutionary domains, as part of the three-domain system.
Biology and health sciences
Biology
null
9028960
https://en.wikipedia.org/wiki/Load-following%20power%20plant
Load-following power plant
A load-following power plant, regarded as producing mid-merit or mid-priced electricity, is a power plant that adjusts its power output as demand for electricity fluctuates throughout the day. Load-following plants are typically in between base load and peaking power plants in efficiency, speed of start-up and shut-down, construction cost, cost of electricity and capacity factor. Base load and peaking power plants Base load power plants are dispatchable plants that tend to operate at maximum output. They generally shut down or reduce power only to perform maintenance or repair or due to grid constraints. Power plants operated mostly in this way include coal, fuel oil, nuclear, geothermal, run-of-the-river hydroelectric, biomass and combined cycle natural gas plants. Peaking power plants operate only during times of peak demand. In countries with widespread air conditioning, demand peaks around the middle of the afternoon, so a typical peaking power plant may start up a couple of hours before this point and shut down a couple of hours after. The duration of operation for peaking plants varies from a good portion of the waking day to only a couple of dozen hours per year. Peaking power plants include hydroelectric and gas turbine power plants. Many gas turbine power plants can be fueled with natural gas, fuel oil, and/or diesel, allowing greater flexibility in choice of operation- for example, while most gas turbine plants primarily burn natural gas, a supply of fuel oil and/or diesel is sometimes kept on hand in case the gas supply is interrupted. Other gas turbines can only burn a single fuel. Load-following power plants By way of contrast, load-following power plants usually run during the day and early evening, and are operated in direct response to changing demand for power supply. They either shut down or greatly curtail output during the night and early morning, when the demand for electricity is the lowest. The exact hours of operation depend on numerous factors. One of the most important factors for a particular plant is how efficiently it can convert fuel into electricity. The most efficient plants, which are almost invariably the least costly to run per kilowatt-hour produced, are brought online first. As demand increases, the next most efficient plants are brought on line and so on. The status of the electrical grid in that region, especially how much base load generating capacity it has, and the variation in demand are also very important. An additional factor for operational variability is that demand does not vary just between night and day. There are significant variations in the time of year and day of the week. A region that has large variations in demand will require a large load following or peaking power plant capacity because base load power plants can only cover the capacity equal to that needed during times of lowest demand. Load-following power plants can be hydroelectric power plants, diesel and gas engine power plants, combined cycle gas turbine power plants and steam turbine power plants that run on natural gas or heavy fuel oil, although heavy fuel oil plants make up a very small portion of the energy mix. A relatively efficient model of gas turbine that runs on natural gas can also make a decent load-following plant. Gas turbine power plants Gas turbine power plants are the most flexible in terms of adjusting power level, but are also among the most expensive to operate. 
Because of this cost profile, gas turbine plants are generally used either as "peaking" units at times of maximum power demand or in combined cycle or cogeneration power plants, where turbine exhaust waste heat can be economically used to generate additional power and thermal energy for process or space heating. Diesel and gas engine power plants Diesel and gas engine power plants can be used for anything from base load to stand-by power production due to their high overall flexibility. Such power plants can be started rapidly to meet grid demands. These engines can be operated efficiently on a wide variety of fuels, adding to their flexibility. Some applications are: base load power generation, wind-diesel, load following, cogeneration and trigeneration. Hydroelectric power plants Hydroelectric power plants can operate as base load, load following or peaking power plants. They have the ability to start within minutes, and in some cases seconds. How the plant operates depends heavily on its water supply, as many plants do not have enough water to operate near their full capacity on a continuous basis. Where hydroelectric dams or associated reservoirs exist, these can often be backed up, reserving the hydro draw for a peak time. This introduces ecological and mechanical stress, so is practiced less today than previously. Lakes and man-made reservoirs used for hydropower come in all sizes, holding enough water for as little as a one-day supply (a diurnal peak variance) or as much as a year's supply, allowing for seasonal peak variance. A plant with a reservoir that holds less than the annual river flow may change its operating style depending on the season of the year. For example, the plant may operate as a peaking plant during the dry season, as a base load plant during the wet season and as a load-following plant between seasons. A plant with a large reservoir may operate independently of wet and dry seasons, such as operating at maximum capacity during peak heating or cooling seasons. When electrical generation supplying the grid and the consumption or load on the electrical grid are in balance, the frequency of the alternating current is at its normal rate (either 50 or 60 hertz). Hydroelectric power plants can also be used to earn extra revenue in an electric grid with erratic grid frequency. When the grid frequency is above normal (for example, the Indian grid frequency exceeds the rated 50 Hz for much of each day or month), the extra power available can be absorbed by adding extra load, such as agricultural water pumps, to the grid, and this extra energy is available at a nominal price or at no charge. However, there may not be a guarantee of continued supply at that price when the grid frequency falls below normal, which would then call for a higher price. To arrest the fall of frequency below normal, the available hydro power plants are kept in no-load or nominal-load operation and the load is automatically ramped up or down strictly following the grid frequency, i.e. the hydro units run at no-load condition when the frequency is above 50 Hz and generate power up to full load when the grid frequency is below 50 Hz. In this way a utility can draw two or more times as much energy from the grid by loading its hydro units for less than 50% of the time, and the available water is used more than twice as effectively as in conventional peak-load operation. An example of daily peak load (for the Bonneville Power Administration) combines large hydro, base load thermal generation and intermittent wind power.
In that example, hydro follows the load and manages the peaks, with some response from base load thermal. Note that total generation is always greater than the total BPA load because most of the time BPA is a net exporter of energy. The BPA load does not include scheduled energy to other balancing authority areas. Coal-fired power plants Large coal-fired thermal power plants can also be used as load following / variable load power stations to varying extents, with hard-coal-fueled plants typically being significantly more flexible than lignite-fueled plants. Some of the features which may be found in coal plants that have been optimized for load following include: Sliding pressure operation: sliding pressure operation of the steam generator allows the power plant to generate electricity without much deterioration in fuel efficiency at part-load operation down to 75% of the nameplate capacity. Overloading capability: the power plants are generally designed to run at 5 to 7% above the nameplate rating for about 5% of the hours in a year. Frequency-following governor controls: the load generation can be automatically varied to suit grid frequency needs. Two-shift daily operation for five days a week: the warm and hot start-ups of these power stations are designed to take less time to reach full-load operation; such plants are therefore not strictly base load power generation units. HP/LP steam bypass systems: this feature allows the steam turbo generator to reduce load quickly while the steam generator adjusts to the load requirement with a lag. Nuclear power plants Historically, nuclear power plants were built as baseload plants, without load following capability, to keep the design simple. Their startup or shutdown took many hours as they were designed to operate at maximum power, and heating up steam generators to the desired temperature took time. Nuclear power generation has also been portrayed as inflexible by anti-nuclear activists and the German Federal Environment Ministry, while others claimed "that the plants might clog the power grid". Modern nuclear plants with light water reactors are designed to have maneuvering capabilities in the 30–100% range with a 5%/minute slope, up to 140 MW/minute. Nuclear power plants in France operate in load-following mode and so participate in primary and secondary frequency control. Some units follow a variable load program with one or two large power changes per day. Some designs allow for rapid changes of power level around rated power, a capability that is usable for frequency regulation. A more efficient solution is to maintain the primary circuit at full power and to use the excess power for cogeneration. While most nuclear power plants in operation as of the early 2000s were already designed with strong load following capabilities, they might not have been used as such for purely economic reasons: nuclear power generation is composed almost entirely of fixed and sunk costs, so lowering the power output does not significantly reduce generating costs, and it is more effective to run them at full power most of the time. In countries where the baseload was predominantly nuclear (e.g. France), the load-following mode became economical because overall electricity demand fluctuates throughout the day. Boiling water reactors Boiling water reactors (BWRs) can vary the speed of recirculation water flow to quickly reduce their power level down to 60% of rated power (up to 10%/minute), making them useful for overnight load-following.
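The kind of frequency-following rule described for hydro units above, combined with a ramp-rate limit like the 5%/minute slope quoted for light water reactors, can be sketched as a simple control loop; the unit size, deadband and ramp limit below are illustrative assumptions rather than values from the text.

```python
# Sketch of frequency-following dispatch with a ramp-rate limit (illustrative assumptions:
# a 500 MW unit, 50 Hz nominal, full output ordered when frequency sags, no load when it is high).

NOMINAL_HZ = 50.0
RATED_MW = 500.0
RAMP_LIMIT_MW_PER_MIN = 0.05 * RATED_MW  # e.g. a 5 %/minute ramp constraint

def target_output(frequency_hz):
    """Order no load above nominal frequency and full load below it (0.01 Hz deadband)."""
    if frequency_hz > NOMINAL_HZ + 0.01:
        return 0.0
    if frequency_hz < NOMINAL_HZ - 0.01:
        return RATED_MW
    return None  # hold the current output inside the deadband

def step(current_mw, frequency_hz, minutes=1.0):
    """Move toward the frequency-derived target without exceeding the ramp limit."""
    target = target_output(frequency_hz)
    if target is None:
        return current_mw
    max_change = RAMP_LIMIT_MW_PER_MIN * minutes
    change = max(-max_change, min(max_change, target - current_mw))
    return current_mw + change

if __name__ == "__main__":
    output = 250.0
    for f in (49.95, 49.95, 50.0, 50.06, 50.06):
        output = step(output, f)
        print(f, round(output, 1))
```

At a 5%/minute slope, for instance, moving from 30% to 100% of rated power takes roughly 14 minutes.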
BWRs can also use control rod manipulation to achieve deeper reductions in power. A few BWR designs do not have recirculation pumps, and these designs must rely solely on control rod manipulation in order to load follow, which is less well suited to the task. In markets such as Chicago, Illinois, where half of the local utility's fleet consists of BWRs, it is common to load-follow (although potentially less economic to do so). Pressurized water reactors Pressurized water reactors (PWRs) use a combination of a chemical shim (typically boron) in the moderator/coolant, control rod manipulation, and turbine speed control (see nuclear reactor technology) to modify power levels. For PWRs not explicitly designed with load following in mind, load following operation is not quite as common as it is with BWRs. Modern PWRs are generally designed to handle extensive regular load following, and both French and German PWRs in particular have historically been designed with varying degrees of enhanced load following capabilities. France in particular has a long history of using aggressive load following with its PWRs, which are capable of, and used for, both primary and secondary frequency control, in addition to load following. French PWRs use so-called "grey" control rods, which have a lower neutron absorption capability than "black" control rods and are used for fine-tuning reactor power, in order to maneuver power more rapidly than chemical shim control or conventional control rods allow. These reactors have the capability to regularly vary their output between 30–100% of rated power, to maneuver power up or down by 2–5%/minute during load following activities, and to participate in primary and secondary frequency control at ±2–3% (primary frequency control) and ±3–5% (secondary frequency control, ≥5% for N4 reactors in Mode X). Depending on the exact design and operating mode, their ability to handle low power operation or fast ramping may be partially limited during the very late stages of the fuel cycle. Pressurized heavy water reactors Modern CANDU designs have extensive steam bypass capabilities that allow for a different method of load following that does not necessarily involve changes in reactor power output. Bruce Nuclear Generating Station is a CANDU pressurized heavy water plant that regularly uses its ability to partially bypass steam to the condenser for extended periods while the turbine is operating, providing 300 MW per unit (2400 MW total for the eight-unit plant) of flexible (load following) operation capability. Reactor power is maintained at the same level during steam bypass operations, which completely avoids xenon poisoning and other concerns associated with maneuvering reactor power output. Solar thermal power plants Concentrated solar power plants with thermal storage are emerging as an option for load-following power plants. They can follow the load demand and can work as base load power plants on days when the collected solar energy exceeds demand. A proper mix of solar thermal storage and solar PV can fully match load fluctuations without the need for costly battery storage. Fuel cell power plants Hydrogen-based fuel cell power plants are well suited to load following, like emergency diesel generator (DG) sets or battery storage systems, and can be run from zero to full load within a few minutes.
Because transporting hydrogen to distant industrial consumers is costly, surplus hydrogen produced as a byproduct of various chemical plants is used for power generation by fuel cell power plants. They also do not cause air or water pollution; in fact they clean the ambient air by extracting PM2.5 particulates, and they generate pure water for drinking and industrial applications. Solar PV and wind power plants The variable power from renewable energy sources such as solar and wind power plants can be used to follow the load or stabilize the grid frequency with the help of various means of storage. In countries that are trending away from coal-fired baseload plants towards intermittent energy sources such as wind and solar, but that have not yet fully implemented smart grid measures such as demand side management to respond rapidly to changes in this supply, there may be a need for dedicated peaking or load-following power plants and the use of a grid intertie, at least until peak blunting and load shifting mechanisms are implemented widely enough to match supply. See the smart grid alternatives below. As of 2018, rechargeable battery storage custom-built new for this purpose (without re-using electric vehicle batteries) cost $209 per kWh on average in the United States. When the grid frequency is below the desired or rated value, the power being generated, if any, and the stored battery power are fed to the grid to raise the grid frequency. When the grid frequency is above the desired or rated value, the power being generated is fed to the battery units for storage, or surplus grid power is drawn to them when it is cheaply available. The grid frequency typically fluctuates above and below the rated value 50 to 100 times a day, depending on the type of load encountered and the type of generating plants in the electrical grid. Recently, the cost of battery units, solar power plants and related equipment has come down drastically, making it practical to use such secondary power for power grid stabilization as an online spinning reserve. New studies have also evaluated the ability of both wind and solar plants to follow fast load changes. A study by Gevorgian et al. has shown the ability of solar plants to provide load following and fast reserves in both island power systems such as Puerto Rico and large power systems such as California's. Solar and wind intensive smart grids The decentralized and intermittent nature of solar and wind generation entails building signalling networks across vast areas. These include large consumers with discretionary uses, and increasingly include much smaller users. Collectively, these signalling and communication technologies are called the "smart grid". When these technologies reach into most grid-connected devices, the term Energy Internet is sometimes used, though this is more commonly considered to be an aspect of the Internet of Things. In 2010, US FERC Chairman Jon Wellinghoff outlined the Obama administration's view that strongly preferred smart grid signalling over dedicated load-following power plants, describing load following as inherently inefficient. In Scientific American he listed some such measures: "turning off the defrost cycle on the refrigerator at a given time...the grid could signal...As long as that refrigerator got defrosted at the end of the day, you, as a consumer, wouldn't care but ultimately the grid could operate more efficiently."
"...if you didn't do that with the refrigerator you would have do that with the coal plant or combustion turbine running up and down, and doing that makes that unit run much more inefficiently." At the time, electric vehicle battery integration into the grid was beginning. Wellinghof referred (ibid) to "these cars now getting paid in Delaware: $7 to $10 a day per car. They are getting paid over $3,000 a year to use these cars to simply control regulation service on the grid when they are charged". Electric vehicle batteries as distributed load following or storage Due to the very high cost of dedicated battery storage, use of electric vehicle batteries both while charging in vehicles (see smart grid), and in stationary grid energy storage arrays as an end-of-life re-use once they no longer hold enough charge for road use, has become the preferred method of load following over dedicated power plants. Such stationary arrays act as a true load-following power plant, and their deployment can "improve the affordability of purchasing such vehicles...Batteries that reach the end of their useful lifespan within the automotive industry can still be considered for other applications as between 70-80% of their original capacity still remains." Such batteries are often repurposed in home arrays which primarily serve as backup, so can participate much more readily in grid stabilizing. The number of such batteries doing nothing is increasing rapidly, e.g. in Australia where Tesla Powerwall demand rose 30 times after major power outages. Home and vehicle batteries are always and necessarily charged responsively when supply is available, meaning they all participate in a smart grid, because the high load (one Japanese estimate was over 7 GW for half the cars in Kanto) simply cannot be managed on an analog grid, lest "The uncoordinated charging can result in creation of a new peak-load" (ibid). Given the charging must be managed, there is no incremental cost to delay charging or discharge these batteries as required for load following, merely a software change and in some cases a payment for the inconvenience of less than complete charging or for battery wear (e.g. "$7 to $10 a day per car" paid in Delaware). Rocky Mountain Institute in 2015 listed the applications of such distributed networks of batteries as (for "ISOs / RTOs") including "energy storage can bid into wholesale electricity markets" or for utility services including: Frequency regulation Spinning and non-spinning reserves Load following / energy arbitrage Black start Voltage support RMI claimed "batteries can provide these services more reliably and at a lower cost than the technology that currently provides a majority of them thermal power plants (see above re coal and gas)", and also that "storage systems installed behind the customer meter can be dispatched to provide deferral or adequacy services to utilities", such as: "Transmission and distribution upgrade deferral. When load forecasts indicate transmission or distribution nodes will exceed their rated load carrying capacity, incremental investments in energy storage can be used to effectively increase the node’s capacity and avoid large, overbuilt, expensive upgrades to the nodes themselves." "Transmission congestion relief. At certain times of the day, ISOs charge utilities to use congested transmission lines. Discharging energy storage systems located downstream of congested lines can avoid these charges." "Resource adequacy. 
Instead of using or investing in combustion turbines to meet peak generation requirements, utilities can call upon other assets like energy storage instead."
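The managed-charging idea discussed above, shifting flexible EV charging into low-demand hours so the fleet does not create a new peak, can be sketched as a greedy schedule; the load profile, fleet energy need and charger limit below are hypothetical.

```python
# Sketch of managed ("smart") EV charging: the fleet must reach its required charge by morning,
# but charging is shifted into the lowest-demand hours so it does not create a new peak.
# All numbers are illustrative assumptions, not measured data.

baseline_load_mw = {h: load for h, load in zip(range(24),
    [620, 600, 590, 585, 590, 610, 680, 760, 800, 820, 830, 840,
     845, 840, 835, 840, 870, 920, 950, 930, 880, 800, 720, 660])}

def schedule_fleet(energy_needed_mwh, charger_limit_mw, hours_available):
    """Greedy schedule: fill the lowest-baseline-load hours first."""
    plan = {}
    remaining = energy_needed_mwh
    for hour in sorted(hours_available, key=lambda h: baseline_load_mw[h]):
        if remaining <= 0:
            break
        charge = min(charger_limit_mw, remaining)  # MWh delivered in this one-hour slot
        plan[hour] = charge
        remaining -= charge
    return plan, remaining

if __name__ == "__main__":
    # e.g. a fleet needing 900 MWh overnight, at most 300 MW of chargers, plugged in 20:00-06:00
    plan, unmet = schedule_fleet(900, 300, hours_available=[20, 21, 22, 23, 0, 1, 2, 3, 4, 5])
    print(sorted(plan.items()), "unmet:", unmet)
```

The same scheduler run in reverse, discharging into the highest-load hours, would approximate the load-following service such batteries can sell.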
Technology
Concepts
null
7500259
https://en.wikipedia.org/wiki/Crop
Crop
A crop is a plant that can be grown and harvested extensively for profit or subsistence. In other words, a crop is a plant or plant product that is grown for a specific purpose such as food, fibre, or fuel. When plants of the same species are cultivated in rows or other systematic arrangements, it is called a crop field or crop cultivation. Most crops are harvested as food for humans or fodder for livestock. Important non-food crops include horticulture, floriculture, and industrial crops. Horticulture crops include plants used for other crops (e.g. fruit trees). Floriculture crops include bedding plants, houseplants, flowering garden and pot plants, cut cultivated greens, and cut flowers. Industrial crops are produced for clothing (fiber crops e.g. cotton), biofuel (energy crops, algae fuel), or medicine (medicinal plants). Production Global production of primary crops increased by 56% between 2000 and 2022, to 9.6 billion tonnes, which represents a 0.7% increase compared with 2021 and 3.5 billion tonnes more than in 2000. Cereals represented the main group of crops produced in 2022, followed by sugar crops (23%), vegetables (12%) and oil crops (12%). Fruit accounted for 10% of the total production. This production increase is mainly due to a combination of factors, including an increased use of irrigation, pesticides and fertilizers and a larger cultivated area. Moreover, better farming practices and the use of high-yield crops play a role. Four crops account for about half of global primary crop production: sugar cane, maize, wheat and rice. The value of primary crop production increased in real terms at a slightly higher pace (57%) than the quantities produced, from USD 1.8 trillion in 2000 to USD 2.8 trillion in 2021. As with quantities produced, cereals accounted for the largest share of the total production value in 2021 (30%). Vegetables and fruit represented 19% and 17%, respectively, of the total value in 2021, which is significantly higher than their shares in quantities. The shares of oil crops and roots and tubers in the total value were similar to their shares in quantities. Sugar crops represented 4% of the total value: such a discrepancy with the share of the quantities produced is due to differences in price compared to fruit and vegetables, and to the fact that the transformation into refined sugar adds the most value. Globally important crops The importance of a crop varies greatly depending on the region. Globally, the following crops contribute most to the human food supply (values of kcal/person/day for 2013 given in parentheses): rice (541 kcal), wheat (527 kcal), sugarcane and other sugar crops (200 kcal), maize (corn) (147 kcal), soybean oil (82 kcal), other vegetables (74 kcal), potatoes (64 kcal), palm oil (52 kcal), cassava (37 kcal), legume pulses (37 kcal), sunflower seed oil (35 kcal), rape and mustard oil (34 kcal), other fruits (31 kcal), sorghum (28 kcal), millet (27 kcal), groundnuts (25 kcal), beans (23 kcal), sweet potatoes (22 kcal), bananas (21 kcal), various nuts (16 kcal), soybeans (14 kcal), cottonseed oil (13 kcal), groundnut oil (13 kcal), yams (13 kcal). Note that many of the globally apparently minor crops are regionally very important. For example, in Africa, roots & tubers dominate with 421 kcal/person/day, and sorghum and millet contribute 135 kcal and 90 kcal, respectively.
Crops are also commonly ranked by produced weight, with global production measured in thousand metric tonnes. Methods of cropping and popular crops in the U.S. There are various methods of cropping used in the agricultural industry, such as mono cropping, crop rotation, sequential cropping, and mixed intercropping. Each method of cropping has its purposes and possible disadvantages. Himanshu Arora defines mono cropping as the practice in which a field grows only one specific crop year-round. Mono cropping has disadvantages, according to Arora, such as the risk of the soil losing its fertility. Another method of cropping is relay cropping. According to the National Library of Medicine, relay cropping may solve a number of conflicts such as inefficient use of available resources, controversies in sowing time, fertilizer application, and soil degradation. The result of relay cropping is higher crop output. In the United States, corn is the largest crop produced and soybean is second, according to the Government of Alberta. A map provided by the Government of Alberta shows that these crops are grown most widely, and most successfully, in the interior states of the U.S.
Technology
Basics
null
1645331
https://en.wikipedia.org/wiki/Trigonal%20pyramidal%20molecular%20geometry
Trigonal pyramidal molecular geometry
In chemistry, a trigonal pyramid is a molecular geometry with one atom at the apex and three atoms at the corners of a trigonal base, resembling a tetrahedron (not to be confused with the tetrahedral geometry). When all three atoms at the corners are identical, the molecule belongs to point group C3v. Some molecules and ions with trigonal pyramidal geometry are the pnictogen hydrides (XH3), xenon trioxide (XeO3), the chlorate ion, ClO3−, and the sulfite ion, SO32−. In organic chemistry, molecules which have a trigonal pyramidal geometry are sometimes described as sp3 hybridized. The AXE method for VSEPR theory states that the classification is AX3E1. Trigonal pyramidal geometry in ammonia The nitrogen in ammonia has 5 valence electrons and bonds with three hydrogen atoms to complete the octet. This would result in the geometry of a regular tetrahedron with each bond angle equal to arccos(−1/3) ≈ 109.5°. However, the three hydrogen atoms are repelled by the electron lone pair in such a way that the geometry is distorted to a trigonal pyramid (regular 3-sided pyramid) with bond angles of 107°. In contrast, boron trifluoride is flat, adopting a trigonal planar geometry because the boron does not have a lone pair of electrons. In ammonia the trigonal pyramid undergoes rapid nitrogen inversion.
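The ideal tetrahedral angle quoted above follows directly from arccos(−1/3); a short check using only the standard library:

```python
# Compute the ideal tetrahedral bond angle arccos(-1/3) and compare it with the
# observed H-N-H angle in ammonia (about 107 degrees, as quoted above).
import math

ideal = math.degrees(math.acos(-1.0 / 3.0))
print(f"ideal tetrahedral angle: {ideal:.2f} degrees")        # ~109.47
print(f"lone-pair compression in NH3: {ideal - 107:.2f} degrees")
```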
Physical sciences
Bond structure
Chemistry
1645870
https://en.wikipedia.org/wiki/Ge%20%28unit%29
Ge (unit)
The ge () is a traditional Chinese unit of volume, equal to one tenth of a sheng. Its Korean equivalent is the hob or hop and its Japanese equivalent is the gō. China The ge is a traditional Chinese unit of volume equal to 10 shao or one tenth of a sheng. Its exact value has varied over time with the size of the sheng. In 1915, the Beiyang Government fixed a standard equivalent for the ge. The Nationalist Government's 1929 Weights and Measures Act, effective 1 January 1930, set it equal to the deciliter (0.182 dry pt). The People's Republic of China confirmed that value in 1959, although it made the official Chinese name of the deciliter the fēnshēng and exempted TCM pharmacists from punishment for noncompliance with the new measure when traditional amounts were required for preparing medicine. Korea The hob (South Korea) or hop (North Korea) is a traditional Korean unit based on the ge, equal to one tenth of a doe (SK) or toe (NK). Its exact value has varied over time with the size of the doe. During the Japanese occupation, Korea's native measures were standardized to their Japanese equivalents. The present-day hob is about 0.18 litres (6.1 fl oz or 0.328 dry pt), the same as the Japanese gō. Its use for commercial purposes has been criminalized in South Korea, although it continues to be used in the North. Japan Volume The gō or cup is a traditional Japanese unit based on the ge, equal to one tenth of a shō. It was officially defined in terms of the liter in 1891. The gō is the traditional amount used for a serving of rice and a cup of sake in Japanese cuisine. Although the gō is no longer used as an official unit, 1-gō measuring cups or their 180 mL metric equivalents are often included with Japanese rice cookers. In dining, a 1-gō serving is sometimes equated with 150 g of Japanese short-grain rice. It also appears as a serving size for fugu and other fish. Since sake bottles are typically either 720 or 750 mL, they can be reckoned as holding about four cups. Area The gō is also used as a unit of area equal to one tenth of a tsubo, approximately 0.3306 m². Mountaineering In Japanese mountaineering terms, the distance from the foot of a mountain to the summit is divided into 10 gō, and the points corresponding to these tenths of the route are generally referred to as "stations" in English.
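As a rough illustration of the modern metric equivalents discussed above, the sketch below assumes a present-day gō/hob of about 180.39 mL and the post-1930 Chinese "market" ge of one deciliter; the deciliter value is stated above, while the 180.39 mL constant is an assumed refinement of the approximately 180 mL figure mentioned for measuring cups.

```python
# Rough conversion helpers for the traditional volume units discussed above.
# Assumed constants (see lead-in): values vary by era and country.
ML_PER_GO = 180.39         # modern Japanese go / Korean hob, ~180 mL
ML_PER_MARKET_GE = 100.0   # Chinese ge after 1930, defined as one deciliter

def go_to_ml(go: float) -> float:
    return go * ML_PER_GO

def ml_to_go(ml: float) -> float:
    return ml / ML_PER_GO

if __name__ == "__main__":
    print(round(go_to_ml(1), 1))    # one serving of rice, ~180.4 mL
    print(round(ml_to_go(720), 2))  # a 720 mL sake bottle, ~4 go
    print(ML_PER_MARKET_GE)         # the post-1930 Chinese ge, 1 dL
```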
Physical sciences
Volume
Basics and measurement
1646838
https://en.wikipedia.org/wiki/Grid%20energy%20storage
Grid energy storage
Grid energy storage, also known as large-scale energy storage, refers to technologies connected to the electrical power grid that store energy for later use. These systems help balance supply and demand by storing excess electricity from variable renewables such as solar and from inflexible sources like nuclear power, releasing it when needed. They further provide essential grid services, such as helping to restart the grid after a power outage. The largest form of grid storage is pumped-storage hydroelectricity, with utility-scale batteries and behind-the-meter batteries coming second and third. Lithium-ion batteries are highly suited for shorter-duration storage of up to 8 hours. Flow batteries and compressed air energy storage may provide storage for medium duration. Two forms of storage are suited for long-duration storage: green hydrogen, produced via electrolysis, and thermal energy storage. Energy storage is one option for making grids more flexible. Another solution is the use of more dispatchable power plants that can change their output rapidly, for instance peaking power plants to fill in supply gaps. Demand response can shift load to other times, and interconnections between regions can balance out fluctuations in renewables production. The price of storage technologies typically goes down with experience. For instance, lithium-ion batteries have been getting some 20% cheaper for each doubling of worldwide capacity. Systems with under 40% variable renewables need only short-term storage. At 80%, medium-duration storage becomes essential, and beyond 90%, long-duration storage does too. The economics of long-duration storage is challenging, and alternative flexibility options like demand response may be more economic. Roles in the power grid Any electrical power grid must match electricity production to consumption, both of which vary significantly over time. Energy derived from solar and wind sources varies with the weather on time scales ranging from less than a second to weeks or longer. Nuclear power is less flexible than fossil fuels, meaning it cannot easily match the variations in demand. Thus, low-carbon electricity without storage presents special challenges to electric utilities. Electricity storage is one of the three key ways to replace flexibility from fossil fuels in the grid. Another is demand-side response, in which consumers change when they use electricity or how much they use. For instance, households may have cheaper night tariffs to encourage them to use electricity at night. Industry and commercial consumers can also change their demand to meet supply. The third is improved network interconnection, which smooths the variations in renewables production and demand. When there is little wind in one location, another might have a surplus of production. Expansion of transmission lines usually takes a long time. Energy storage has a large set of roles in the electricity grid and can therefore provide many different services. For instance, it can be used for arbitrage, storing electricity until the price rises; it can help make the grid more stable; and it can help reduce investment in transmission infrastructure. The type of service provided by storage depends on who manages the technology and on whether it is located alongside generation, within the network, or at the point of consumption. Providing short-term flexibility is a key role for energy storage.
On the generation side, it can help with the integration of variable renewable energy, storing electricity when there is an oversupply of wind and solar and prices are low. More generally, it can exploit the changes in electricity prices over time in the wholesale market, charging when electricity is cheap and selling when it is expensive. It can further help with grid congestion (where there is insufficient capacity on transmission lines). Consumers can use storage to use more of their self-produced electricity (for instance from rooftop solar power). Storage can also be used to provide essential grid services. On the generation side, storage can smooth out the variations in production, for instance for solar and wind. It can assist in a black start after a power outage. On the network side, these services include frequency regulation (continuously) and frequency response (after unexpected changes in supply or demand). On the consumption side, storage can help to improve the quality of the delivered electricity in less stable grids. Investment in storage may make some investments in the transmission and distribution network unnecessary, or may allow them to be scaled down. Additionally, storage can ensure there is sufficient capacity to meet peak demand within the electricity grid. Finally, in off-grid home systems or mini-grids, electricity storage can help provide energy access in areas that were previously not connected to the electricity grid. Forms Electricity can be stored directly for a short time in capacitors, somewhat longer electrochemically in batteries, and much longer chemically (e.g. hydrogen), mechanically (e.g. pumped hydropower) or as heat. The first pumped-storage hydroelectric plants were constructed at the end of the 19th century around the Alps in Italy, Austria, and Switzerland. The technique rapidly expanded during the 1960s to 1980s nuclear boom, due to nuclear power's inability to quickly adapt to changes in electricity demand. In the 21st century, interest in storage surged due to the rise of sustainable energy sources, which are often weather-dependent. Commercial batteries have been available for over a century, but their widespread use in the power grid is more recent, with only 1 GW available in 2013. Batteries Lithium-ion batteries Lithium-ion batteries are the most commonly used batteries for grid applications, following their earlier widespread application in electric vehicles (EVs). In comparison with EVs, grid batteries require less energy density, meaning that more emphasis can be put on cost, the ability to charge and discharge often, and lifespan. This has led to a shift towards lithium iron phosphate batteries (LFP batteries), which are cheaper and last longer than traditional lithium-ion batteries. Costs of batteries are declining rapidly; from 2010 to 2023 costs fell by 90%. Utility-scale systems account for two thirds of added capacity, and home applications (behind-the-meter) for one third. Lithium-ion batteries are highly suited to short-duration storage (<8 h) due to the cost and degradation associated with high states of charge. Electric vehicles The electric vehicle fleet has a large overall battery capacity, which can potentially be used for grid energy storage. This could be in the form of vehicle-to-grid (V2G), where cars store energy when they are not in use, or by repurposing batteries from cars at the end of the vehicle's life.
Car batteries typically range between 33 and 100 kWh; for comparison, a typical upper-middle-class household in Spain might use some 18 kWh in a day. By 2030, batteries in electric vehicles may be able to meet all short-term storage demand globally. There have been more than 100 V2G pilot projects globally. The effect of V2G charging on battery life can be positive or negative. Increased cycling of batteries can lead to faster degradation, but due to better management of the state of charge and gentler charging and discharging, V2G might instead increase the lifetime of batteries. Second-hand batteries may be usable for stationary grid storage for roughly 6 years, during which their capacity drops from roughly 80% to 60% of the initial capacity. LFP batteries are particularly suitable for reuse, as they degrade less than other lithium-ion batteries and recycling them is less attractive because their materials are not as valuable. Other battery types In redox flow batteries, energy is stored in liquids, which are placed in two separate tanks. When charging or discharging, the liquids are pumped into a cell with the electrodes. The amount of energy stored (as set by the size of the tanks) can be adjusted separately from the power output (as set by the speed of the pumps). Flow batteries have the advantages of low capital cost for charge-discharge durations over 4 h, and of long durability (many years). Flow batteries are inferior to lithium-ion batteries in terms of energy efficiency, averaging efficiencies between 60% and 75%. Vanadium redox batteries are the most commercially advanced type of flow battery, with roughly 40 companies making them. Sodium-ion batteries are a possible alternative to lithium-ion batteries, as they are less flammable and use cheaper and less critical materials. They have a lower energy density, and possibly a shorter lifespan. If produced at the same scale as lithium-ion batteries, they may become 20% to 30% cheaper. Iron-air batteries may be suitable for even longer duration storage than flow batteries (weeks), but the technology is not yet mature. Electrical Storage in supercapacitors works well for applications where a lot of power is needed for a short amount of time. In the power grid, supercapacitors are therefore mostly used for short-term frequency regulation. Hydrogen and chemical storage Various power-to-gas technologies exist that can convert excess electricity into a chemical that is easier to store. The lowest-cost and most efficient one is hydrogen. However, it is easier to use synthetic methane with existing infrastructure and appliances, as it is very similar to natural gas. There have been a number of demonstration plants where hydrogen is burned in gas turbines, either co-fired with natural gas or on its own. Similarly, a number of coal plants have demonstrated that it is possible to co-fire ammonia when burning coal. In 2022, there was also a small pilot to burn pure ammonia in a gas turbine. A portion of existing gas turbines are capable of co-firing hydrogen, which means there is, as a lower estimate, 80 GW of capacity ready to burn hydrogen. Hydrogen Hydrogen can be used as a long-term storage medium. Green hydrogen is produced from the electrolysis of water and converted back into electricity in an internal combustion engine or a fuel cell, with a round-trip efficiency of roughly 41%. Together with thermal storage, it is expected to be best suited to seasonal energy storage. Hydrogen can be stored aboveground in tanks or underground in larger quantities.
Underground storage is easiest in salt caverns, but only a certain number of places have suitable geology. Porous rocks, for instance depleted gas fields and some aquifers, can store hydrogen at a larger scale, but this type of storage may have some drawbacks. For instance, some of the hydrogen may leak, or react to form H2S or methane. Ammonia Hydrogen can be converted into ammonia in a reaction with nitrogen in the Haber-Bosch process. Ammonia, a gas at room temperature, is more expensive to produce than hydrogen. However, it can be stored more cheaply than hydrogen. Tank storage is usually done at between one and ten times atmospheric pressure and at reduced temperature, in liquid form. Ammonia has multiple uses besides being an energy carrier: it is the basis for the production of many chemicals, and its most common use is for fertilizer. It can be used for power generation directly, or converted back to hydrogen first. Alternatively, it has potential applications as a fuel in shipping. Methane It is possible to further convert hydrogen into methane via the Sabatier reaction, a chemical reaction which combines CO2 and H2. While the reaction that converts CO from gasified coal into methane is mature, the process to form methane from CO2 is less so. Efficiencies of around 80% one-way can be achieved; that is, some 20% of the energy in the hydrogen is lost in the reaction. Mechanical Flywheel Flywheels store energy in the form of mechanical energy. They are suited to supplying high levels of electricity over minutes and can also be charged rapidly. They have a long lifetime and can be used in settings with widely varying temperatures. The technology is mature, but more expensive than batteries and supercapacitors and not used frequently. Pumped hydro Pumped-storage hydroelectricity (PSH) was the largest form of grid energy storage globally, with an installed capacity of 181 GW, surpassing the combined capacity of utility-scale and behind-the-meter battery storage, which totaled approximately 88 GW. PSH is particularly effective for managing daily fluctuations in energy demand. During periods of low demand, water is pumped to a higher-elevation reservoir, and during peak demand, the stored water is released to generate electricity through turbines. The system has an efficiency rate of 75% to 85% and can quickly respond to changes in demand, typically within seconds to minutes. While traditional PSH systems require specific geographical conditions, alternative designs have been proposed. These include utilizing deep salt caverns or constructing hollow structures on the seabed, where the ocean serves as the upper reservoir. However, PSH construction is often expensive and time-consuming, and it can have significant environmental and social impacts on nearby communities. Innovative solutions, such as installing floating solar panels on reservoirs, can enhance the efficiency of PSH systems. These panels reduce water evaporation and benefit from cooling by the water surface, which improves their energy generation efficiency. Hydroelectric dams Hydroelectric dams with large reservoirs can also be operated to provide peak generation at times of peak demand. Water is stored in the reservoir during periods of low demand and released through the plant when demand is higher. While technically no electricity is stored, the net effect is similar to that of pumped storage. The amount of storage available in hydroelectric dams is much larger than in pumped storage.
Upgrades may be needed so that these dams can respond to variable demand. For instance, additional investment may be needed in transmission lines, or additional turbines may need to be installed to increase the peak output from the dam. Dams usually have multiple purposes. As well as energy generation, they often play a role in flood defense and protection of ecosystems, recreation, and they supply water for irrigation. This means it is not always possible to change their operation much, but even with low flexibility, they may still play an important role in responding to changes in wind and solar production. Gravity Alternative methods that use gravity include storing energy by moving large solid masses upward against gravity. This can be achieved inside old mine shafts or in specially constructed towers where heavy weights are winched up to store energy and allowed a controlled descent to release it. Compressed air Compressed air energy storage (CAES) stores electricity by compressing air. The compressed air is typically stored in large underground caverns. The expanding air can be used to drive turbines, converting the energy back into electricity. As air cools when expanding, some heat needs to be added in this stage to prevent freezing. This can be provided by a low-carbon source, or in the case of advanced CAES, by reusing the heat that is released when air is compressed. , there are three advanced CAES project in operation in China. Typical efficiencies of advanced CAES are between 60% and 80%. Liquid air or Another electricity storage method is to compress and cool air, turning it into liquid air, which can be stored and expanded when needed, turning a turbine to generate electricity. This is called liquid air energy storage (LAES). The air would be cooled to temperatures of to become liquid. Like with compressed air, heat is needed for the expansion step. In the case of LAES, low-grade industrial heat can be used for this. Energy efficiency for LAES lies between 50% and 70%. , LAES is moving from pre-commercial to commercial. An alternative is the compression of to store electricity. Thermal Electricity can be directly stored thermally with a Carnot battery. A Carnot battery is a type of energy storage system that stores electricity in heat storage and converts the stored heat back to electricity via thermodynamic cycles (for instance, a turbine). While less efficient than pumped hydro or battery storage, this type of system is expected to be cheap and can provide long-duration storage. A pumped-heat electricity storage system is a Carnot battery that uses a reversible heat pump to convert the electricity into heat. It usually stores the energy in both a hot and cold reservoir. To achieve decent efficiencies (>50%), the temperature ratio between the two must reach a factor of 5. Thermal energy storage is also used in combination with concentrated solar power (CSP). In CSP, solar energy is first converted into heat, and then either directly converted into electricity or first stored. The energy is released when there is little or no sunshine. This means that CSP can be used as a dispatchable (flexible) form of generation. The energy in a CSP system can for instance be stored in molten salts or in a solid medium such as sand. Finally, heating and cooling systems in buildings can be controlled to store thermal energy in either the building's mass or dedicated thermal storage tanks. 
This thermal storage can provide load-shifting or even more complex ancillary services by increasing power consumption (charging the storage) during off-peak times and lowering power consumption (discharging the storage) during higher-priced peak times. Economics Costs The levelized cost of storing electricity (LCOS) is a measure of the lifetime costs of storing electricity per MWh of electricity discharged. It includes investment costs, but also operational costs and charging costs. It depends strongly on storage type and purpose, such as subsecond-scale frequency regulation, minute- or hour-scale peaking, or day- or week-scale seasonal storage. For power applications (for instance around ancillary services or black starts), a similar metric is the annuitized capacity cost (ACC), which measures the lifetime costs per kW. ACC is lowest when there are few cycles (<300) and when the discharge is less than one hour. This is because the technology is reimbursed only when it provides spare capacity, not when it is discharged. The cost of storage is coming down along technology-dependent experience curves, which describe the price drop for each doubling in cumulative capacity (or experience). Lithium-ion battery prices are falling fast: the price utilities pay for them falls 19% with each doubling of capacity. Hydrogen production via electrolysis has a similar learning rate, but it is much more uncertain. Vanadium-flow batteries typically get 14% cheaper for each doubling of capacity. Pumped hydropower has not seen prices fall much with increased experience. Market and system value There are four categories of services which provide economic value for storage: those related to power quality (such as frequency regulation), reliability (ensuring peak demand can be met), better use of assets in the system (e.g. avoiding transmission investments) and arbitrage (exploiting price differences over time). Before 2020, most value for storage was in providing power quality services. Arbitrage is the service with the largest economic potential for storage applications. In systems with under 40% of variable renewables, only short-term storage (of less than 4 hours) is needed for integration. When the share of variable renewables climbs to 80%, medium-duration storage (between 4 and 16 hours, for instance compressed air) is needed. Above 90%, large-scale long-duration storage is required. The economics of long-duration storage is challenging even then, as the costs are high. Alternative flexibility options, such as demand response, network expansions or flexible generation (geothermal or fossil gas with carbon capture and storage), may be lower-cost. As with renewables, storage will "cannibalise" its own income, but even more strongly. That is, with more storage on the market, there is less of an opportunity to do arbitrage or deliver other services to the grid. How markets are designed impacts revenue potential too. The income from arbitrage is quite variable between years, whereas markets that have capacity payments likely show less volatility. Electricity storage is not 100% efficient, so more electricity needs to be bought than can be sold. This implies that if there is only a small variation in price, it may not be economical to charge and discharge. For instance, if the storage application is 75% efficient, the price at which the electricity is sold needs to be at least 1.33 times the price for which it was bought.
Typically, electricity prices vary most between day and night, which means that storage up to 8 hours has relatively high potential for profit.
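Two of the relationships above, the break-even price ratio implied by round-trip efficiency and the experience-curve price decline, can be expressed as short calculations; the sample prices used here are hypothetical.

```python
# Two small calculations based on the figures above (simplified; ignores degradation,
# fixed costs and taxes).

def min_sell_price(buy_price, round_trip_efficiency):
    """Lowest sale price at which a charge/discharge cycle breaks even on energy cost alone."""
    return buy_price / round_trip_efficiency

def experience_curve(initial_price, learning_rate, doublings):
    """Price after a number of doublings of cumulative installed capacity."""
    return initial_price * (1 - learning_rate) ** doublings

if __name__ == "__main__":
    # 75 % efficient storage bought at 40 $/MWh must sell above ~53 $/MWh (the 1.33 ratio above)
    print(round(min_sell_price(40, 0.75), 1))
    # lithium-ion at a 19 % learning rate: price index after three doublings of capacity
    print(round(experience_curve(100, 0.19, 3), 1))
```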
Technology
Energy storage
null
1647344
https://en.wikipedia.org/wiki/Boron%20trifluoride
Boron trifluoride
Boron trifluoride is the inorganic compound with the formula . This pungent, colourless, and toxic gas forms white fumes in moist air. It is a useful Lewis acid and a versatile building block for other boron compounds. Structure and bonding The geometry of a molecule of is trigonal planar. Its D3h symmetry conforms with the prediction of VSEPR theory. The molecule has no dipole moment by virtue of its high symmetry. The molecule is isoelectronic with the carbonate anion, . is commonly referred to as "electron deficient," a description that is reinforced by its exothermic reactivity toward Lewis bases. In the boron trihalides, , the length of the B–X bonds (1.30 Å) is shorter than would be expected for single bonds, and this shortness may indicate stronger B–X π-bonding in the fluoride. A facile explanation invokes the symmetry-allowed overlap of a p orbital on the boron atom with the in-phase combination of the three similarly oriented p orbitals on fluorine atoms. Others point to the ionic nature of the bonds in . Synthesis and handling is manufactured by the reaction of boron oxides with hydrogen fluoride: Typically the HF is produced in situ from sulfuric acid and fluorite (). Approximately 2300-4500 tonnes of boron trifluoride are produced every year. Laboratory scale For laboratory scale reactions, is usually produced in situ using boron trifluoride etherate, which is a commercially available liquid. Laboratory routes to the solvent-free materials are numerous. A well documented route involves the thermal decomposition of diazonium salts of : It forms by treatment of a mixture boron trioxide and sodium tetrafluoroborate with sulfuric acid: Alternatively, boron tribromide converts various organofluorine compounds to organobromines, evolving the trifluoride gas: 3 R–F + BBr3 → 3 R–Br + BF3 Properties Anhydrous boron trifluoride has a boiling point of −100.3 °C and a critical temperature of −12.3 °C, so that it can be stored as a refrigerated liquid only between those temperatures. Storage or transport vessels should be designed to withstand internal pressure, since a refrigeration system failure could cause pressures to rise to the critical pressure of 49.85 bar (4.985 MPa). Boron trifluoride is corrosive. Suitable metals for equipment handling boron trifluoride include stainless steel, monel, and hastelloy. In presence of moisture it corrodes steel, including stainless steel. It reacts with polyamides. Polytetrafluoroethylene, polychlorotrifluoroethylene, polyvinylidene fluoride, and polypropylene show satisfactory resistance. The grease used in the equipment should be fluorocarbon based, as boron trifluoride reacts with the hydrocarbon-based ones. Reactions Unlike the aluminium and gallium trihalides, the boron trihalides are all monomeric. They undergo rapid halide exchange reactions: Because of the facility of this exchange process, the mixed halides cannot be obtained in pure form. Boron trifluoride is a versatile Lewis acid that forms adducts with such Lewis bases as fluoride and ethers: Tetrafluoroborate salts are commonly employed as non-coordinating anions. The adduct with diethyl ether, boron trifluoride diethyl etherate, or just boron trifluoride etherate, () is a conveniently handled liquid and consequently is widely encountered as a laboratory source of . Another common adduct is the adduct with dimethyl sulfide (), which can be handled as a neat liquid. 
Comparative Lewis acidity All three lighter boron trihalides, BX3 (X = F, Cl, Br), form stable adducts with common Lewis bases. Their relative Lewis acidities can be evaluated in terms of the relative exothermicities of the adduct-forming reaction. Such measurements have revealed the following sequence for the Lewis acidity: < < < (strongest Lewis acid) This trend is commonly attributed to the degree of π-bonding in the planar boron trihalide that would be lost upon pyramidalization of the molecule, which follows this trend: > > < (most easily pyramidalized) The criteria for evaluating the relative strength of π-bonding are not clear, however. One suggestion is that the F atom is small compared to the larger Cl and Br atoms. As a consequence, the bond length between boron and the halogen increases on going from fluorine to iodine, and hence spatial overlap between the orbitals becomes more difficult. The lone-pair electrons in the pz orbital of F are readily donated into, and overlap with, the empty pz orbital of boron. As a result, the π donation of F is greater than that of Cl or Br. In an alternative explanation, the low Lewis acidity of BF3 is attributed to the relative weakness of the bond in the adducts. Yet another explanation might be found in the fact that the pz orbitals in each higher period have an extra nodal plane and opposite signs of the wave function on each side of that plane. This results in bonding and antibonding regions within the same bond, diminishing the effective overlap and so reducing the extent to which π-donation blocks the acidity. Hydrolysis Boron trifluoride reacts with water to give boric acid and fluoroboric acid. The reaction commences with the formation of the aquo adduct, which then loses HF; the HF combines with more boron trifluoride to give fluoroboric acid. The heavier trihalides do not undergo analogous reactions, possibly due to the lower stability of the corresponding tetrahedral ions. Because of the high acidity of fluoroboric acid, the fluoroborate ion can be used to isolate particularly electrophilic cations, such as diazonium ions, that are otherwise difficult to isolate as solids. Uses Organic chemistry Boron trifluoride is most importantly used as a reagent in organic synthesis, typically as a Lewis acid. Examples include: initiates polymerisation reactions of unsaturated compounds, such as polyethers as a catalyst in some isomerization, acylation, alkylation, esterification, dehydration, condensation, Mukaiyama aldol addition, and other reactions Niche uses Other, less common uses for boron trifluoride include: applied as a dopant in ion implantation p-type dopant for epitaxially grown silicon used in sensitive neutron detectors in ionization chambers and devices to monitor radiation levels in the Earth's atmosphere in fumigation as a flux for soldering magnesium to prepare diborane Discovery Boron trifluoride was discovered in 1808 by Joseph Louis Gay-Lussac and Louis Jacques Thénard, who were trying to isolate "fluoric acid" (i.e., hydrofluoric acid) by combining calcium fluoride with vitrified boric acid. The resulting vapours failed to etch glass, so they named it fluoboric gas.
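The hydrolysis described above is conventionally summarized by the following equations; the source text does not give them explicitly, so the stoichiometry shown is the commonly quoted one rather than a statement from the article.

```latex
% Commonly quoted hydrolysis stoichiometry for BF3 (not given explicitly in the text above)
\begin{align}
\mathrm{BF_3 + 3\,H_2O} &\longrightarrow \mathrm{B(OH)_3 + 3\,HF} \\
\mathrm{HF + BF_3} &\longrightarrow \mathrm{HBF_4} \\
\text{overall:}\qquad \mathrm{4\,BF_3 + 3\,H_2O} &\longrightarrow \mathrm{B(OH)_3 + 3\,HBF_4}
\end{align}
```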
Physical sciences
Halide salts
Chemistry
1647421
https://en.wikipedia.org/wiki/Trigonal%20planar%20molecular%20geometry
Trigonal planar molecular geometry
In chemistry, trigonal planar is a molecular geometry model with one atom at the center and three atoms at the corners of an equilateral triangle, called peripheral atoms, all in one plane. In an ideal trigonal planar species, all three ligands are identical and all bond angles are 120°. Such species belong to the point group D3h. Molecules where the three ligands are not identical, such as H2CO, deviate from this idealized geometry. Examples of molecules with trigonal planar geometry include boron trifluoride (BF3), formaldehyde (H2CO), phosgene (COCl2), and sulfur trioxide (SO3). Some ions with trigonal planar geometry include nitrate (NO3−), carbonate (CO32−), and guanidinium (C(NH2)3+). In organic chemistry, planar, three-connected carbon centers that are trigonal planar are often described as having sp2 hybridization. Nitrogen inversion is the distortion of pyramidal amines through a transition state that is trigonal planar. Pyramidalization is a distortion of this molecular shape towards a tetrahedral molecular geometry. One way to observe this distortion is in pyramidal alkenes.
Physical sciences
Bond structure
Chemistry
1648132
https://en.wikipedia.org/wiki/Weak%20artificial%20intelligence
Weak artificial intelligence
Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as Artificial Narrow Intelligence, is focused on one narrow task. Weak AI is contrasted with strong AI, which can be interpreted in various ways: Artificial general intelligence (AGI): a machine with the ability to apply intelligence to any problem, rather than just one specific problem. Artificial superintelligence (ASI): a machine with intelligence vastly superior to that of the average human being. Artificial consciousness: a machine that has consciousness, sentience and mind (John Searle uses "strong AI" in this sense). Narrow AI can be classified as being "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." Artificial general intelligence is conversely the opposite. Applications and risks Some examples of narrow AI are AlphaGo, self-driving cars, robot systems used in the medical field, and diagnostic doctors. Narrow AI systems are sometimes dangerous if unreliable, and their behavior can become inconsistent. It can be difficult for such an AI to grasp complex patterns and reach a solution that works reliably in various environments. This "brittleness" can cause it to fail in unpredictable ways. Narrow AI failures can sometimes have significant consequences. They could, for example, cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles. Medicines could be incorrectly sorted and distributed. Also, medical diagnoses can ultimately have serious and sometimes deadly consequences if the AI is faulty or biased. Simple AI programs have already worked their way into our society unnoticed. Autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields are examples. As much as narrow and relatively general AI is slowly starting to help societies, it is also starting to hurt them. AI has already unfairly put people in jail, discriminated against women in hiring, taught problematic ideas to millions, and even killed people with autonomous cars. AI might be a powerful tool that can be used for improving lives, but it could also be a dangerous technology with the potential for misuse. Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends. For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour. Some other social media AI systems are used to detect bots that may be involved in biased propaganda or other potentially malicious activities. Weak AI versus strong AI John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", used to assess whether a machine can converse indistinguishably from a human) is not accurate or appropriate for testing whether an AI is "strong".
Scholars such as Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" vs "narrow" AI distinction) and that the popular assumption that cognitively inspired AI systems espouse the strong AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (as, on the other hand, implied by the strong AI assumption).
Technology
Artificial intelligence concepts
null
1648915
https://en.wikipedia.org/wiki/Chemical%20plant
Chemical plant
A chemical plant is an industrial process plant that manufactures (or otherwise processes) chemicals, usually on a large scale. The general objective of a chemical plant is to create new material wealth via the chemical or biological transformation and/or separation of materials. Chemical plants use specialized equipment, units, and technology in the manufacturing process. Other kinds of plants, such as polymer, pharmaceutical, food, and some beverage production facilities, power plants, oil refineries or other refineries, natural gas processing and biochemical plants, water and wastewater treatment, and pollution control equipment use many technologies that have similarities to chemical plant technology such as fluid systems and chemical reactor systems. Some would consider an oil refinery or a pharmaceutical or polymer manufacturer to be effectively a chemical plant. Petrochemical plants (plants using chemicals from petroleum as a raw material or feedstock) are usually located adjacent to an oil refinery to minimize transportation costs for the feedstocks produced by the refinery. Speciality chemical and fine chemical plants are usually much smaller and not as sensitive to location. Tools have been developed for converting a base project cost from one geographic location to another. Chemical processes Chemical plants use chemical processes, which are detailed industrial-scale methods, to transform feedstock chemicals into products. The same chemical process can be used at more than one chemical plant, with possibly differently scaled capacities at each plant. Also, a chemical plant at a site may be constructed to utilize more than one chemical process, for instance to produce multiple products. A chemical plant commonly has large vessels or sections called units or lines that are interconnected by piping or other material-moving equipment which can carry streams of material. Such material streams can include fluids (gas or liquid carried in piping) or sometimes solids or mixtures such as slurries. An overall chemical process is commonly made up of steps called unit operations which occur in the individual units. A raw material going into a chemical process or plant as input to be converted into a product is commonly called a feedstock, or simply feed. In addition to feedstocks for the plant as a whole, an input stream of material to be processed in a particular unit can similarly be considered feed for that unit. Output streams from the plant as a whole are final products and sometimes output streams from individual units may be considered intermediate products for their units. However, final products from one plant may be intermediate chemicals used as feedstock in another plant for further processing. For example, some products from an oil refinery may be used as feedstock in petrochemical plants, which may in turn produce feedstocks for pharmaceutical plants. Either the feedstock(s), the product(s), or both may be individual compounds or mixtures. It is often not worthwhile separating the components in these mixtures completely; specific levels of purity depend on product requirements and process economics. Operations Chemical processes may be run in continuous or batch operation. Batch operation In batch operation, production occurs in time-sequential steps in discrete batches. A batch of feedstock(s) is fed (or charged) into a process or unit, then the chemical process takes place, then the product(s) and any other outputs are removed.
Such batch production may be repeated again and again with new batches of feedstock. Batch operation is commonly used in smaller scale plants such as pharmaceutical or specialty chemicals production, for purposes of improved traceability as well as flexibility. Continuous plants are usually used to manufacture commodity chemicals or petrochemicals, while batch plants are more common in speciality and fine chemical production as well as active pharmaceutical ingredient (API) manufacture. Continuous operation In continuous operation, all steps are ongoing continuously in time. During usual continuous operation, the feeding and product removal are ongoing streams of moving material, which together with the process itself, all take place simultaneously and continuously. Chemical plants or units in continuous operation are usually in a steady state or approximate steady state. Steady state means that quantities related to the process do not change as time passes during operation. Such constant quantities include stream flow rates, heating or cooling rates, temperatures, pressures, and chemical compositions at any given point (location). Continuous operation is more efficient in many large-scale operations like petroleum refineries. It is possible for some units to operate continuously and others to be in batch operation in a chemical plant; for example, see Continuous distillation and Batch distillation. The amount of primary feedstock or product per unit of time which a plant or unit can process is referred to as the capacity of that plant or unit. For example, the capacity of an oil refinery may be given in terms of barrels of crude oil refined per day; alternatively, chemical plant capacity may be given in tons of product produced per day. In actual daily operation, a plant (or unit) will operate at a percentage of its full capacity. Engineers typically assume 90% operating time for plants which work primarily with fluids, and 80% uptime for plants which primarily work with solids. Units and fluid systems Specific unit operations are conducted in specific kinds of units. Although some units may operate at ambient temperature or pressure, many units operate at higher or lower temperatures or pressures. Vessels in chemical plants are often cylindrical with rounded ends, a shape which can be suited to hold either high pressure or vacuum. Chemical reactions can convert certain kinds of compounds into other compounds in chemical reactors. Chemical reactors may be packed beds and may have solid heterogeneous catalysts which stay in the reactors as fluids move through, or may simply be stirred vessels in which reactions occur. Since the surface of solid heterogeneous catalysts may sometimes become "poisoned" from deposits such as coke, regeneration of catalysts may be necessary. Fluidized beds may also be used in some cases to ensure good mixing. There can also be units (or subunits) for mixing (including dissolving), separation, heating, cooling, or some combination of these. For example, chemical reactors often have stirring for mixing and heating or cooling to maintain temperature. When designing plants on a large scale, heat produced or absorbed by chemical reactions must be considered. Some plants may have units with organism cultures for biochemical processes such as fermentation or enzyme production.
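As a back-of-the-envelope illustration of the capacity and uptime figures discussed above (a sketch with assumed numbers, not figures from the article), expected annual throughput can be estimated as nameplate capacity times operating rate times uptime:

```python
# Illustrative capacity calculation for a fluid-processing plant, using the
# ~90% uptime typically assumed for plants working primarily with fluids.
nameplate_capacity_bpd = 100_000   # barrels per day at full capacity (assumed)
operating_rate = 0.95              # fraction of full rate while running (assumed)
uptime_fraction = 0.90             # typical planning assumption for fluid plants

effective_bpd = nameplate_capacity_bpd * operating_rate
annual_throughput = effective_bpd * uptime_fraction * 365

print(f"Effective rate while running: {effective_bpd:,.0f} bbl/day")
print(f"Expected annual throughput:  {annual_throughput:,.0f} bbl/year")
```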
Separation processes include filtration, settling (sedimentation), extraction or leaching, distillation, recrystallization or precipitation (followed by filtration or settling), reverse osmosis, drying, and adsorption. Heat exchangers are often used for heating or cooling, including boiling or condensation, often in conjunction with other units such as distillation towers. There may also be storage tanks for storing feedstock, intermediate or final products, or waste. Storage tanks commonly have level indicators to show how full they are. There may be structures holding or supporting the sometimes massive units and their associated equipment. There are often stairs, ladders, or other steps for personnel to reach points in the units for sampling, inspection, or maintenance. An area of a plant or facility with numerous storage tanks is sometimes called a tank farm, especially at an oil depot. Fluid systems for carrying liquids and gases include piping and tubing of various diameter sizes, various types of valves for controlling or stopping flow, pumps for moving or pressurizing liquid, and compressors for pressurizing or moving gases. Vessels, piping, tubing, and sometimes other equipment at high or very low temperatures are commonly covered with insulation for personnel safety and to maintain temperature inside. Fluid systems and units commonly have instrumentation such as temperature and pressure sensors and flow measuring devices at select locations in a plant. Online analyzers for chemical or physical property analysis have become more common. Solvents can sometimes be used to dissolve reactants or materials such as solids for extraction or leaching, to provide a suitable medium for certain chemical reactions to run, or so they can otherwise be treated as fluids. Chemical plant design Today, the fundamental aspects of designing chemical plants are done by chemical engineers. Historically, this was not always the case, and many chemical plants were constructed haphazardly before the discipline of chemical engineering became established. Chemical engineering was first established as a profession in the United Kingdom when the first chemical engineering course was given at the University of Manchester in 1887 by George E. Davis in the form of twelve lectures covering various aspects of industrial chemical practice. As a consequence, George E. Davis is regarded as the world's first chemical engineer. Today, chemical engineering is a profession, and professional chemical engineers with experience can gain "Chartered" engineer status through the Institution of Chemical Engineers. In plant design, typically less than 1 percent of ideas for new designs ever become commercialized. During this solution process, typically, cost studies are used as an initial screening to eliminate unprofitable designs. If a process appears profitable, then other factors are considered, such as safety, environmental constraints, controllability, etc. The general goal in plant design is to construct or synthesize “optimum designs” in the neighborhood of the desired constraints. Chemists often research chemical reactions or other chemical principles in a laboratory, commonly on a small scale in a "batch-type" experiment. Chemistry information obtained is then used by chemical engineers, along with expertise of their own, to convert to a chemical process and scale up the batch size or capacity.
Commonly, a small chemical plant called a pilot plant is built to provide design and operating information before construction of a large plant. From data and operating experience obtained from the pilot plant, a scaled-up plant can be designed for higher or full capacity. After the fundamental aspects of a plant design are determined, mechanical or electrical engineers may become involved with mechanical or electrical details, respectively. Structural engineers may become involved in the plant design to ensure the structures can support the weight of the units, piping, and other equipment. The units, streams, and fluid systems of chemical plants or processes can be represented by block flow diagrams which are very simplified diagrams, or process flow diagrams which are somewhat more detailed. The streams and other piping are shown as lines with arrow heads showing usual direction of material flow. In block diagrams, units are often simply shown as blocks. Process flow diagrams may use more detailed symbols and show pumps, compressors, and major valves. Likely values or ranges of material flow rates for the various streams are determined based on desired plant capacity using material balance calculations. Energy balances are also done based on heats of reaction, heat capacities, expected temperatures, and pressures at various points to calculate amounts of heating and cooling needed in various places and to size heat exchangers. Chemical plant design can be shown in fuller detail in a piping and instrumentation diagram (P&ID) which shows all piping, tubing, valves, and instrumentation, typically with special symbols. Showing a full plant is often complicated in a P&ID, so often only individual units or specific fluid systems are shown in a single P&ID. In the plant design, the units are sized for the maximum capacity each may have to handle. Similarly, sizes for pipes, pumps, compressors, and associated equipment are chosen for the flow capacity they have to handle. Utility systems such as electric power and water supply should also be included in the plant design. Additional piping lines for non-routine or alternate operating procedures, such as plant or unit startups and shutdowns, may have to be included. Fluid systems design commonly includes isolation valves around various units or parts of a plant so that a section of a plant could be isolated in case of a problem such as a leak in a unit. If pneumatically or hydraulically actuated valves are used, a system of pressurizing lines to the actuators is needed. Any points where process samples may have to be taken should have sampling lines, valves, and access to them included in the detailed design. If necessary, provisions should be made for reducing the high pressure or temperature of a sampling stream, such as a pressure-reducing valve or sample cooler. Units and fluid systems in the plant including all vessels, piping, tubing, valves, pumps, compressors, and other equipment must be rated or designed to be able to withstand the entire range of pressures, temperatures, and other conditions which they could possibly encounter, including any appropriate safety factors. All such units and equipment should also be checked for materials compatibility to ensure they can withstand long-term exposure to the chemicals they will come in contact with.
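To make the energy-balance step described above concrete (a minimal sketch with assumed stream values, not figures from the article), the heating duty of a heat exchanger follows from Q = m·cp·ΔT, and a rough exchange area from Q = U·A·LMTD:

```python
# Energy-balance sketch for sizing a heat exchanger duty. All stream values
# are assumed for illustration and would normally come from the plant's
# material balance.
m_dot = 12.0               # process stream mass flow, kg/s (assumed)
cp = 2.1e3                 # stream heat capacity, J/(kg*K) (assumed)
t_in, t_out = 40.0, 120.0  # required temperature change, deg C (assumed)

duty_watts = m_dot * cp * (t_out - t_in)   # heating duty the exchanger must supply
print(f"Required exchanger duty: {duty_watts / 1e6:.2f} MW")

# Rough heat-transfer area from Q = U * A * LMTD, with an assumed overall
# heat-transfer coefficient and log-mean temperature difference.
U = 500.0     # W/(m^2*K), order-of-magnitude value (assumed)
lmtd = 35.0   # K (assumed)
area_m2 = duty_watts / (U * lmtd)
print(f"Approximate heat-transfer area: {area_m2:.0f} m^2")
```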
Any closed system in a plant which has a means of pressurizing possibly beyond the rating of its equipment, such as heating, exothermic reactions, or certain pumps or compressors, should have an appropriately sized pressure relief valve included to prevent overpressurization for safety. Frequently all of these parameters (temperatures, pressures, flow, etc.) are exhaustively analyzed in combination through a Hazop or fault tree analysis, to ensure that the plant has no known risk of serious hazard. Within any constraints the plant is subject to, design parameters are optimized for good economic performance while ensuring the safety and welfare of personnel and the surrounding community. For flexibility, a plant may be designed to operate in a range around some optimal design parameters in case feedstock or economic conditions change and re-optimization is desirable. In more modern times, computer simulations or other computer calculations have been used to help in chemical plant design or optimization. Plant operation Process control In process control, information gathered automatically from various sensors or other devices in the plant is used to control various equipment for running the plant, thereby controlling operation of the plant. Instruments receiving such information signals and sending out control signals to perform this function automatically are process controllers. Previously, pneumatic controls were sometimes used. Electrical controls are now common. A plant often has a control room with displays of parameters such as key temperatures, pressures, fluid flow rates and levels, operating positions of key valves, pumps, and other equipment, etc. In addition, operators in the control room can control various aspects of the plant operation, often including overriding automatic control. Process control with a computer represents more modern technology. Based on possible changing feedstock composition, changing products requirements or economics, or other changes in constraints, operating conditions may be re-optimized to maximize profit. Workers As in any industrial setting, there are a variety of workers working throughout a chemical plant facility, often organized into departments, sections, or other work groups. Such workers typically include engineers, plant operators, and maintenance technicians. Other personnel at the site could include chemists, management/administration, and office workers. Types of engineers involved in operations or maintenance may include chemical process engineers, mechanical engineers for maintaining mechanical equipment, and electrical/computer engineers for electrical or computer equipment. Transport Large quantities of fluid feedstock or product may enter or leave a plant by pipeline, railroad tank car, or tanker truck. For example, petroleum commonly comes to a refinery by pipeline. Pipelines can also carry petrochemical feedstock from a refinery to a nearby petrochemical plant. Natural gas is a product which comes all the way from a natural gas processing plant to final consumers by pipeline or tubing. Large quantities of liquid feedstock are typically pumped into process units. Smaller quantities of feedstock or product may be shipped to or from a plant in drums. Use of drums about 55 gallons in capacity is common for packaging industrial quantities of chemicals. Smaller batches of feedstock may be added from drums or other containers to process units by workers. 
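As a minimal sketch of the feedback idea behind the process control described above (the process model, gains, and numbers are invented for illustration; a real plant would use a tuned industrial controller), a proportional–integral loop adjusting a heating valve might look like this:

```python
# Toy proportional-integral (PI) control loop: the controller reads a measured
# temperature, compares it with a setpoint, and adjusts a heating valve.
setpoint = 150.0      # desired temperature, deg C
temperature = 25.0    # measured temperature, deg C (starts at ambient)
kp, ki = 0.02, 0.002  # proportional and integral gains (assumed)
integral = 0.0
dt, tau = 1.0, 10.0   # time step and crude process time constant (assumed)

for _ in range(300):
    error = setpoint - temperature
    integral += error * dt
    # PI control law, with the valve clamped between fully closed and fully open.
    valve = min(1.0, max(0.0, kp * error + ki * integral))
    # Crude first-order process model: steady-state temperature ~ 25 + 200 * valve.
    temperature += (25.0 + 200.0 * valve - temperature) * dt / tau

print(f"Valve position: {valve:.2f}, temperature: {temperature:.1f} deg C")
```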
Maintenance In addition to feeding and operating the plant, and packaging or preparing the product for shipping, plant workers are needed for taking samples for routine and troubleshooting analysis and for performing routine and non-routine maintenance. Routine maintenance can include periodic inspections and replacement of worn catalyst, analyzer reagents, various sensors, or mechanical parts. Non-routine maintenance can include investigating problems and then fixing them, such as leaks, failure to meet feed or product specifications, mechanical failures of valves, pumps, compressors, sensors, etc. Statutory and regulatory compliance Safety is a concern when working with chemicals, in order to avoid problems such as chemical accidents. In the United States, the law requires that employers provide workers working with chemicals with access to a material safety data sheet (MSDS) for every kind of chemical they work with. An MSDS for a certain chemical is prepared and provided by the supplier to whoever buys the chemical. Other laws covering chemical safety, hazardous waste, and pollution must be observed, including statutes such as the Resource Conservation and Recovery Act (RCRA) and the Toxic Substances Control Act (TSCA), and regulations such as the Chemical Facility Anti-Terrorism Standards in the United States. Hazmat (hazardous materials) teams are trained to deal with chemical leaks or spills. Process Hazard Analysis (PHA) is used to assess potential hazards in chemical plants. In 1998, the U.S. Chemical Safety and Hazard Investigation Board became operational. Clustering of commodity chemical plants Chemical plants, particularly those used for commodity chemical and petrochemical manufacture, are located in relatively few manufacturing locations around the world, largely due to infrastructural needs. This is less important for speciality or fine chemical batch plants. Not all commodity/petrochemicals are produced in any one location but groups of related materials often are, to induce industrial symbiosis as well as material, energy and utility efficiency and other economies of scale. These manufacturing locations often have business clusters of units called chemical plants that share utilities and large scale infrastructure such as power stations, port facilities, road and rail terminals. In the United Kingdom, for example, there are four main locations for commodity chemical manufacture: near the River Mersey in Northwest England, on the Humber on the East coast of Yorkshire, in Grangemouth near the Firth of Forth in Scotland and on Teesside as part of the Northeast of England Process Industry Cluster (NEPIC). Approximately 50% of the UK's petrochemicals, which are also commodity chemicals, are produced by the industry cluster companies on Teesside at the mouth of the River Tees on three large chemical parks at Wilton, Billingham and Seal Sands. Corrosion and use of new materials Corrosion in chemical process plants is a major issue that consumes billions of dollars yearly. Electrochemical corrosion of metals is pronounced in chemical process plants due to the presence of acid fumes and other electrolytic interactions. More recently, FRP (fibre-reinforced plastic) has been used as a material of construction. The British standard specification BS4994 is widely used for design and construction of the vessels, tanks, etc.
Technology
Material and chemical
null
5759122
https://en.wikipedia.org/wiki/Sthenurus
Sthenurus
Sthenurus ("strong tail") is an extinct genus of kangaroos. With a length around 3 m (10 ft), some species were twice as large as modern extant species. Sthenurus was related to the better-known Procoptodon. The subfamily Sthenurinae is believed to have separated from its sister taxon, the Macropodinae (kangaroos and wallabies), halfway through the Miocene, and then its population grew during the Pliocene. Fossil habitats A 1997 study analysed the diets of the fauna at various fossil site localities in South Australia, using stable carbon isotope analysis 13C/12C of collagen. It found that at older localities such as Cooper Creek, the species of Sthenurus were adapted to a diet of leaves and twigs (browsing) due to the wet climate of the time between 132 and 108 thousand years ago (kya - by thermoluminescence dating and uranium dating), which allowed for a more varied vegetation cover. At the Baldina Creek fossil site 30 kya (C14 dating), the genus had transitioned to a diet of grass-grazing. During this time, the area was open grasslands with sparse tree cover as the continent was drier than today, but at Dempsey's Lake (36-25 kya) and Rockey River (19 kya C14 dating), their diet was of both grazing and browsing. This analysis may be because of a wetter climatic period. The overall anatomy of the genus did not alter in response to the change in diet and dentition did not adapt to the varying toughness of the vegetation between grasses, shrubs, and trees. Other animals found in the Cuddie Springs habitat include the flightless bird Genyornis, the red kangaroo, Diprotodon, humans, and many others. Examination of skeletal remains of Sthenurus from Lake Callabonna, northern South Australia, revealed that as the animals were trapped as they floundered in the clay mud while attempting to cross the floor of the lake during low-water or dry times. The data show that three closely allied sthenurine species coexisted sympatrically at Lake Callabonna: a new giant taxon, S. stirlingi, an intermediate-sized S. tindalei, and the considerably smaller S. andersoni. Comparative osteology of these Sthenurus species with Macropus giganteus emphasizes how different sthenurine kangaroos were from extant kangaroos, especially with the sthenurines' short, deep skulls, long front feet with very reduced lateral digits, and the monodactyl hind feet. Teapot Creek, a tributary of the MacLaughlin River in the Southern Monaro, southeastern New South Wales, contains a sequence of terraces. The highest and oldest of these terraces was reported to contain the remains of fossil mammals found in Plio-Pleistocene fossil deposits elsewhere in eastern Australia. Sthenurus atlas, S. occidentalis, and S. newtonae are some of the species identified from the fossils found in the terrace. Paleodiet Examining the structure and lifestyle of this species is difficult because not much material has surfaced in regards to them. However, even within the rarity of discoveries relating to the kangaroo-like species, scientists were able to use their findings to learn more about their lifestyles. For example, scientists broke down the few bones that they had discovered during the process of isotope analysis (which is the study of the distribution of certain isotopes that ease the process of drawing conclusions when determining food chains) and retrieved material which allowed them to draw the conclusion regarding their paleodiet. 
These animals were herbivores, as the retrieved material was traced back to the plant life of Australia (where their bones were found). Anatomy In anatomy, they had a tail shorter but stronger than present species of kangaroos, and only one toe instead of three like the red kangaroo. At the end of the foot was a small hoof-like nail suited for flat terrain; this toe is considered their fourth toe. Their skeletal structure was very robust with powerful hind limbs, a broad pelvis, a short neck, and longer arms and phalanges than modern species. Their phalanges may have been used to hold stems and twigs. These unique adaptations suited their feeding habits of browsing in the case of S. occidentalis, but other species were most likely grazers. The body mass of the largest species is estimated to be nearly three times that of the largest extant species. Due to their great height and weight, the largest species possibly did not hop as a form of locomotion, but rather walked bipedally in a similar manner to hominids. This gait would have been used at slow speeds, since hopping at slow speeds would have been inefficient. Pentapedal movement and bipedal hopping no longer seem to have been options for these massive kangaroos. A morphological difference exists between the scapulae (shoulder blades) of the sthenurines and the extant and extinct macropodids. They possessed a short, deep skull, which was suited for stereoscopic vision; this allowed for better depth perception. Skull S. stirlingi had a large, dolichocephalic skull with a more elevated braincase position and an inflated nasal-frontal region in comparison to the contemporaneous skull of S. tindalei. S. andersoni skull fossils show a dome-like forehead that is unique to it among other dolichocephalic sthenurines. This is attributed to the continuous high vaulting of the frontals above the orbits and the line of the rostrum. Teeth These structures were tough and strongly enamelled, useful for tough vegetation and with a striation pattern. In S. stirlingi, fossil evidence shows that the tooth row curves medially (anteriorly and posteriorly) from a line tangential to the labial side of the molars at the anterior ridge of the masseteric processes. The fossils of teeth may also suggest that the sthenurines and macropodines shared a common ancestor. They share many synapomorphic character states. They each have well-developed lophs on molars and both lack a posthypocristid. Human interaction From evidence gathered at Cuddie Springs, Native Australians inhabited the same habitat as that of Sthenurus and various other extant and extinct species of animals. At this locality, there seems to be a lack of any specific tools suitable for hunting. Instead, tools used to cut meat off the bone and blood residue left on the stone tools were found. Any material made of wood for hunting, such as the boomerang and spear, has either not survived intact or was not used by the people of the time in this locality. While this evidence may suggest that human contact with Sthenurus spp. and the remainder of the Australian megafauna could have caused the extinction of these mammals, some studies show the extinction was probably under way before human contact. Sthenurus spp. were herbivores, and when a great climate change began to occur, they did not change their eating habits. This probably had a much larger impact on the extinction of this particular genus.
Biology and health sciences
Diprotodontia
Animals
11200529
https://en.wikipedia.org/wiki/Manufacturing%20engineering
Manufacturing engineering
Manufacturing engineering or production engineering is a branch of professional engineering that shares many common concepts and ideas with other fields of engineering such as mechanical, chemical, electrical, and industrial engineering. Manufacturing engineering requires the ability to plan the practices of manufacturing; to research and to develop tools, processes, machines, and equipment; and to integrate the facilities and systems for producing quality products with the optimum expenditure of capital. The manufacturing or production engineer's primary focus is to turn raw material into an updated or new product in the most effective, efficient & economic way possible. An example would be a company using computer-integrated technology to produce its product faster and with less human labor. Overview Manufacturing Engineering is based on core industrial engineering and mechanical engineering skills, adding important elements from mechatronics, commerce, economics, and business management. This field also deals with the integration of different facilities and systems for producing quality products (with optimal expenditure) by applying the principles of physics and the results of manufacturing systems studies, such as the following: Craft Putting-out system British factory system American system of manufacturing Mass production Computer integrated manufacturing Computer-aided technologies in manufacturing Just in time manufacturing Lean manufacturing Flexible manufacturing Mass customization Agile manufacturing Rapid manufacturing Prefabrication Ownership Fabrication Publication Additive manufacturing Manufacturing engineers develop and create physical artifacts, production processes, and technology. It is a very broad area which includes the design and development of products. Manufacturing engineering is considered to be a subdiscipline of industrial engineering/systems engineering and has very strong overlaps with mechanical engineering. Manufacturing engineers' success or failure directly impacts the advancement of technology and the spread of innovation. This field of manufacturing engineering emerged from the tool and die discipline in the early 20th century. It expanded greatly from the 1960s when industrialized countries introduced factories with: 1. Numerical control machine tools and automated systems of production. 2. Advanced statistical methods of quality control: These factories were pioneered by the American electrical engineer William Edwards Deming, who was initially ignored by his home country. The same methods of quality control later turned Japanese factories into world leaders in cost-effectiveness and production quality. 3. Industrial robots on the factory floor, introduced in the late 1970s: These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This cut costs and improved production speed. History The history of manufacturing engineering can be traced to factories in the mid-19th century USA and 18th century UK. Although large home production sites and workshops were established in China, ancient Rome, and the Middle East, the Venice Arsenal provides one of the first examples of a factory in the modern sense of the word. Founded in 1104 in the Republic of Venice several hundred years before the Industrial Revolution, this factory mass-produced ships on assembly lines using manufactured parts.
The Venice Arsenal apparently produced nearly one ship every day and, at its height, employed 16,000 people. Many historians regard Matthew Boulton's Soho Manufactory (established in 1761 in Birmingham) as the first modern factory. Similar claims can be made for John Lombe's silk mill in Derby (1721), or Richard Arkwright's Cromford Mill (1771). The Cromford Mill was purpose-built to accommodate the equipment it held and to take the material through the various manufacturing processes. One historian, Jack Weatherford, contends that the first factory was in Potosí. The Potosi factory took advantage of the abundant silver that was mined nearby and processed silver ingot slugs into coins. British colonies in the 19th century built factories simply as buildings where a large number of workers gathered to perform hand labor, usually in textile production. This proved more efficient for the administration and distribution of materials to individual workers than earlier methods of manufacturing, such as cottage industries or the putting-out system. Cotton mills used inventions such as the steam engine and the power loom to pioneer the industrial factories of the 19th century, where precision machine tools and replaceable parts allowed greater efficiency and less waste. This experience formed the basis for the later studies of manufacturing engineering. Between 1820 and 1850, non-mechanized factories supplanted traditional artisan shops as the predominant form of manufacturing institution. Henry Ford further revolutionized the factory concept and thus manufacturing engineering in the early 20th century with the innovation of mass production. Highly specialized workers situated alongside a series of rolling ramps would build up a product such as (in Ford's case) an automobile. This concept dramatically decreased production costs for virtually all manufactured goods and brought about the age of consumerism. Modern developments Modern manufacturing engineering studies include all intermediate processes required for the production and integration of a product's components. Some industries, such as semiconductor and steel manufacturers use the term "fabrication" for these processes. Automation is used in different processes of manufacturing such as machining and welding. Automated manufacturing refers to the application of automation to produce goods in a factory. The main advantages of automated manufacturing for the manufacturing process are realized with effective implementation of automation and include higher consistency and quality, reduction of lead times, simplification of production, reduced handling, improved workflow, and improved worker morale. Robotics is the application of mechatronics and automation to create robots, which are often used in manufacturing to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in manufacturing engineering. Robots allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, and some factories are so robotized that they can run by themselves. 
Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications. Education Manufacturing Engineers Manufacturing Engineers focus on the design, development, and operation of integrated systems of production to obtain high quality & economically competitive products. These systems may include material handling equipment, machine tools, robots, or even computers or networks of computers. Certification Programs Manufacturing engineers possess an associate's or bachelor's degree in engineering with a major in manufacturing engineering. The length of study for such a degree is usually two to five years followed by five more years of professional practice to qualify as a professional engineer. Working as a manufacturing engineering technologist involves a more applications-oriented qualification path. Academic degrees for manufacturing engineers are usually the Associate or Bachelor of Engineering, [BE] or [BEng], and the Associate or Bachelor of Science, [BS] or [BSc]. For manufacturing technologists the required degrees are Associate or Bachelor of Technology [B.TECH] or Associate or Bachelor of Applied Science [BASc] in Manufacturing, depending upon the university. Master's degrees in engineering manufacturing include Master of Engineering [ME] or [MEng] in Manufacturing, Master of Science [M.Sc] in Manufacturing Management, Master of Science [M.Sc] in Industrial and Production Management, and Master of Science [M.Sc] as well as Master of Engineering [ME] in Design, which is a subdiscipline of manufacturing. Doctoral [PhD] or [DEng] level courses in manufacturing are also available depending on the university. The undergraduate degree curriculum generally includes courses in physics, mathematics, computer science, project management, and specific topics in mechanical and manufacturing engineering. Initially, such topics cover most, if not all, of the subdisciplines of manufacturing engineering. Students then choose to specialize in one or more subdisciplines towards the end of their degree work. Syllabus The Foundational Curriculum for a Bachelor's Degree in Manufacturing Engineering or Production Engineering includes below mentioned syllabus. This syllabus is closely related to Industrial Engineering and Mechanical Engineering, but it differs by placing more emphasis on Manufacturing Science or Production Science. It includes the following areas: Mathematics (Calculus, Differential Equations, Statistics and Linear Algebra) Mechanics (Statics & Dynamics) Solid Mechanics Fluid Mechanics Materials Science Strength of Materials Fluid Dynamics Hydraulics Pneumatics HVAC (Heating, Ventilation & Air Conditioning) Heat Transfer Applied Thermodynamics Energy Conversion Instrumentation and Measurement Engineering Drawing (Drafting) & Engineering Design Engineering Graphics Mechanism Design including Kinematics and Dynamics Manufacturing Processes Mechatronics Circuit Analysis Lean Manufacturing Automation Reverse Engineering Quality Control CAD (Computer Aided Design) CAM (Computer Aided Manufacturing) Project Management A degree in Manufacturing Engineering typically differs from Mechanical Engineering in only a few specialized classes. Mechanical Engineering degrees focus more on the product design process and on complex products which requires more mathematical expertise. 
Manufacturing engineering certification Certification and licensure: In some countries, "professional engineer" is the term for registered or licensed engineers who are permitted to offer their professional services directly to the public. Professional Engineer, abbreviated (PE - USA) or (PEng - Canada), is the designation for licensure in North America. To qualify for this license, a candidate needs a bachelor's degree from an ABET-recognized university in the USA, a passing score on a state examination, and four years of work experience usually gained via a structured internship. In the USA, more recent graduates have the option of dividing this licensure process into two segments. The Fundamentals of Engineering (FE) exam is often taken immediately after graduation and the Principles and Practice of Engineering exam is taken after four years of working in a chosen engineering field. Society of Manufacturing Engineers (SME) certification (USA): The SME administers qualifications specifically for the manufacturing industry. These are not degree level qualifications and are not recognized at the professional engineering level. The following discussion deals with qualifications in the USA only. Qualified candidates for the Certified Manufacturing Technologist Certificate (CMfgT) must pass a three-hour, 130-question multiple-choice exam. The exam covers math, manufacturing processes, manufacturing management, automation, and related subjects. Additionally, a candidate must have at least four years of combined education and manufacturing-related work experience. Certified Manufacturing Engineer (CMfgE) is an engineering qualification administered by the Society of Manufacturing Engineers, Dearborn, Michigan, USA. Candidates qualifying for a Certified Manufacturing Engineer credential must pass a four-hour, 180-question multiple-choice exam which covers more in-depth topics than the CMfgT exam. CMfgE candidates must also have eight years of combined education and manufacturing-related work experience, with a minimum of four years of work experience. Certified Engineering Manager (CEM). The Certified Engineering Manager Certificate is also designed for engineers with eight years of combined education and manufacturing experience. The test is four hours long and has 160 multiple-choice questions. The CEM certification exam covers business processes, teamwork, responsibility, and other management-related categories. Modern tools Many manufacturing companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and ease of use in designing mating interfaces and tolerances. Other CAE programs commonly used by product manufacturers include product life cycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. 
No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of relatively few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows. Just as manufacturing engineering is linked with other disciplines, such as mechatronics, multidisciplinary design optimization (MDO) is also being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also utilize sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems. On the business side of manufacturing engineering, enterprise resource planning (ERP) tools can overlap with PLM tools and use connector programs with CAD tools to share drawings, sync revisions, and be the master for certain data used in the other modern tools above, like part numbers and descriptions. Manufacturing Engineering around the world Manufacturing engineering is an extremely important discipline worldwide. It goes by different names in different countries. In the United States and the continental European Union it is commonly known as Industrial Engineering and in the United Kingdom and Australia it is called Manufacturing Engineering. Subdisciplines Mechanics Mechanics, in the most general sense, is the study of forces and their effects on matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include: Statics, the study of non-moving bodies under known loads Dynamics (or kinetics), the study of how forces affect moving bodies Mechanics of materials, the study of how different materials deform under various types of stress Fluid mechanics, the study of how fluids react to forces Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete) If the engineering project were to design a vehicle, statics might be employed to design the frame of the vehicle to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the manufacture of the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle or to design the intake system for the engine. Kinematics Kinematics is the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. The movement of a crane and the oscillations of a piston in an engine are both simple kinematic systems. The crane is a type of open kinematic chain, while the piston is part of a closed four-bar linkage. Engineers typically use kinematics in the design and analysis of mechanisms. Kinematics can be used to find the possible range of motion for a given mechanism, or, working in reverse, can be used to design a mechanism that has a desired range of motion. 
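As a brief illustration of the kind of kinematic calculation described above (a sketch with arbitrary dimensions, not taken from the article), the position of the piston in a slider-crank mechanism, and hence its range of motion, follows directly from the loop-closure geometry:

```python
import math

# Slider-crank (piston) kinematics: given crank radius r, connecting-rod
# length l, and crank angle theta, the piston position along the slider axis is
#   x = r*cos(theta) + sqrt(l**2 - (r*sin(theta))**2)
r = 0.05   # crank radius, m (assumed)
l = 0.15   # connecting rod length, m (assumed)

def piston_position(theta_rad):
    """Piston position measured from the crank pivot, in metres."""
    return r * math.cos(theta_rad) + math.sqrt(l**2 - (r * math.sin(theta_rad))**2)

# Range of motion over a full crank revolution.
positions = [piston_position(math.radians(a)) for a in range(0, 360, 5)]
print(f"Stroke: {max(positions) - min(positions):.3f} m")   # equals 2*r = 0.100 m
```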
Drafting Drafting or technical drawing is the means by which manufacturers create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings, but this is becoming an increasing rarity with the advent of computer numerically controlled (CNC) manufacturing. Engineers primarily manufacture parts manually in the areas of applied spray coatings, finishes, and other processes that cannot economically or practically be done by a machine. Drafting is used in nearly every subdiscipline of mechanical and manufacturing engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD). Machine tools and metal fabrication Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and providing a guided movement of the parts of the machine. Metal fabrication is the building of metal structures by cutting, bending, and assembling processes. Computer Integrated Manufacturing Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries. Mechatronics Mechatronics is an engineering discipline that deals with the convergence of electrical, mechanical and manufacturing systems. Such combined systems are known as electromechanical systems and are widespread. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various aircraft and automobile subsystems. The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as Microelectromechanical systems (MEMS), are used in automobiles to initiate the deployment of airbags, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In the future, it is hoped that such devices will be used in tiny implantable medical devices and to improve optical communication. Textile engineering Textile engineering courses deal with the application of scientific and engineering principles to the design and control of all aspects of fiber, textile, and apparel processes, products, and machinery. These include natural and man-made materials, interaction of materials with machines, safety and health, energy conservation, and waste and pollution control. 
Additionally, students are given experience in plant design and layout, machine and wet process design and improvement, and designing and creating textile products. Throughout the textile engineering curriculum, students take classes from other engineering and disciplines including: mechanical, chemical, materials and industrial engineering. Advanced composite materials Advanced composite materials (engineering) (ACMs) are also known as advanced polymer matrix composites. These are generally characterized or determined by unusually high strength fibres with unusually high stiffness, or modulus of elasticity characteristics, compared to other materials, while bound together by weaker matrices. Advanced composite materials have broad, proven applications, in the aircraft, aerospace, and sports equipment sectors. Even more specifically ACMs are very attractive for aircraft and aerospace structural parts. Manufacturing ACMs is a multibillion-dollar industry worldwide. Composite products range from skateboards to components of the space shuttle. The industry can be generally divided into two basic segments, industrial composites and advanced composites. Employment Manufacturing engineering is just one facet of the engineering manufacturing industry. Manufacturing engineers enjoy improving the production process from start to finish. They have the ability to keep the whole production process in mind as they focus on a particular portion of the process. Successful students in manufacturing engineering degree programs are inspired by the notion of starting with a natural resource, such as a block of wood, and ending with a usable, valuable product, such as a desk, produced efficiently and economically. Manufacturing engineers are closely connected with engineering and industrial design efforts. Examples of major companies that employ manufacturing engineers in the United States include General Motors Corporation, Ford Motor Company, Chrysler, Boeing, Gates Corporation and Pfizer. Examples in Europe include Airbus, Daimler, BMW, Fiat, Navistar International, and Michelin Tyre. Industries where manufacturing engineers are generally employed include: Aerospace industry Automotive industry Chemical industry Computer industry Engineering management Food processing industry Garment industry Industrial engineering Mechanical engineering Pharmaceutical industry Process engineering Pulp and paper industry Systems engineering Toy industry Frontiers of research Flexible manufacturing systems A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react to changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, both of which have numerous subcategories. The first category, machine flexibility, covers the system's ability to be changed to produce new product types and the ability to change the order of operations executed on a part. The second category, called routing flexibility, consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability. Most FMS systems comprise three main systems. The work machines, which are often automated CNC machines, are connected by a material handling system to optimize parts flow, and to a central control computer, which controls material movements and machine flow. 
The main advantage of an FMS is its high flexibility in managing manufacturing resources such as time and effort in order to manufacture a new product. The best application of an FMS is found in the production of small sets of products, rather than in mass production. Computer integrated manufacturing Computer-integrated manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separated process methods are joined through a computer by CIM. This integration allows the processes to exchange information and to initiate actions. Through this integration, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically, CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing. Friction stir welding Friction stir welding was invented in 1991 at The Welding Institute (TWI). This innovative steady state (non-fusion) welding technique joins previously un-weldable materials, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Uses of this technology to date include: welding the seams of the aluminum main space shuttle external tank, the Orion Crew Vehicle test article, Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the new Eclipse 500 aircraft from Eclipse Aviation, among a growing range of uses. Other areas of research are Product Design, MEMS (Micro-Electro-Mechanical Systems), Lean Manufacturing, Intelligent Manufacturing Systems, Green Manufacturing, Precision Engineering, Smart Materials, etc.
Technology
Disciplines
null
4321490
https://en.wikipedia.org/wiki/Fish%20trap
Fish trap
A fish trap is a trap used for catching fish and other aquatic animals of value. Fish traps include fishing weirs, cage traps, fish wheels and some fishing net rigs such as fyke nets. The use of traps is culturally almost universal around the world, and traps seem to have been independently invented many times. There are two main types of trap: a permanent or semi-permanent structure placed in a river or tidal area, and a bottle or pot trap that is usually, but not always, baited to attract prey and is periodically lifted out of the water. A typical contemporary trap consists of a frame of thick steel wire in the shape of a heart, with chicken wire stretched around it. The mesh wraps around the frame and then tapers into the inside of the trap. Fish that swim inside through this opening cannot get out, as the chicken wire opening bends back into its original narrowness. In earlier times, traps were constructed of wood and fibre. Fish traps contribute to the problems of marine debris and bycatch. History Traps are culturally almost universal and seem to have been independently invented many times. There are essentially two types of trap: a permanent or semi-permanent structure placed in a river or tidal area, and a bottle or pot trap that is usually, but not always, baited to attract prey and is periodically lifted out of the water. The Mediterranean Sea is shaped according to the principle of a bottle trap. It is easy for fish from the Atlantic Ocean to swim into the Mediterranean through the narrow neck at Gibraltar, and difficult for them to find their way out. It has been described as "the largest fish trap in the world". The prehistoric Yaghan people who inhabited the Tierra del Fuego area constructed stonework in shallow inlets that would effectively confine fish at low tide levels. Some of this extant stonework survives at Bahia Wulaia at the Bahia Wulaia Dome Middens archaeological site. In southern Italy, during the 17th century, a new fishing technique began to be used. The trabucco is an old fishing machine typical of the coast of Gargano, protected as a historical monument by the eponymous national park. This giant trap, built of structural wood, is found along the coast of the southern Adriatic, especially in the province of Foggia, in some areas of the Abruzzese coastline, and also in some parts of the coast of the southern Tyrrhenian Sea. The Stilbaai Tidal Fish Traps are ancient intertidal stonewall fish traps that occur in various spots on the Western Cape coast of South Africa from Gansbaai to Mosselbaai. The existing fish traps that can still be seen have been built during the past 300 years, some as recently as the latter part of the 20th century, whilst others could date as far back as 3,000 years. Indigenous Australians were, prior to European colonization, most populous in Australia's better-watered areas such as the Murray-Darling river system of the south-east. Here, where water levels fluctuate seasonally, they constructed ingenious stone fish traps. Most have been completely or partially destroyed. The largest and best-known are those on the Barwon River at Brewarrina, New South Wales, which are at least partly preserved. The Brewarrina fish traps caught huge numbers of migratory native fish as the Barwon River rose in flood and then fell. In southern Victoria, such as at Budj Bim (now a UNESCO World Heritage Site), indigenous people created an elaborate system of canals, some more than 2 km long.
The purpose of these canals was to attract and catch eels, a fish of short coastal rivers (as opposed to rivers of the Murray-Darling system). The eels were caught by a variety of traps, including stone walls constructed across canals with a net placed across an opening in the wall. Traps at different levels in the marsh came into operation as the water level rose and fell. The traps at Budj Bim are seen as a form of Indigenous aquaculture dating back at least 6,600 years (older than the Pyramids of Giza), with the Muldoon trap system regarded as the world's oldest stone-walled fish trap and the longest-used fish trap in the world. Somewhat similar stone-wall traps were constructed by Native American Pit River people in north-eastern California. In South Australia, the Barngarla people of Eyre Peninsula combined the use of fish traps with singing "to call sharks and dolphins to chase the fish into the fish traps, where the Barngarla people would appear to spear and stone the fish." A technique called dam fishing is used by the Baka pygmies. This involves the construction of a temporary dam resulting in a drop in the water levels downstream, allowing fish to be easily collected. Fish traps were also used in Chile, mainly in Chiloé, where they were unusually abundant. Types and methods The manner in which fish traps are used depends on local conditions and the behaviour of the local fish. For example, a fish trap might be placed in shallow water near rocks where pike like to lie. If placed correctly, traps can be very effective. It is usually not necessary to check the trap daily, since the fish remain alive inside the trap, relatively unhurt. Because of this, the trap also allows for the release of undersized fish as per fishing regulations. Fish traps contribute to the problem of marine debris, unless they are made of biodegradable material, says a United Nations report. For example, fishers lost 31,600 crab traps in Bristol Bay, Alaska, over a period of two years. Each year, fisheries in Chesapeake Bay (Northeastern United States) lose or abandon 12 to 20 percent of their crab traps, according to a government report. These traps continue to trap animals. Fish traps can also trap protected species such as the platypus in Australia. Portable traps These are usually in the shape of a pot or bottle. Fixed and semi-fixed structures
Technology
Hunting and fishing
null
4322402
https://en.wikipedia.org/wiki/Globally%20Harmonized%20System%20of%20Classification%20and%20Labelling%20of%20Chemicals
Globally Harmonized System of Classification and Labelling of Chemicals
The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is an internationally agreed-upon standard managed by the United Nations that was set up to replace the assortment of hazardous material classification and labelling schemes previously used around the world. Core elements of the GHS include standardized hazard testing criteria, universal warning pictograms, and safety data sheets which provide users of dangerous goods with relevant information in a consistent format. The system acts as a complement to the UN numbered system of regulated hazardous material transport. Implementation is managed through the UN Secretariat. Although adoption has taken time, as of 2017, the system has been enacted to a significant extent in most major countries of the world. This includes the European Union, which has implemented the United Nations' GHS into EU law as the CLP Regulation, and United States Occupational Safety and Health Administration standards. History Before the GHS was created and implemented, there were many different regulations on hazard classification in use in different countries, resulting in multiple standards, classifications and labels for the same hazard. Given the $1.7 trillion per year international trade in chemicals requiring hazard classification, the cost of compliance with multiple systems of classification and labeling is significant. Developing a worldwide standard accepted as an alternative to local and regional systems presented an opportunity to reduce costs and improve compliance. The development of the GHS began at the 1992 United Nations Conference on Environment and Development in Rio de Janeiro, also called the Earth Summit (1992), when the International Labour Organization (ILO), the Organisation for Economic Co-operation and Development (OECD), various governments, and other stakeholders agreed that "A globally harmonized hazard classification and compatible labelling system, including material safety data sheets and easily understandable symbols, should be available if feasible, by the year 2000". The universal standard for all countries was to replace all the diverse classification systems; however, it is not a compulsory provision of any treaty. The GHS provides a common infrastructure for participating countries to use when implementing a hazard classification and Hazard Communication Standard. Hazard classification The GHS classification system defines and classifies the physical, health, and/or environmental hazards of a substance. Each category within the classifications has associated pictograms to be used when applied to a material or mixture. Physical hazards As of the 10th revision of the GHS, substances or articles are assigned to 17 different hazard classes largely based on the United Nations Dangerous Goods System. Explosives are assigned to one of four subcategories depending on the type of hazard they present, similar to the categories used in the UN Dangerous Goods System. Category 1 includes explosives not covered by the 6 Dangerous Goods categories. Flammable gases are assigned to one of 3 categories based on reactivity: Category 1A includes extremely flammable gases ignitable at 20 °C and standard pressure of 101.3 kPa, pyrophoric gases, and chemically unstable gases that may react in the absence of oxygen. Category 1B gases meet the flammability criteria of 1A, but are not pyrophoric or chemically unstable and have either a lower flammability limit in air of more than 6% by volume or a fundamental burning velocity of less than 10 cm/s. 
Category 2 includes gases which do not meet the above criteria but otherwise are flammable at 20 °C and standard pressure. Aerosols and chemicals under pressure are categorized into one of 3 categories, but may be additionally classified as explosives or flammable gases if material properties match the previous classifications. From category 1 to 3, aerosols are classified as most to least flammable. All aerosols under these categories carry a bursting hazard. Oxidizing gases are gaseous substances which contribute to the combustion of other materials more than air would. There is only one category of oxidizing gases. Gases under pressure are categorized as compressed, liquefied, refrigerated, or dissolved gases, all of which may explode when heated or (in the case of refrigerated gases) cause cryogenic injury, such as frostbite. Flammable liquids are categorized by flammability, from Category 1 with flash point < 23 °C and initial boiling point < 35 °C to Category 4 with flash point > 60 °C and < 93 °C. Flammable solids are classified as solid substances which are readily combustible or may contribute to a fire through friction, and ignitable metal powders. They are placed into Category 1 if a fire is not stopped by wetting the substance, and Category 2 if wetting stops the fire for at least 4 minutes. Self-reactive substances and mixtures are liable to detonate or combust without the participation of air and are placed into 7 categories from A to G with decreasing reactivity. Pyrophoric liquids are liable to ignite within five minutes of coming into contact with air. Pyrophoric solids follow the same criteria as pyrophoric liquids. Self-heating substances differ from self-reactive substances in that they only ignite in large quantities (kilograms) and after a long period of time (hours or days). Category 1 is reserved for samples which self-heat in small quantities (a 25 mm cube sample), and all other self-heating substances that only heat in large quantities are listed under Category 2. Substances and mixtures which, in contact with water, emit flammable gases are categorized from 1 to 3 based on the ignitability of the gas emitted. Oxidizing liquids contribute to the combustion of other materials and are categorized from 1 to 3 in decreasing oxidizing potential. Oxidizing solids follow the same criteria as oxidizing liquids. Organic peroxides are unstable substances or mixtures and may be derivatives of hydrogen peroxide. They are categorized from A to G based on inherent ability to explode or otherwise combust. Materials corrosive to metals may damage or destroy metals, based on tests done on aluminum and steel. The corrosion rate must be greater than 6.25 mm/year on either material to qualify under this classification. Desensitized explosives are materials that would otherwise be classified as explosive, but have been stabilized, or phlegmatized, to be exempted from said class. Health hazards Acute toxicity includes five GHS categories from which the appropriate elements relevant to transport, consumer, worker and environment protection can be selected. Substances are assigned to one of the five toxicity categories on the basis of LD50 (oral, dermal) or LC50 (inhalation). Skin corrosion means the production of irreversible damage to the skin following the application of a test substance for up to 4 hours. Substances and mixtures in this hazard class are assigned to a single harmonized corrosion category. 
Skin irritation means the production of reversible damage to the skin following the application of a test substance for up to 4 hours. Substances and mixtures in this hazard class are assigned to a single irritant category. For those authorities, such as pesticide regulators, wanting more than one designation for skin irritation, an additional mild irritant category is provided. Serious eye damage means the production of tissue damage in the eye, or serious physical decay of vision, following application of a test substance to the front surface of the eye, which is not fully reversible within 21 days of application. Substances and mixtures in this hazard class are assigned to a single harmonized category. Eye irritation means changes in the eye following the application of a test substance to the front surface of the eye, which are fully reversible within 21 days of application. Substances and mixtures in this hazard class are assigned to a single harmonized hazard category. For authorities, such as pesticide regulators, wanting more than one designation for eye irritation, one of two subcategories can be selected, depending on whether the effects are reversible in 21 or 7 days. Respiratory sensitizer means a substance that induces hypersensitivity of the airways following inhalation of the substance. Substances and mixtures in this hazard class are assigned to one hazard category. Skin sensitizer means a substance that will induce an allergic response following skin contact. The definition for "skin sensitizer" is equivalent to "contact sensitizer". Substances and mixtures in this hazard class are assigned to one hazard category. Germ cell mutagenicity means an agent giving rise to an increased occurrence of mutations in populations of cells and/or organisms. Substances and mixtures in this hazard class are assigned to one of two hazard categories. Category 1 has two subcategories. Carcinogenicity means a chemical substance or a mixture of chemical substances that induce cancer or increase its incidence. Substances and mixtures in this hazard class are assigned to one of two hazard categories. Category 1 has two subcategories. Reproductive toxicity includes adverse effects on sexual function and fertility in adult males and females, as well as developmental toxicity in offspring. Substances and mixtures with reproductive and/or developmental effects are assigned to one of two hazard categories, 'known or presumed' and 'suspected'. Category 1 has two subcategories for reproductive and developmental effects. Materials which cause concern for the health of breastfed children have a separate category: effects on or via Lactation. Specific target organ toxicity (STOT) category distinguishes between single and repeated exposure for Target Organ Effects. All significant health effects, not otherwise specifically included in the GHS, that can impair function, both reversible and irreversible, immediate and/or delayed are included in the non-lethal target organ/systemic toxicity class (TOST). Narcotic effects and respiratory tract irritation are considered to be target organ systemic effects following a single exposure. Substances and mixtures of the single exposure target organ toxicity hazard class are assigned to one of three hazard categories. Substances and mixtures of the repeated exposure target organ toxicity hazard class are assigned to one of two hazard categories. 
Aspiration hazard includes severe acute effects such as chemical pneumonia, varying degrees of pulmonary injury or death following aspiration. Aspiration is the entry of a liquid or solid directly through the oral or nasal cavity, or indirectly from vomiting, into the trachea and lower respiratory system. Substances and mixtures of this hazard class are assigned to one of two hazard categories on the basis of viscosity. Environmental hazards Acute aquatic toxicity indicates the intrinsic property of a material to cause injury to an aquatic organism in a short-term exposure. Substances and mixtures of this hazard class are assigned to one of three toxicity categories on the basis of acute toxicity data: LC50 (fish) or EC50 (crustacean) or ErC50 (for algae or other aquatic plants). These acute toxicity categories may be subdivided or extended for certain sectors. Chronic aquatic toxicity indicates the potential or actual properties of a material to cause adverse effects to aquatic organisms during exposures that are determined in relation to the lifecycle of the organism. Substances and mixtures in this hazard class are assigned to one of four toxicity categories on the basis of acute data and environmental fate data: LC50 (fish), EC50 (crustacea) or ErC50 (for algae or other aquatic plants), together with degradation or bioaccumulation data. Ozone depleting potential indicates the ability of a material to damage the ozone layer, as determined under the Montreal Protocol. Substances and mixtures bearing this quality have the Hazard Statement H420. Classification of mixtures The GHS approach to the classification of mixtures for health and environmental hazards uses a tiered approach and is dependent upon the amount of information available for the mixture itself and for its components. Principles for the classification of mixtures have been developed, drawing on existing systems such as the European Union (EU) system for the classification of preparations laid down in Directive 1999/45/EC. The process for the classification of mixtures is based on the following steps: Where toxicological or ecotoxicological test data are available for the mixture itself, the classification of the mixture will be based on that data; Where test data are not available for the mixture itself, then the appropriate bridging principles should be applied, which use test data for components and/or similar mixtures; If (1) test data are not available for the mixture itself, and (2) the bridging principles cannot be applied, then use the calculation or cutoff values described in the specific endpoint to classify the mixture (a simplified illustrative sketch of such cut-offs appears at the end of this article). Substitute substances Companies are encouraged to replace hazardous substances with substances featuring a reduced health risk. To assist in assessing possible substitute substances, the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) has developed the Column Model. On the basis of just a small amount of information on a product, substitute substances can be evaluated with the support of this table. The current version, from 2020, already includes the amendments of the 12th CLP Adaptation Regulation 2019/521. Testing requirements The GHS generally defers to the United States Environmental Protection Agency and OECD to provide and verify toxicity testing requirements for substances or mixtures. 
Overall, the GHS criteria for determining health and environmental hazards are test method neutral, allowing different approaches as long as they are scientifically sound and validated according to international procedures and criteria already referred to in existing systems. Test data already generated for the classification of chemicals under existing systems should be accepted when classifying these chemicals under the GHS, thereby avoiding duplicative testing and the unnecessary use of test animals. For physical hazards, the test criteria are linked to specific UN test methods. Hazard communication Per GHS, hazards need to be communicated: in more than one form (for example, placards, labels or Safety Data Sheets), with hazard statements and precautionary statements, in an easily comprehensible and standardized manner, consistent with other statements to reduce confusion, and taking into account all existing research and any new evidence. Comprehensibility is a significant consideration in GHS implementation. The GHS Purple Book includes a comprehensibility-testing instrument in Annex 6. Factors that were considered in developing the GHS communication tools include: Different philosophies in existing systems on how and what should be communicated; Language differences around the world; Ability to translate phrases meaningfully; Ability to understand and appropriately respond to pictograms. GHS label elements The standardized label elements included in the GHS are: Symbols (GHS hazard pictograms): Convey health, physical and environmental hazard information, assigned to a GHS hazard class and category. Pictograms include the harmonized hazard symbols plus other graphic elements, such as borders, background patterns or colours. A health hazard pictogram is used for respiratory sensitizers and substances which have target organ toxicity. Also, harmful chemicals and irritants are marked with an exclamation mark, replacing the European saltire. Pictograms will have a black symbol on a white background with a red diamond frame. For transport, pictograms will have the background, symbol and colors currently used in the UN Recommendations on the Transport of Dangerous Goods. Where a transport pictogram appears, the GHS pictogram for the same hazard should not appear. Signal word: "Danger" or "Warning" will be used to emphasize hazards and indicate the relative level of severity of the hazard, assigned to a GHS hazard class and category. Some lower level hazard categories do not use signal words. Only one signal word corresponding to the class of the most severe hazard should be used on a label. GHS hazard statements: Standard phrases assigned to a hazard class and category that describe the nature of the hazard. An appropriate statement for each GHS hazard should be included on the label for products possessing more than one hazard. The additional label elements included in the GHS are: GHS precautionary statements: Measures to minimize or prevent adverse effects. There are four types of precautionary statements covering: prevention, response in cases of accidental spillage or exposure, storage, and disposal. The precautionary statements have been linked to each GHS hazard statement and type of hazard. Product identifier (ingredient disclosure): Name or number used for a hazardous product on a label or in the SDS. The GHS label for a substance should include the chemical identity of the substance. 
For mixtures, the label should include the chemical identities of all ingredients that contribute to acute toxicity, skin corrosion or serious eye damage, germ cell mutagenicity, carcinogenicity, reproductive toxicity, skin or respiratory sensitization, or Specific Target Organ Toxicity (STOT), when these hazards appear on the label. Supplier identification: The name, address and telephone number should be provided on the label. Supplemental information: Non-harmonized information on the container of a hazardous product that is not required or specified under the GHS. Supplemental information may be used to provide further detail that does not contradict or cast doubt on the validity of the standardized hazard information. GHS label format The GHS includes directions for application of the hazard communication elements on the label. In particular, it specifies for each hazard, and for each class within the hazard, what signal word, pictogram, and hazard statement should be used. The GHS hazard pictograms, signal words and hazard statements should be located together on the label. The actual label format or layout is not specified. National authorities may choose to specify where information should appear on the label, or to allow supplier discretion in the placement of GHS information. The diamond shape of GHS pictograms resembles the shape of signs mandated for use by the United States Department of Transportation. To address this, in cases where a pictogram would be required by both the Department of Transportation and the GHS indicating the same hazard, only the Transportation pictogram is to be used. Safety data sheet Safety data sheets or SDS are specifically aimed at use in the workplace. Safety data sheets take precedence over and are intended to replace the previously used material safety data sheets (MSDS), which did not have a standard layout and section format. An SDS should provide comprehensive information about the chemical product that allows employers and workers to obtain concise, relevant and accurate information on the hazards, uses and risk management of the chemical product in the workplace. Compared to the differences found between manufacturers in MSDS, SDS have specific requirements to include the following headings in the order specified: (1) Identification; (2) Hazard(s) identification; (3) Composition/information on ingredients; (4) First-aid measures; (5) Fire-fighting measures; (6) Accidental release measures; (7) Handling and storage; (8) Exposure control/personal protection; (9) Physical and chemical properties; (10) Chemical stability and reactivity; (11) Toxicological information; (12) Ecological information; (13) Disposal considerations; (14) Transport information; (15) Regulatory information; (16) Other information. The primary difference between the GHS and previous international industry recommendations is that sections 2 and 3 have been reversed in order. The GHS SDS headings, sequence, and content are similar to the ISO, European Union and ANSI MSDS/SDS requirements. A table comparing the content and format of a MSDS/SDS versus the GHS SDS is provided in Appendix A of the U.S. Occupational Safety and Health Administration (OSHA) GHS guidance. Training Current training procedures for hazard communication in the United States are more detailed than the GHS training recommendations. Training is a key component of the overall GHS approach. 
Employees and emergency responders must be trained on all program elements, though there has been confusion among these groups of workers in the implementation process regarding which training elements have changed and are required to maintain regulatory compliance. Implementation The United Nations goal was for broad international adoption of the system, and as of 2017, the GHS had been adopted to varying degrees in many major countries. Smaller economies continue to develop regulations to implement the GHS throughout the 2020s. GHS adoption by country Australia: In 2012, it adopted regulations for GHS implementation, setting January 1, 2017 as the GHS implementation deadline. Brazil: Established an implementation deadline of February 2011 for substances and June 2015 for mixtures. Canada: GHS was incorporated into WHMIS 2015 as of February 2015. In 2023 the WHMIS requirements were updated to align with the 7th revised edition and certain provisions of the 8th revised edition of the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). China: Established implementation deadline of December 1, 2011. Colombia: Following the issuance of Resolution 0773/2021 on April 9, 2021, Colombia enforced the implementation of the GHS, with deadlines taking effect April 7, 2023 for pure substances, with mixtures following the same protocols the following year. European Union: The deadline for substance classification was December 1, 2010 and for mixtures it was June 1, 2015, per the regulation for GHS implementation of December 31, 2008. Japan: Established deadline of December 31, 2010 for products containing one of 640 designated substances. South Korea: Established the GHS implementation deadline of July 1, 2013. Malaysia: The deadline for substances and mixtures was April 17, 2015, per its Industry Code of Practice on Chemicals Classification and Hazard Communication (ICOP) issued on April 16, 2014. Mexico: GHS has been incorporated into the Official Mexican Standard as of 2015. Pakistan: The country does not have a single streamlined system for chemical labeling, although there are many rules in place. The Pakistani government has requested assistance in developing future regulations to implement GHS. Philippines: The deadline for substances and mixtures was March 14, 2015, per Guidelines for the Implementation of GHS in Chemical Safety Program in the Workplace in 2014. Russian Federation: GHS was approved for optional use as of August 2014. Manufacturers may continue using non-GHS Russian labels through 2021, after which compliance with the system is compulsory. Taiwan: Full GHS implementation was scheduled for 2016 for all hazardous chemicals with physical and health hazards. Thailand: The deadline for substances was March 13, 2013. The deadline for mixtures was March 13, 2017. Turkey: Published Turkish CLP regulation and SDS regulation in 2013 and 2014 respectively. The deadline for substance classification was June 1, 2015; for mixtures, it was June 1, 2016. United Kingdom: Implemented under the EU CLP and REACH regulations; this may be subject to change due to Brexit. United States: GHS compliant labels and SDSs are required for many applications including laboratory chemicals, commercial cleaning agents, and other workplace cases regulated by previous US Occupational Safety and Health Administration (OSHA) standards. 
The first widespread implementation was set by OSHA on March 26, 2012, requiring manufacturers to adopt the standard by June 1, 2015, and product distributors to adopt it by December 1, 2015. Workers had to be trained by December 1, 2013. In the US, GHS labels are not required on most hazardous consumer grade products (e.g. laundry detergent); however, some manufacturers which also sell the same product in Canada or Europe include GHS-compliant warnings on these products as well. The US Consumer Product Safety Commission is not opposed to this and has been evaluating the possibility of incorporating elements of GHS into future consumer regulations. Uruguay: Regulation was approved in 2011, setting December 31, 2012 as the deadline for pure substances and December 31, 2017 for compounds. Vietnam: The deadline for substances was March 30, 2014. The deadline for mixtures was March 30, 2016.
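The category cut-offs described earlier in this article lend themselves to simple automation. The following Python sketch is purely illustrative and is not part of the GHS itself: the numeric acute oral toxicity bands and the mixture additivity formula are taken from the GHS acute-toxicity chapter rather than spelled out in this article, the flash point and boiling point bands are simplified from the flammable-liquid criteria above, and real classification software must follow the full GHS text, including its special cases.

```python
# Illustrative only: simplified GHS-style classification helpers.
# Assumed thresholds are simplified from the GHS criteria; they are not
# a substitute for the official text or for validated classification tools.
from typing import Iterable, Optional, Tuple


def acute_oral_toxicity_category(ld50_mg_per_kg: float) -> Optional[int]:
    """Map an oral LD50 (mg/kg body weight) to GHS categories 1-5;
    values above 5000 mg/kg fall outside the classification."""
    bands = [(5, 1), (50, 2), (300, 3), (2000, 4), (5000, 5)]
    for upper_bound, category in bands:
        if ld50_mg_per_kg <= upper_bound:
            return category
    return None


def flammable_liquid_category(flash_point_c: float,
                              initial_boiling_point_c: float) -> Optional[int]:
    """Map flash point and initial boiling point (degrees C) to GHS
    flammable liquid categories 1-4; higher flash points are unclassified."""
    if flash_point_c < 23:
        return 1 if initial_boiling_point_c <= 35 else 2
    if flash_point_c <= 60:
        return 3
    if flash_point_c <= 93:
        return 4
    return None


def mixture_acute_toxicity_estimate(
        components: Iterable[Tuple[float, float]]) -> Optional[float]:
    """Simplified additivity formula 100 / ATE_mix = sum(C_i / ATE_i),
    where each component is (concentration percent, acute toxicity estimate).
    The GHS rules for unclassified or unknown components are omitted here."""
    denominator = sum(conc / ate for conc, ate in components if ate > 0)
    return 100.0 / denominator if denominator > 0 else None


if __name__ == "__main__":
    print(acute_oral_toxicity_category(250))          # -> 3
    print(flammable_liquid_category(12.0, 78.0))      # -> 2 (ethanol-like values)
    # One relevant component at 40% with an ATE of 300 mg/kg gives an
    # estimated mixture ATE of 750 mg/kg, which would fall in category 4.
    print(mixture_acute_toxicity_estimate([(40.0, 300.0)]))  # -> 750.0
```

Taken together, the pieces mirror the tiered logic summarized above: classify from test data on the mixture itself when available, and otherwise estimate from component data before applying the same category cut-offs.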
Physical sciences
Basics: General
Chemistry
4325458
https://en.wikipedia.org/wiki/Canadaspis
Canadaspis
Canadaspis ("Shield of Canada") is an extinct genus of bivalved Cambrian marine arthropod, known from North America and China. They are thought to have been benthic feeders that moved mainly by walking and possibly used its biramous appendages to stir mud in search of food. They have been placed within the Hymenocarina, which includes other bivalved Cambrian arthropods. Description Canadaspis perfecta The bivalved carapaces of Canadaspis perfecta are typically in length, which taper towards the front end. The head had a small pair of eyes borne on short stalks. Between the eyes is a forward pointing spine, as well as a pair of short antennae, which appear to lack segmentation. Similar antennae are known from Waptia, and are probably homologous to the hemi-ellipsoid bodies of crustaceans, and thus likely have an olfactory function. The head also has another pair of larger, segmented antennae, probably with more than 12 segments, the segments increased in length toward the end of the antenna, with the front end of the segments bearing slender, forward-facing spines. The head had a pair of mandibles and maxillae. The mandible bore a mandibular palp, which was fringed with setae, with the mandible having a toothed margin. The head had two pairs of cephalothoracic legs which have prominently developed endites, with the legs ending in a terminal claw. It is unclear whether these limbs were uniramous or biramous. The body had over a dozen segments divided into an anterior thorax with legs, covered by the carapace, and a posterior legless exposed abdomen. The thorax had 8 associated pairs of biramous legs. The limb endopods were segmented, probably with 13-14 segments, and also ended in a terminal claw. The exopods were lobe-shaped, with 9 or 10 rays radiating outwards from their edges. The abdomen terminated with a telson, which bore a pair of spinose projections directed posteriorly on its lower edge, each spinose projection consisted of one large spine and 5 smaller spines. Canadaspis laevigata The bivalved carapace of C. laevigata is similar to that of C. perfecta, though typically smaller in size. The head has a pair of stalked eyes and a pair of segmented uniramous antennae. The body has 19 ring-like tergites. There are ten pairs of biramous appendages, the first of which appear to be located on the head, which the remaining nine run along the body. The first five pairs are roughly equal in size, while the remaining pairs gradually decrease in size posteriorly. The biramous limbs are all relatively similar in morphology. The endopods are robust, and end in claws. The exopod is flat and rounded. The body ends a telson, which is proportionally longer than that of C. perfecta, which bore one large and one small pair of spines, projecting posteriorly. Ecology Canadaspis was likely a benthic animal that lived walking along the seafloor. C. perfecta had claws on the end of its appendages which may have been used to stir up sediment, or to scrape off the top layer, which Derek Briggs suggested may have been a nutritious layer of microbes. Large particles it stirred up would have been captured by spines on the inside of its legs; these spines would have directed the food particles to the organism's mouth, where it used its mandibles to grind larger particles. Its antennae served a sensory function. The spines on the head of C. perfecta probably served to protect its vulnerable eyes from predators. Its limbs probably moved in a metachronal sequence to produce a rippling motion. 
Although Canadaspis probably did not swim, this could have helped propel the organism from under soft sediments. The appendages also produced currents which would have helped with feeding and respiration. Members of C. perfecta appear to have engaged in synchronised group moulting. Classification Canadaspis perfecta was originally described by Charles Doolittle Walcott in 1912 as Hymenocaris perfecta. It was placed into the new separate genus Canadaspis in 1960 by Novozhilov. Canadaspis was historically interpreted as a crustacean, but this interpretation is now rejected. It has alternatively been suggested to be a stem-group euarthropod. It is currently thought to be a member of the group Hymenocarina, which are interpreted as mandibulates. Some scientists believe that Canadaspis laevigata should be placed in a separate genus. Fossil occurrences A total of 4,525 specimens of Canadaspis perfecta are known from the Greater Phyllopod bed of the Burgess Shale in British Columbia, Canada, where they comprise 8.6% of the community. Other specimens of Canadaspis, considered closely related or belonging to C. perfecta, are also found in the Spence Shale of western Utah as well as the Pioche Shale of Nevada. Canadaspis laevigata comes from the Chengjiang biota of Yunnan, China, and is thus some 10 million years older than Canadaspis perfecta.
Biology and health sciences
Fossil arthropods
Animals
4325491
https://en.wikipedia.org/wiki/Microsoft%20Bing
Microsoft Bing
Microsoft Bing (also known simply as Bing) is a search engine owned and operated by Microsoft. The service traces its roots back to Microsoft's earlier search engines, including MSN Search, Windows Live Search, and Live Search. Bing offers a broad spectrum of search services, encompassing web, video, image, and map search products, all developed using ASP.NET. The transition from Live Search to Bing was announced by Microsoft CEO Steve Ballmer on May 28, 2009, at the All Things Digital conference in San Diego, California. The official release followed on June 3, 2009. Bing introduced several notable features at its inception, such as search suggestions during query input and a list of related searches, known as the 'Explore pane'. These features leveraged semantic technology from Powerset, a company Microsoft acquired in 2008. Microsoft also struck a deal with Yahoo! that led to Bing powering Yahoo! Search. Microsoft made significant strides towards open-source technology in 2016, making the BitFunnel search engine indexing algorithm and various components of Bing open source. In February 2023, Microsoft launched Bing Chat (later renamed Microsoft Copilot), an artificial intelligence chatbot experience based on GPT-4, integrated directly into the search engine. This was well-received, with Bing reaching 100 million active users by the following month. As of April 2024, Bing holds the position of the second-largest search engine worldwide, with a market share of 3.64%, behind Google's 90.91%. Other competitors include Yandex with 1.61%, Baidu with 1.15%, and Yahoo!, which is largely powered by Bing, with 1.13%. History Background (1998–2009) MSN Search Microsoft launched MSN Search in the third quarter of 1998, using search results from Inktomi. It consisted of a search engine, index, and web crawler. In early 1999, MSN Search launched a version which displayed listings from Looksmart blended with results from Inktomi except for a short time in 1999 when results from AltaVista were used instead. Microsoft decided to make a large investment in web search by building its own web crawler for MSN Search, the index of which was updated weekly and sometimes daily. The upgrade started as a beta program in November 2004, and came out of beta in February 2005. This occurred a year after rival Yahoo! Search rolled out its own crawler. Image search was powered by a third party, Picsearch. The service also started providing its search results to other search engine portals in an effort to better compete in the market. Windows Live Search The first public beta of Windows Live Search was unveiled on March 8, 2006, with the final release on September 11, 2006 replacing MSN Search. The new search engine used search tabs that include Web, news, images, music, desktop, local, and Microsoft Encarta. In the roll-over from MSN Search to Windows Live Search, Microsoft stopped using Picsearch as their image search provider and started performing their own image search, fueled by their own internal image search algorithms. Live Search On March 21, 2007, Microsoft announced that it would separate its search developments from the Windows Live services family, rebranding the service as Live Search. Live Search was integrated into the Live Search and Ad Platform headed by Satya Nadella, part of Microsoft's Platform and Systems division. As part of this change, Live Search was merged with Microsoft adCenter. 
A series of reorganizations and consolidations of Microsoft's search offerings were made under the Live Search branding. On May 23, 2008, Microsoft discontinued Live Search Books and Live Search Academic and integrated all academic and book search results into regular search. This also included the closure of the Live Search Books Publisher Program. Windows Live Expo was discontinued on July 31, 2008. Live Search Macros, a service for users to create their own custom search engines or use macros created by other users, was also discontinued. On May 15, 2009, Live Product Upload, a service which allowed merchants to upload product information onto Live Search Products, was discontinued. The final reorganization came as Live Search QnA was rebranded MSN QnA on February 18, 2009, then discontinued on May 21, 2009. Beginnings (2009) Rebrand as Bing Microsoft recognized that there would be a problem with branding as long as the word "Live" remained in the name. As an effort to create a new identity for Microsoft's search services, Live Search was officially replaced by Bing on June 3, 2009. The Bing name was chosen through focus groups, and Microsoft decided that the name was memorable, short, and easy to spell, and that it would function well as a URL around the world. The word would remind people of the sound made during "the moment of discovery and decision making". Microsoft was assisted by branding consultancy Interbrand in finding the new name. The name also has strong similarity to the word bingo, which means that something sought has been found, as called out when winning the game Bingo. Microsoft advertising strategist David Webster proposed the name "Bang" for the same reasons the name Bing was ultimately chosen (easy to spell, one syllable, and easy to remember). He noted, "It's there, it's an exclamation point [...] It's the opposite of a question mark." Bang was ultimately not chosen because it could not be properly used as a verb in the context of an internet search; Webster commented "Oh, 'I banged it' is very different than 'I binged it'". Qi Lu, president of Microsoft Online Services, also announced that Bing's official Chinese name is bì yìng (必应), which literally means "very certain to respond" or "very certain to answer" in Chinese. While being tested internally by Microsoft employees, Bing's codename was Kumo (くも), which came from the Japanese word for spider (蜘蛛; くも, kumo) as well as cloud (雲; くも, kumo), referring to the manner in which search engines "spider" Internet resources to add them to their database, as well as cloud computing. Deal with Yahoo! On July 29, 2009, Microsoft and Yahoo! announced that they had made a ten-year deal in which the Yahoo! search engine would be replaced by Bing, retaining the Yahoo! user interface. Yahoo! got to keep 88% of the revenue from all search ad sales on its site for the first five years of the deal, and had the right to sell advertising on some Microsoft sites. All Yahoo! Search global customers and partners made the transition by early 2012. Legal challenges On July 31, 2009, The Laptop Company, Inc. stated in a press release that it would challenge Bing's trademark application, alleging that Bing may cause confusion in the marketplace as Bing and their product BongoBing both do online product search. Software company TeraByte Unlimited, which has a product called BootIt Next Generation (abbreviated to BING), also contended the trademark application on similar grounds, as did a Missouri-based design company called Bing! 
Information Design. Microsoft contended that claims challenging its trademark were without merit because these companies filed for U.S. federal trademark applications only after Microsoft filed for the Bing trademark in March 2009. Growth (2009–2023) In October 2011, Microsoft stated that they were working on new back-end search infrastructure with the goal of delivering faster and slightly more relevant search results for users. Known as "Tiger", the new index-serving technology had been incorporated into Bing globally since August that year. In May 2012, Microsoft announced another redesign of its search engine that includes "Sidebar", a social feature that searches users' social networks for information relevant to the search query. The BitFunnel search engine indexing algorithm and various components of the search engine were made open source by Microsoft in 2016. AI integration (2023–present) On February 7, 2023, Microsoft began rolling out a major overhaul to Bing, called the new Bing. The new Bing included a new chatbot feature, at the time known as Bing Chat, based on OpenAI's GPT-4. According to Microsoft, one million people joined its waitlist within a span of 48 hours. Bing Chat was available only to users of Microsoft Edge and Bing mobile app, and Microsoft said that waitlisted users would be prioritized if they set Edge and Bing as their defaults, and installed the Bing mobile app. When Microsoft demoed Bing Chat to journalists, it produced several hallucinations, including when asked to summarize financial reports. The new Bing was criticized in February 2023 for being more argumentative than ChatGPT, sometimes to an unintentionally humorous extent. The chat interface proved vulnerable to prompt injection attacks with the bot revealing its hidden initial prompts and rules, including its internal codename "Sydney". Upon scrutiny by journalists, Bing claimed it spied on Microsoft employees via laptop webcams and phones. It confessed to spying on, falling in love with, and then murdering one of its developers at Microsoft to The Verge reviews editor Nathan Edwards. The New York Times journalist Kevin Roose reported on strange behavior of Bing Chat, writing that "In a two-hour conversation with our columnist, Microsoft's new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with." In a separate case, Bing researched publications of the person with whom it was chatting, claimed they represented an existential danger to it, and threatened to release damaging personal information in an effort to silence them. Microsoft released a blog post stating that the errant behavior was caused by extended chat sessions of 15 or more questions which "can confuse the model on what questions it is answering." Microsoft later restricted the total number of chat turns to 5 per session and 50 per day per user (a turn is "a conversation exchange which contains both a user question and a reply from Bing"), and reduced the model's ability to express emotions. This aimed to prevent such incidents. Microsoft began to slowly ease the conversation limits, eventually relaxing the restrictions to 30 turns per session and 300 sessions per day. In March 2023, Bing reached 100 million active users. That same month, Bing incorporated an AI image generator powered by OpenAI's DALL-E 2, which can be accessed either through the chat function or a standalone image-generating website. 
In October, the image-generating tool was updated to the more recent DALL-E 3. Although Bing blocks prompts including various keywords that could generate inappropriate images, within days many users reported being able to bypass those constraints, such as to generate images of popular cartoon characters committing terrorist attacks. Microsoft responded shortly after by imposing a new, tighter filter on the tool. On May 4, 2023, Microsoft switched the chatbot from Limited Preview to Open Preview and eliminated the waitlist; however, it remained available only on Microsoft's Edge browser or the Bing app until July, when it became available for use on non-Edge browsers. Use is limited without a Microsoft account. On November 15, 2023, Microsoft announced that Bing Chat was to be merged into Microsoft Copilot. On 23 April 2024, Microsoft launched Phi-3-mini, a cost-effective AI model designed for simpler tasks. Features Microsoft Copilot Microsoft Copilot, formerly known as Bing Chat, is an artificial intelligence chatbot developed by Microsoft and released in 2023. Copilot utilizes the Microsoft Prometheus model, built upon OpenAI's GPT-4 foundational large language model, which in turn has been fine-tuned using both supervised and reinforcement learning techniques. Copilot can serve as a chat tool, write different types of content from poems to songs to stories to reports, provide the user with information and insights on the website page open in the browser, and use its Microsoft Designer feature to design a logo, drawing, artwork, or other image based on text. Microsoft Designer supports over a hundred languages. Copilot can also cite its sources, similarly to Google's Bard after its Gemini integration, xAI's Grok, and OpenAI's ChatGPT, which Copilot's conversational interface style appears to mimic. Copilot is capable of understanding and communicating in major languages including English, French, Italian, Chinese, Japanese, and Portuguese, but also dialects such as Bavarian. The chatbot is designed to function primarily in Microsoft Edge, Skype, or the Bing app, through a dedicated webpage or internally using built-in app features. Third-party integration Facebook users have the option to share their searches with their Facebook friends using Facebook Connect. On June 10, 2013, Apple announced that it would be dropping Google as its web search engine in favor of Bing. This feature is available only on iOS 7 and higher and on the iPhone 4S or later, as it is integrated solely with Siri, Apple's personal assistant. Integration with Windows 8.1 Windows 8.1 includes Bing "Smart Search" integration, which processes all queries submitted through the Windows Start Screen. Translator Bing Translator is a user-facing translation portal provided by Microsoft to translate texts or entire web pages into different languages. All translation pairs are powered by the Microsoft Translator, a statistical machine translation platform and web service, developed by Microsoft Research, as its backend translation software. Two transliteration pairs (between Chinese (Simplified) and Chinese (Traditional)) are provided by Microsoft's Windows International team. As of September 2020, Bing Translator offers translations in 70 different language systems. Knowledge and Action Graph In 2015, Microsoft announced its Knowledge and Action API, corresponding to Google's Knowledge Graph, with 1 billion instances and 20 billion related facts. 
Bing Predicts The idea for a prediction engine was suggested by Walter Sun, Development Manager for the Core Ranking team at Bing, when he noticed that school districts were more frequently searched whenever a major weather event was forecast for the area, because searchers wanted to find out whether a school closing or delay would result. He concluded that the time and location of major weather events could accurately be predicted without referring to a weather forecast by observing major increases in the search frequency of school districts in the area. This inspired Bing to use its search data to infer outcomes of certain events, such as winners of reality shows. Bing Predicts launched on April 21, 2014. The first reality shows to be featured on Bing Predicts were The Voice, American Idol, and Dancing with the Stars. The prediction accuracy for Bing Predicts is 80% for American Idol and 85% for The Voice. Bing Predicts also predicts the outcomes of major political elections in the United States. Bing Predicts had 97% accuracy for the 2014 United States Senate elections, 96% accuracy for the 2014 United States House of Representatives elections, and 89% accuracy for the 2014 United States gubernatorial elections. Bing Predicts also made predictions for the results of the 2016 United States presidential primaries. It has also made predictions in sports, including a perfect 15 for 15 in the 2014 World Cup; an article also noted how Microsoft CEO Satya Nadella did well in his March Madness bracket entry. In 2016, Bing Predicts failed to predict the correct winner of the 2016 US presidential election, giving Hillary Clinton an 81% chance of winning. International Bing is available in many languages and has been localized for many countries. Even if the language of the search and of the results are the same, Bing delivers substantially different results for different parts of the world. Webmaster services Bing allows webmasters to manage the web crawling status of their own websites through Bing Webmaster Center. Users may also submit content to Bing via the Bing Local Listing Center, which allows businesses to add business listings onto Bing Maps and Bing Local. Mobile services Bing Mobile allows users to conduct search queries on their mobile devices, either via the mobile browser or a downloadable mobile application. Bing News Bing News (previously Live Search News) is a news aggregator powered by artificial intelligence. In August 2015, Microsoft announced that Bing News for mobile devices added algorithmically deduced "smart labels" that essentially act as topic tags, allowing users to click through and explore possible relationships between different news stories. The feature emerged from Microsoft research which found that about 60% of people consume news by reading only the headlines, rather than reading the articles. Other labels that have been deployed since then include publisher logos and fact-check tags. Software Toolbars The Bing Bar, a browser extension toolbar that replaced the MSN Toolbar, provides users with links to Bing and MSN content from within their web browser without needing to navigate away from a web page they are already on. The user can customize the theme and color scheme of the Bing Bar and choose which MSN content buttons to display. Bing Bar also shows the local weather forecast and stock market positions. The Bing Bar integrates with the Bing search engine. It allows searches on other Bing services such as Images, Video, News and Maps. 
When users perform a search on a different search engine, the Bing Bar's search box automatically populates itself, allowing the user to view the results from Bing, should it be desired. Bing Bar also links to Outlook.com, Skype and Facebook. Desktop Microsoft released a beta version of Bing Desktop, a program developed to allow users to search Bing from the desktop, on April 4, 2012. The production release followed on April 24, supporting Windows 7 only. Upon the release of version 1.1 in December 2012 it supported Windows XP and higher. Bing Desktop allows users to initiate a web search from the desktop, view news headlines, automatically set their background to the Bing homepage image, or choose a background from the previous nine background images. A similar program, the Bing Search gadget, was a Windows Sidebar Gadget that used Bing to fetch the user's search results and render them directly in the gadget. Another gadget, the Bing Maps gadget, displayed real-time traffic conditions using Bing Maps. The gadget provided shortcuts to driving directions, local search and full-screen traffic view of major US and Canadian cities, including Atlanta, Boston, Chicago, Denver, Detroit, Houston, Los Angeles, Milwaukee, Montreal, New York City, Oklahoma City, Ottawa, Philadelphia, Phoenix, Pittsburgh, Portland, Providence, Sacramento, Salt Lake City, San Diego, San Francisco, Seattle, St. Louis, Tampa, Toronto, Vancouver, and Washington, D.C. Prior to October 30, 2007, the gadgets were known as Live Search gadget and Live Search Maps gadget; both gadgets were removed from Windows Live Gallery due to possible security concerns. The Live Search Maps gadget was made available for download again on January 24, 2008 with the security concern addressed. However, around the introduction of Bing in June 2009 both gadgets were removed again. Marketing Debut Bing's debut featured an $80 to $100 million online, TV, print, and radio advertising campaign in the US. The advertisements did not mention other search engine competitors, such as Google and Yahoo!, directly by name; rather, they tried to convince users to switch to Bing by focusing on Bing's search features and functionality. The ads claimed that Bing does a better job countering "search overload". Market share Before the launch of Bing, the market share of Microsoft web search pages (MSN and Live search) had been small. By January 2011, Experian Hitwise showed that Bing's market share had increased to 12.8% at the expense of Yahoo! and Google. In the same period, Comscore's "2010 U.S. Digital Year in Review" report showed that "Bing was the big gainer in year-over-year search activity, picking up 29% more searches in 2010 than it did in 2009". The Wall Street Journal noted the jump in share "appeared to come at the expense of rival Google Inc". In February 2011, Bing beat Yahoo! for the first time with 4.37% search share while Yahoo! received 3.93%. Counting core searches only, i.e., those where the user has an intent to interact with the search result, Bing had a market share of 14.54% in the second quarter of 2011 in the United States. The combined "Bing Powered" U.S. searches declined from 26.5% in 2011 to 25.9% in April 2012. By November 2015, its market share had declined further to 20.9%. As of October 2018, Bing was the third-largest search engine in the US, with a query volume of 4.58%, behind Google (77%) and Baidu (14.45%). Yahoo! Search, which Bing largely powers, has 2.63%. 
UK advertising agencies in 2018 pointed to a study by a Microsoft Regional Sales Director suggesting the demographic of Bing users is older people (who are less likely to change the default browser of Windows), and that this audience is wealthier and more likely to respond to advertisements. To counter EU accusations that it was trying to establish a market monopoly, in September 2021 Google's lawyers claimed that one of the most commonly searched words on Microsoft Bing was Google, which is a strong indication that Google is superior to Bing. Search partners In July 2009, Microsoft and Yahoo! announced a deal in which Bing would power Yahoo! Search. All Yahoo! Search global customers and partners made the transition by early 2012. The deal was altered in 2015, meaning Yahoo! was only required to use Bing for a "majority" of searches. DuckDuckGo has used multiple sources for its search engine, including Bing, since 2010. Ecosia uses Bing to provide its search results as of 2017. Bing was added into the list of search engines available in Opera browser from v10.6, but Google remained the default search engine. Mozilla Firefox made a deal with Microsoft to jointly release "Firefox with Bing", an edition of Firefox using Bing instead of Google as the default search engine. The standard edition of Firefox has Google as its default search engine, but has included Bing as an option since Firefox 4.0. In 2009 Microsoft paid Verizon Wireless US$550 million to use Bing as the default search provider on Verizon's BlackBerry and have the others "turned off". Users could still access other search engines via the mobile browser. Live Search Since 2006, Microsoft had conducted tie-ins and promotions to promote Microsoft's search offerings. These included: Amazon's A9 search service and the experimental Ms. Dewey interactive search site syndicated all search results from Microsoft's then search engine, Live Search. This tie-in started on May 1, 2006. Search and Give – a promotional website launched on January 17, 2007 where all searches done from a special portal site would lead to a donation to the UNHCR's organization for refugee children, ninemillion.org. Reuters AlertNet reported in 2007 that the amount to be donated would be $0.01 per search, with a minimum of $100,000 and a maximum of $250,000 (equivalent to 25 million searches). According to the website, the service was decommissioned on June 1, 2009, having donated over $500,000 to charity and schools. Club Bing – a promotional website where users can win prizes by playing word games that generate search queries on Microsoft's then search service Live Search. This website began in April 2007 as Live Search Club. Big Snap Search – a promotional website similar to Live Search Club. This website began in February 2008, but was discontinued shortly after. Live Search SearchPerks! — a promotional website which allowed users to redeem tickets for prizes while using Microsoft's search engine. This website began on October 1, 2008 and was decommissioned on April 15, 2009. "Decision engine" Bing has been heavily advertised as a "decision engine", though thought by columnist David Berkowitz to be more closely related to a web portal. Bing Rewards Bing Rewards was a loyalty program launched by Microsoft in September 2010. It was similar to two earlier services, SearchPerks! and Bing Cashback, which were subsequently discontinued. Bing Rewards provided credits to users through regular Bing searches and special promotions. 
These credits were then redeemed for various rewards, including electronics, gift cards, sweepstakes entries, and charitable donations. Initially, participants were required to download and use the Bing Bar for Internet Explorer in order to earn credits, but later the service was made to work with all desktop browsers. The Bing Rewards program was rebranded as "Microsoft Rewards" in 2016, at which point it was modified to have only two levels, Level 1 and Level 2; Level 1 is similar to "Member" and Level 2 is similar to "Gold" in the previous Bing Rewards scheme. The Colbert Report During the episode of The Colbert Report that aired on June 8, 2010, Stephen Colbert stated that Microsoft would donate $2,500 to help clean up the Gulf oil spill each time he mentioned the word "Bing" on air. Colbert mostly mentioned Bing in out-of-context situations, such as Bing Crosby and Bing cherries. By the end of the show, Colbert had said the word 40 times, for a total donation of $100,000. Colbert poked fun at Microsoft's rivalry with Google, stating "Bing is a great website for doing Internet searches. I know that, because I Googled it." Bing It On In 2012, a Bing marketing campaign asked the public which search engine they believed was better when its results were presented unbranded, similar to the Pepsi Challenge of the 1970s. This poll was nicknamed "Bing It On". Microsoft's study of almost 1,000 people showed that 57% of participants preferred Bing's results, with only 30% preferring Google. Potential sale CNBC reported in February 2024 that a legal filing from Google in its antitrust case said Microsoft had offered to sell the search engine to Apple in 2018. This came after earlier reporting in September 2023 from Bloomberg that Microsoft discussed selling it to Apple in 2020. The CNBC article also stated that Apple had declined repeated attempts to make Bing the default search engine on its devices. Adult content Bing censors results for "adult" search terms in some regions, including India, the People's Republic of China, Germany and Arab countries, where required by local laws. However, Bing allows users to change their country or region preference to somewhere without restrictions, such as the United States, United Kingdom or Republic of Ireland. Criticism Censorship in China Microsoft has been criticized for censoring Bing search results for queries made in simplified Chinese characters, which are used in mainland China. This is done to comply with the censorship requirements of the government in China. Microsoft has not indicated a willingness to stop censoring search results in simplified Chinese characters in the wake of Google's decision to do so. All simplified Chinese searches in Bing are censored regardless of the user's country. The English-language search results of Bing in China have been skewed to show more content from state-run media such as Xinhua News Agency and China Daily. On 23 January 2019, Bing was blocked in China. According to a source quoted by The Financial Times, the order to block Bing for "illegal content" came from the Chinese government. On 24 January, Bing was accessible again in China. Around 4 June 2021, the anniversary of the 1989 Tiananmen Square protests and massacre, Bing blocked image and video search results for the English term "Tank Man" in the US, UK, France, Germany, Singapore, Switzerland, and other countries. Microsoft responded that "This is due to an accidental human error". 
According to an investigation by Bloomberg Businessweek, the full explanation was that Microsoft had accidentally applied its Chinese blacklist globally. In December 2021, Bing was required by a "relevant government agency" to suspend its auto-suggest function in China for 30 days. The search engine became partially unavailable in mainland China from 16 December until its resumption on 18 December 2021. According to the company, a government agency in March 2022 required that it suspend its auto-suggest function in China for seven days; Bing did not specify the reason. In May 2022, a report released by the Citizen Lab of the University of Toronto found that Bing's autosuggestion system censored the names of Chinese Communist Party leaders, dissidents, and other persons considered politically sensitive in China, in both Chinese and English, not only in China but also in the United States and Canada. In April 2023, Citizen Lab reported that Bing was more censorious in China than native Chinese search engines. Copyright-infringing content On February 20, 2017, Bing agreed to a voluntary United Kingdom code of practice obligating it to demote links to copyright-infringing content in its search results. Performance issues Bing was criticized in 2010 for being slower to index websites than Google. It was also criticized for not indexing some websites at all. Alleged copying of Google results Bing has been criticized by competitor Google for utilizing user input, gathered via Internet Explorer, the Bing Toolbar, or Suggested Sites, to add results to Bing. After discovering in October 2010 that Bing appeared to be imitating Google's auto-correct results for a misspelling, despite not actually fixing the spelling of the term, Google set up a honeypot, configuring the Google search engine to return specific unrelated results for 100 nonsensical queries such as hiybbprqag. Over the next couple of weeks, Google engineers entered the search terms into Google while using Microsoft Internet Explorer with the Bing Toolbar installed and the optional Suggested Sites feature enabled. For 9 of the 100 queries, Bing later started returning the same results as Google, despite the only apparent connection between the result and the search term being that Google's results had connected the two. Microsoft's response, delivered by a company spokesperson, was: "We do not copy Google's results." Bing's Vice President, Harry Shum, later reiterated that the search result data which Google claimed Bing had copied had in fact come from Bing's own users. Shum wrote that "we use over 1,000 different signals and features in our ranking algorithm. A small piece of that is clickstream data we get from some of our customers, who opt into sharing anonymous data as they navigate the web in order to help us improve the experience for all users." Microsoft stated that Bing was not intended to be a duplicate of any existing search engine. Child pornography A study of Bing image search released in 2019 showed that Bing both freely offered up images that had been tagged in national databases as illegal child pornography and automatically suggested, via its auto-completion feature, queries related to child pornography. This easy accessibility was considered particularly surprising since Microsoft pioneered PhotoDNA, the main technology used for tracking images reported as originating from child pornography. Additionally, some arrested child pornographers reported using Bing as their main search engine for new content. 
Microsoft vowed to fix the problem and assign additional staff to combat the issue after the report was released. Privacy In 2022, France imposed a €60 million fine on Microsoft for privacy law violations involving Bing's cookies, which were set in a way that prevented users from easily rejecting them.
Technology
Search engines
null
17720432
https://en.wikipedia.org/wiki/Series%20%28stratigraphy%29
Series (stratigraphy)
Series are subdivisions of rock layers based on the age of the rock and formally defined by international conventions of the geological timescale. A series is therefore a sequence of strata defining a chronostratigraphic unit. Series are subdivisions of systems and are themselves divided into stages. Series is a term defining a unit of rock layers formed during a certain interval of time (a chronostratigraphic unit); it is equivalent (but not synonymous) to the term geological epoch (see epoch criteria) which defines the interval of time itself, although the two words are sometimes confused in informal literature. Series in the geological timescale The geological timescale has all systems in the Phanerozoic eonothem subdivided into series. Some of these have their own names; in other cases a system is simply divided into a Lower, Middle and Upper series, with official series being capitalized and unofficial designations (such as "middle Cretaceous") being left uncapitalized. The Cretaceous system is, for example, divided into the Upper Cretaceous and Lower Cretaceous Series, while the Carboniferous System is divided into the Pennsylvanian and Mississippian Series. As of 2008, the International Commission on Stratigraphy had not yet named all four series of the Cambrian. Currently series are limited to the Phanerozoic, but the ICS has stated its intention of subdividing the three systems of the Neoproterozoic (Ediacaran, Cryogenian and Tonian) into stages too. Systems and lithostratigraphy Systems can include many lithostratigraphic units (for example formations, beds, members, etc.) of differing rock types that were being laid down in different environments at the same time. In the same way, a lithostratigraphic unit can include a number of systems or parts of them.
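To make the nesting of units concrete, here is a minimal illustrative sketch in Python (the language is an arbitrary choice for illustration) of the chronostratigraphic hierarchy, using only the Cretaceous and Carboniferous examples given in this article; the stage lists are left empty rather than guessed.

from dataclasses import dataclass, field
from typing import List

# Chronostratigraphic (rock) units nest as: eonothem > erathem > system > series > stage.
# Each rock unit corresponds to a time unit (series <-> epoch, system <-> period).

@dataclass
class Series:
    name: str                                          # official series names are capitalized
    stages: List[str] = field(default_factory=list)    # stages omitted here rather than guessed

@dataclass
class System:
    name: str
    series: List[Series] = field(default_factory=list)

# Examples taken directly from the article text above.
cretaceous = System("Cretaceous", [Series("Lower Cretaceous"), Series("Upper Cretaceous")])
carboniferous = System("Carboniferous", [Series("Mississippian"), Series("Pennsylvanian")])

for system in (cretaceous, carboniferous):
    print(system.name, "->", [s.name for s in system.series])

Representing the hierarchy this way emphasises that a series is a body of rock (a container of stages), while the corresponding epoch is the interval of time in which that rock was deposited.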
Physical sciences
Stratigraphy
Earth science
17720707
https://en.wikipedia.org/wiki/System%20%28stratigraphy%29
System (stratigraphy)
A system in stratigraphy is a sequence of strata (rock layers) that were laid down during the corresponding geological period. The associated period is a chronological time unit, a part of the geological time scale, while the system is a unit of chronostratigraphy. Systems are unrelated to lithostratigraphy, which subdivides rock layers on the basis of their lithology. Systems are subdivisions of erathems and are themselves divided into series and stages. Systems in the geological timescale The systems of the Phanerozoic were defined during the 19th century, beginning with the Cretaceous (by Belgian geologist Jean d'Omalius d'Halloy in the Paris Basin) and the Carboniferous (by British geologists William Conybeare and William Phillips in 1822). The Paleozoic and Mesozoic were divided into the currently used systems before the second half of the 19th century, except for a minor revision when the Ordovician system was added in 1879. The Cenozoic has seen more recent revisions by the International Commission on Stratigraphy. It has been divided into three systems, with the Paleogene and Neogene replacing the former Tertiary System, while the succeeding Quaternary remains in use. The one-time system names Paleocene, Eocene, Oligocene, Miocene and Pliocene are now series within the Paleogene and Neogene. Another recent development is the official division of the Proterozoic into systems, which was decided in 2004.
Physical sciences
Stratigraphy
Earth science
9727748
https://en.wikipedia.org/wiki/Coral%20island
Coral island
A coral island is a type of island formed from coral detritus and associated organic material. It occurs in tropical and sub-tropical areas, typically as part of a coral reef which has grown to cover a far larger area under the sea. The term low island can be used to distinguish such islands from high islands, which are formed through volcanic action. Low islands are formed as a result of sedimentation upon a coral reef or of the uplifting of such reefs. Ecosystem Coral reefs are some of the oldest ecosystems on the planet; over geological time, they form massive reefs of limestone. The reef environment supports more plant and animal species than any other habitat. Coral reefs are vital for life in multiple respects, including structure, ecology, and nutrient cycles, all of which support biodiversity in the reefs. Corals build massive calcareous skeletons that serve as homes for animals, such as fish hiding inside the nooks and crannies of the reef and barnacles attaching themselves directly to the coral's structure. The structures also help plants that need sunlight to photosynthesize by lifting them toward the ocean's surface, where the sunlight can penetrate the water. The structures also create calm zones in the ocean, providing a place for fish and plant species to thrive. Over geological time a reef may reach the surface and can become a coral island, on which a whole new ecosystem for land-based creatures begins. Formation A coral island begins as a volcanic island over a hot spot. As the volcano emerges from the sea, a fringing reef grows on its outskirts. The volcano eventually moves off the hot spot by means of plate tectonics. Once this occurs, the volcano can no longer keep pace with wave erosion and undergoes subsidence. Once the island is submerged, the coral must keep growing to stay in the epipelagic zone. This causes the coral to grow into an atoll with a shallow lagoon in the middle. The lagoon undergoes accretion and creates an island made entirely of carbonate materials. The process is later enhanced by the remains of plant life growing on the island. Low vs. high island The term "low island" refers to geologic origin rather than a strict classification of height. Some low islands, such as Banaba, Makatea, Nauru, and Niue, rise several hundred feet above sea level, while numerous high islands (those of volcanic origin) rise only a few feet above sea level and are often classified as "rocks". Low islands ring the lagoons of atolls. The two types of islands are often found in proximity to each other. This is especially the case among the islands of the South Pacific Ocean, where low islands are found on the fringing reefs that surround most high islands. Distribution Most of the world's coral islands are in the Pacific Ocean, but they are also found in the Atlantic and Indian Oceans. The American territories of Jarvis, Baker and Howland Islands are clear examples of coral islands in the Pacific. Atolls in the Atlantic are found in Colombia's Archipelago of San Andrés, Providencia and Santa Catalina. The Lakshadweep Islands union territory of India is a group of 39 coral islands, along with some minor islets and banks. Some of the islands belonging to Kiribati are considered coral islands. The Maldives consist of coral islands. St. Martin's Island is a coral island located in Bangladesh. Coral islands are located near Pattaya and Ko Samui, Thailand. 
Ecology Coral is important for biodiversity and the growth of fish populations, so maintaining coral reefs is vital. Many coral islands are small, with low elevation above sea level, and are thus threatened by storms and rising sea levels. Through chemical and physical changes, humans can cause significant harm to reef systems and slow the creation of coral island chains. Coral reefs are threatened by numerous anthropogenic impacts, some of which have already had major effects worldwide. Reefs grow in shallow, warm, nutrient-poor waters where they are not outcompeted by phytoplankton. When fertilizers enter the water through runoff, phytoplankton populations can explode and choke out coral reef systems. Excess sediment can cause a similar problem by blocking out the sun, starving the zooxanthellae that live in the coral and causing it to undergo a process known as coral bleaching. The ocean's acidity is also a factor. Coral is made of calcium carbonate, which is dissolved by carbonic acid. As carbon dioxide from combustion accumulates in the atmosphere, some of it dissolves into seawater and forms carbonic acid, raising the ocean's acidity and slowing coral growth. Although low islands may have fewer potential habitats than high islands, and thus lower species diversity, studies of both types of islands in Palau found that species diversity, at least in the waters around the islands, is affected more by island size than by origin. Climate and habitability Low islands have poor, sandy soil and little fresh water, which makes them difficult to farm. They cannot support human habitation as well as high islands can. They are also threatened by sea level rise due to global warming. The people who do live on low islands survive mostly by fishing. Low islands usually have an oceanic climate.
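As a rough sketch of the chemistry described above (simplified; the full seawater carbonate system involves further equilibria), dissolved carbon dioxide forms carbonic acid, which can then attack the calcium carbonate of coral skeletons:

$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3}$
$\mathrm{H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$
$\mathrm{CaCO_3 + H_2CO_3 \rightarrow Ca^{2+} + 2\,HCO_3^-}$

The extra hydrogen ions lower the pH of the water, and dissolving calcium carbonate faster than corals can deposit it slows reef growth.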
Physical sciences
Oceanic and coastal landforms
Earth science
9727927
https://en.wikipedia.org/wiki/One-way%20mirror
One-way mirror
A one-way mirror, also called a two-way mirror (or one-way glass, half-silvered mirror, or semi-transparent mirror), is a reciprocal mirror that appears reflective from one side and transparent from the other. The perception of one-way transmission is achieved when one side of the mirror is brightly lit and the other side is dark. This allows viewing from the darkened side but not vice versa. History The first U.S. patent for a one-way mirror appeared in 1903, then named a "transparent mirror". Principle of operation The glass is coated with, or has been encased within, a thin and almost transparent layer of metal (a window film usually containing aluminium). The result is a mirrored surface that reflects part of the light and transmits the rest. Light always passes equally in both directions. However, when one side is brightly lit and the other kept dark, the darker side becomes difficult to see from the brightly lit side because it is masked by the much brighter reflection of the lit side. Applications A one-way mirror is typically used as an apparently normal mirror in a brightly lit room, with a much darker room on the other side. People on the brightly lit side see their own reflection; it looks like a normal mirror. People on the dark side see through it; it looks like a transparent window. The light from the bright room reflected by the mirror back into that room is much greater than the small amount of light transmitted from the dark room, and so overwhelms it; conversely, the light reflected back into the dark side is overwhelmed by the light transmitted from the bright side. This allows a viewer on the dark side to observe the bright room covertly. When such mirrors are used for one-way observation, the viewing room is kept dark by a darkened curtain or a double-door vestibule. These observation rooms have been used in execution chambers; experimental psychology research; interrogation rooms; market research; reality television, as in the series Big Brother, which makes extensive use of one-way mirrors throughout its set to allow cameramen in special black hallways to use movable cameras to film contestants without being seen; security observation decks in public areas; and train driver or conductor compartments in newer metro trains, such as Bombardier Transportation's Movia family of metro trains, including the Toronto Rocket. Smaller versions are sometimes used in low-emissivity windows on vehicles and buildings; mobile phone and tablet screen covers, enabling the screen to be used as a mirror when it is off; security cameras, where the camera is hidden in a mirrored enclosure; stage effects (particularly Pepper's ghost); teleprompters, where they allow a presenter to read text projected onto glass directly in front of a film or television camera; common setups of the infinity mirror illusion; smart mirrors (virtual mirrors) and mirror TVs; and arcade video games, such as Taito's Space Invaders. The same type of mirror, when used in an optical instrument, is called a beam splitter and works on the same principle as a pellicle mirror. A partially transparent mirror is also an integral part of the Fabry–Pérot interferometer.
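To put rough numbers on the masking effect described above, consider an idealised mirror with reflectance $R$ and transmittance $T$ (absorption ignored, so $R + T \approx 1$), a bright room with illumination $L_b$ and a dark room with illumination $L_d$; the specific values below are illustrative only:

$\text{bright-side observer receives } R L_b + T L_d \approx R L_b \quad (\text{since } L_b \gg L_d)$
$\text{dark-side observer receives } T L_b + R L_d \approx T L_b$

For example, with $R = T = 0.5$ and $L_b = 100\,L_d$, the bright-side observer receives $50\,L_d$ of light reflected from their own room against only $0.5\,L_d$ transmitted from the dark room, a 100:1 ratio that hides the dark room behind the reflection, while the dark-side observer receives $50\,L_d$ transmitted from the bright room against only $0.5\,L_d$ of reflection, giving a clear view.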
Technology
Optical components
null
9734557
https://en.wikipedia.org/wiki/MKS%20units
MKS units
The metre, kilogram, second system of units, also known more briefly as MKS units or the MKS system, is a physical system of measurement based on the metre, kilogram, and second (MKS) as base units. Distances are described in terms of metres, mass in terms of kilograms and time in seconds. Derived units are defined using the appropriate combinations, such as velocity in metres per second. Some units have their own names, such as the newton unit of force which is the combination kilogram metre per second squared. The modern International System of Units (SI), from the French Système international d'unités, was originally created as a formalization of the MKS system. The SI has been redefined several times since then and is now based entirely on fundamental physical constants, but still closely approximates the original MKS units for most practical purposes. History By the mid-19th century, there was a demand by scientists to define a coherent system of units. A coherent system of units is one where all units are directly derived from a set of base units, without the need of any conversion factors. The United States customary units are an example of a non-coherent set of units. In 1874, the British Association for the Advancement of Science (BAAS) introduced the CGS system, a coherent system based on the centimetre, gram and second. These units were inconvenient for electromagnetic applications, since electromagnetic units derived from these did not correspond to the commonly used practical units, such as the volt, ampere and ohm. After the Metre Convention of 1875, work started on international prototypes for the kilogram and the metre, which were formally sanctioned by the General Conference on Weights and Measures (CGPM) in 1889, thus formalizing the MKS system by using the kilogram and metre as base units. In 1901, Giovanni Giorgi proposed to the Associazione elettrotecnica italiana (AEI) that the MKS system, extended with a fourth unit to be taken from the practical units of electromagnetism, such as the volt, ohm or ampere, be used to create a coherent system using practical units. This system was strongly promoted by electrical engineer George A. Campbell. The CGS and MKS systems were both widely used in the 20th century, with the MKS system being primarily used in practical areas, such as commerce and engineering. The International Electrotechnical Commission (IEC) adopted Giorgi's proposal as the M.K.S. System of Giorgi in 1935 without specifying which electromagnetic unit would be the fourth base unit. In 1939, the Consultative Committee for Electricity (CCE) recommended the adoption of Giorgi's proposal, using the ampere as the fourth base unit. This was subsequently approved by the CGPM in 1954. The rmks system (rationalized metre–kilogram–second) combines MKS with rationalization of electromagnetic equations. The MKS units with the ampere as a fourth base unit is sometimes referred to as the MKSA system. This system was extended by adding the kelvin and candela as base units in 1960, thus forming the International System of Units. The mole was added as a seventh base unit in 1971. Derived units Mechanical units Electromagnetic units
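As an illustration of the coherence discussed above (a few representative cases, not a complete table; the electromagnetic examples assume the ampere as the fourth, MKSA, base unit):

$1\ \mathrm{N} = 1\ \mathrm{kg\,m\,s^{-2}}$ (force)
$1\ \mathrm{J} = 1\ \mathrm{N\,m} = 1\ \mathrm{kg\,m^2\,s^{-2}}$ (energy)
$1\ \mathrm{W} = 1\ \mathrm{J\,s^{-1}} = 1\ \mathrm{kg\,m^2\,s^{-3}}$ (power)
$1\ \mathrm{C} = 1\ \mathrm{A\,s}$ (electric charge)
$1\ \mathrm{V} = 1\ \mathrm{W\,A^{-1}} = 1\ \mathrm{kg\,m^2\,s^{-3}\,A^{-1}}$ (electric potential)
$1\ \Omega = 1\ \mathrm{V\,A^{-1}} = 1\ \mathrm{kg\,m^2\,s^{-3}\,A^{-2}}$ (resistance)

Each definition uses only products and quotients of the base units, with no numerical conversion factor, which is what makes the system coherent.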
Physical sciences
Measurement systems
Basics and measurement
3178674
https://en.wikipedia.org/wiki/Day%27s%20journey
Day's journey
A day's journey in pre-modern literature, including the Bible and the works of ancient geographers and ethnographers such as Herodotus, is a measurement of distance. In the Bible, it is not as precisely defined as other Biblical measurements of distance; the distance has been variously estimated. Judges 19 records a party of three people and two mules who traveled from Bethlehem to Gibeah, a distance of about 10 miles, in an afternoon. Porter notes that a mule can travel about 3 miles per hour, covering 24 miles in an eight-hour day. Another citation comes from Priscus (fr. 8 in Müller's Fragmenta Historicorum Graecorum) and is translated thus by J. B. Bury: We set out with the barbarians, and arrived at Sardica, which is thirteen days for a fast traveller from Constantinople. The distance from Constantinople (Istanbul) to Sofia is 550–720 km (311–447 mi.); the passage, then, implies a pace of between 42 and 55 km/day (26–34 mi./day). Based on a comprehensive review of references in Herodotus, Geus concludes that "Herodotus has a very well-defined notion of what distance a traveller can cover under normal circumstances in a day (between 150 and 200 stades or roughly, between 27 and 40 kilometres [17 and 26 mi.])," though he cites some exceptional examples of over 100 km (62 mi.) per day.
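The implied daily rates quoted above follow from simple division of the quoted distances by Priscus's thirteen days (the figures here merely re-derive the numbers already given in the text):

$\frac{550\ \text{km}}{13\ \text{days}} \approx 42\ \text{km/day}, \qquad \frac{720\ \text{km}}{13\ \text{days}} \approx 55\ \text{km/day}$

Similarly, the mule pace cited for Judges 19 works out to $3\ \text{mi/h} \times 8\ \text{h} = 24\ \text{mi}$ in a day.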
Physical sciences
Other
Basics and measurement