https://en.wikipedia.org/wiki/Star%20system
Star system
A star system or stellar system is a small number of stars that orbit each other, bound by gravitational attraction. A large group of stars bound by gravitation is generally called a star cluster or galaxy, although, broadly speaking, they are also star systems. Star systems are not to be confused with planetary systems, which include planets and similar bodies (such as comets). A star system of two stars is known as a binary star, binary star system or physical double star. If there are no tidal effects, no perturbation from other forces, and no transfer of mass from one star to the other, such a system is stable, and both stars will trace out an elliptical orbit around the barycenter of the system indefinitely (see the two-body problem). Examples of binary systems are Sirius, Procyon and Cygnus X-1, the last of which probably consists of a star and a black hole.

Multiple star systems
A multiple star system consists of two or more stars that appear from Earth to be close to one another in the sky. This may result from the stars actually being physically close and gravitationally bound to each other, in which case it is a physical multiple star, or this closeness may be merely apparent, in which case it is an optical multiple star. Physical multiple stars are also commonly called multiple stars or multiple star systems. Most multiple star systems are triple stars; systems with four or more components are less likely to occur. Multiple-star systems are called triple, ternary, or trinary if they contain 3 stars; quadruple or quaternary if they contain 4 stars; quintuple or quintenary with 5 stars; sextuple or sextenary with 6 stars; septuple or septenary with 7 stars; and octuple or octenary with 8 stars. These systems are smaller than open star clusters, which have more complex dynamics and typically contain 100 to 1,000 stars.
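The barycenter of a binary is simply the mass-weighted mean position of the two stars. A minimal one-dimensional sketch (the masses and separation below are illustrative values, not figures from the article):

```python
def barycenter(m1, r1, m2, r2):
    """Mass-weighted mean position of a two-body system (1-D, consistent units)."""
    return (m1 * r1 + m2 * r2) / (m1 + m2)

# Illustrative: a ~2.06 solar-mass primary at the origin and a ~1.02 solar-mass
# companion 20 AU away; the barycenter lies on the line between them,
# closer to the heavier star.
x = barycenter(2.06, 0.0, 1.02, 20.0)
```

Both stars trace ellipses about this fixed point, which is why the two-body problem reduces to a single effective orbit.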
Most multiple star systems known are triple; for higher multiplicities, the number of known systems with a given multiplicity decreases exponentially with multiplicity. For example, in the 1999 revision of Tokovinin's catalog of physical multiple stars, 551 out of the 728 systems described are triple. However, because of suspected selection effects, the ability to interpret these statistics is very limited. Multiple-star systems can be divided into two main dynamical classes: (1) hierarchical systems, which are stable and consist of nested orbits that do not interact much, so that each level of the hierarchy can be treated as a two-body problem; and (2) trapezia, which have unstable, strongly interacting orbits and are modelled as an n-body problem, exhibiting chaotic behavior. Trapezia can have 2, 3, or 4 stars.

Hierarchical systems
Most multiple-star systems are organized in what is called a hierarchical system: the stars in the system can be divided into two smaller groups, each of which traverses a larger orbit around the system's center of mass. Each of these smaller groups must also be hierarchical, which means that they must be divided into smaller subgroups which themselves are hierarchical, and so on. Each level of the hierarchy can be treated as a two-body problem by considering close pairs as if they were a single star. In these systems there is little interaction between the orbits and the stars' motion will continue to approximate stable Keplerian orbits around the system's center of mass, unlike the unstable trapezia systems or the even more complex dynamics of the large number of stars in star clusters and galaxies.

Triple star systems
In a physical triple star system, each star orbits the center of mass of the system. Usually, two of the stars form a close binary system, and the third orbits this pair at a distance much larger than that of the binary orbit. This arrangement is called hierarchical.
The reason for this arrangement is that if the inner and outer orbits are comparable in size, the system may become dynamically unstable, leading to a star being ejected from the system. EZ Aquarii is an example of a physical hierarchical triple system, which has an outer star orbiting an inner physical binary composed of two more red dwarf stars. Triple stars that are not all gravitationally bound might comprise a physical binary and an optical companion (such as Beta Cephei) or, in rare cases, a purely optical triple star (such as Gamma Serpentis).

Higher multiplicities
Hierarchical multiple star systems with more than three stars can produce a number of more complicated arrangements. These arrangements can be organized by what Evans (1968) called mobile diagrams, which look similar to ornamental mobiles hung from the ceiling. Examples of hierarchical systems are given in the figure to the right (Mobile diagrams). Each level of the diagram illustrates the decomposition of the system into two or more systems with smaller size. Evans calls a diagram multiplex if there is a node with more than two children, i.e. if the decomposition of some subsystem involves two or more orbits with comparable size. Because, as we have already seen for triple stars, this may be unstable, multiple stars are expected to be simplex, meaning that at each level there are exactly two children. Evans calls the number of levels in the diagram its hierarchy. A simplex diagram of hierarchy 1, as in (b), describes a binary system. A simplex diagram of hierarchy 2 may describe a triple system, as in (c), or a quadruple system, as in (d). A simplex diagram of hierarchy 3 may describe a system with anywhere from four to eight components. The mobile diagram in (e) shows an example of a quadruple system with hierarchy 3, consisting of a single distant component orbiting a close binary system, with one of the components of the close binary being an even closer binary.
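A mobile diagram is just a rooted tree, so Evans's notions of hierarchy, simplex/multiplex, and component count fall out of a short tree walk. A minimal sketch, representing subsystems as nested tuples and individual stars as strings (the function names and star labels are mine, not from the catalog):

```python
def hierarchy(node):
    """Evans's hierarchy: number of levels below this node (a star is level 0)."""
    if isinstance(node, str):          # a single star
        return 0
    return 1 + max(hierarchy(child) for child in node)

def is_simplex(node):
    """True if every subsystem decomposes into exactly two children."""
    if isinstance(node, str):
        return True
    return len(node) == 2 and all(is_simplex(c) for c in node)

def components(node):
    """Count the individual stars in the diagram."""
    if isinstance(node, str):
        return 1
    return sum(components(c) for c in node)

# Diagram (e) from the text: a distant star orbiting a close binary whose
# primary is itself an even closer binary -> quadruple, hierarchy 3.
diagram_e = ((("Aa", "Ab"), "B"), "C")
```

With this representation, `hierarchy(diagram_e)` is 3 and `components(diagram_e)` is 4, matching the quadruple system of hierarchy 3 described above.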
A real example of a system with hierarchy 3 is Castor, also known as Alpha Geminorum or α Gem. It consists of what appears to be a visual binary star which, upon closer inspection, can be seen to consist of two spectroscopic binary stars. By itself, this would be a quadruple hierarchy 2 system as in (d), but it is orbited by a fainter more distant component, which is also a close red dwarf binary. This forms a sextuple system of hierarchy 3. The maximum hierarchy occurring in A. A. Tokovinin's Multiple Star Catalogue, as of 1999, is 4. For example, the stars Gliese 644A and Gliese 644B form what appears to be a close visual binary star; because Gliese 644B is a spectroscopic binary, this is actually a triple system. The triple system has the more distant visual companion Gliese 643 and the still more distant visual companion Gliese 644C, which, because of their common motion with Gliese 644AB, are thought to be gravitationally bound to the triple system. This forms a quintuple system whose mobile diagram would be the diagram of level 4 appearing in (f). Higher hierarchies are also possible; most of them are either stable or suffer from internal perturbations. Some authors consider that such complex multiple stars will in time theoretically disintegrate into less complex multiple stars, such as the more commonly observed triples or quadruples.

Trapezia
Trapezia are usually very young, unstable systems. These are thought to form in stellar nurseries, and quickly fragment into stable multiple stars, which in the process may eject components as galactic high-velocity stars. They are named after the multiple star system known as the Trapezium Cluster in the heart of the Orion Nebula. Such systems are not rare, and commonly appear close to or within bright nebulae. These stars have no standard hierarchical arrangements, but compete for stable orbits. This relationship is called interplay.
Such stars eventually settle down to a close binary with a distant companion, with the other star(s) previously in the system ejected into interstellar space at high velocities. This dynamic may explain the runaway stars that might have been ejected during a collision of two binary star groups or a multiple system. This event is credited with ejecting AE Aurigae, Mu Columbae and 53 Arietis at above 200 km·s−1 and has been traced to the Trapezium cluster in the Orion Nebula some two million years ago.

Designations and nomenclature
Multiple star designations
The components of multiple stars can be specified by appending the suffixes A, B, C, etc., to the system's designation. Suffixes such as AB may be used to denote the pair consisting of A and B. The sequence of letters B, C, etc. may be assigned in order of separation from the component A. Components discovered close to an already known component may be assigned suffixes such as Aa, Ba, and so forth.

Nomenclature in the Multiple Star Catalogue
A. A. Tokovinin's Multiple Star Catalogue uses a system in which each subsystem in a mobile diagram is encoded by a sequence of digits. In the mobile diagram (d) above, for example, the widest system would be given the number 1, while the subsystem containing its primary component would be numbered 11 and the subsystem containing its secondary component would be numbered 12. Subsystems which would appear below this in the mobile diagram will be given numbers with three, four, or more digits. When describing a non-hierarchical system by this method, the same subsystem number will be used more than once; for example, a system with three visual components, A, B, and C, no two of which can be grouped into a subsystem, would have two subsystems numbered 1 denoting the two binaries AB and AC. In this case, if B and C were subsequently resolved into binaries, they would be given the subsystem numbers 12 and 13.
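The digit encoding used by the Multiple Star Catalogue can be sketched as a simple tree walk: the root subsystem gets "1", and each child subsystem appends its 1-based index to its parent's label. A minimal illustration (the tree shape is that of diagram (d); the helper name and star labels are mine):

```python
def number_subsystems(node, label="1"):
    """Assign MSC-style digit strings to each subsystem of a mobile diagram.
    Subsystems are nested tuples; single stars (plain strings) carry no number."""
    if isinstance(node, str):
        return {}
    numbers = {label: node}
    for i, child in enumerate(node, start=1):
        numbers.update(number_subsystems(child, label + str(i)))
    return numbers

# Diagram (d): a quadruple made of two binaries orbiting each other.
diagram_d = (("Aa", "Ab"), ("Ba", "Bb"))
labels = sorted(number_subsystems(diagram_d))
```

This reproduces the numbering described above: the widest system is 1, the subsystem with the primary is 11, and the subsystem with the secondary is 12.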
Future multiple star system nomenclature
The current nomenclature for double and multiple stars can cause confusion, as binary stars discovered in different ways are given different designations (for example, discoverer designations for visual binary stars and variable star designations for eclipsing binary stars), and, worse, component letters may be assigned differently by different authors, so that, for example, one person's A can be another's C. Discussion starting in 1999 resulted in four proposed schemes to address this problem:
KoMa, a hierarchical scheme using upper- and lower-case letters and Arabic and Roman numerals;
the Urban/Corbin Designation Method, a hierarchical numeric scheme similar to the Dewey Decimal Classification system;
the Sequential Designation Method, a non-hierarchical scheme in which components and subsystems are assigned numbers in order of discovery; and
WMC, the Washington Multiplicity Catalog, a hierarchical scheme in which the suffixes used in the Washington Double Star Catalog are extended with additional suffixed letters and numbers.
For a designation system, identifying the hierarchy within the system has the advantage that it makes identifying subsystems and computing their properties easier. However, it causes problems when new components are discovered at a level above or intermediate to the existing hierarchy. In this case, part of the hierarchy will shift inwards. Components which are found to be nonexistent, or are later reassigned to a different subsystem, also cause problems. During the 24th General Assembly of the International Astronomical Union in 2000, the WMC scheme was endorsed and it was resolved by Commissions 5, 8, 26, 42, and 45 that it should be expanded into a usable uniform designation scheme. A sample of a catalog using the WMC scheme, covering half an hour of right ascension, was later prepared.
The issue was discussed again at the 25th General Assembly in 2003, and it was again resolved by Commissions 5, 8, 26, 42, and 45, as well as the Working Group on Interferometry, that the WMC scheme should be expanded and further developed. The sample WMC is hierarchically organized; the hierarchy used is based on observed orbital periods or separations. Since it contains many visual double stars, which may be optical rather than physical, this hierarchy may be only apparent. It uses upper-case letters (A, B, ...) for the first level of the hierarchy, lower-case letters (a, b, ...) for the second level, and numbers (1, 2, ...) for the third. Subsequent levels would use alternating lower-case letters and numbers, but no examples of this were found in the sample.

Examples

Binary
Sirius, a binary consisting of a main-sequence type A star and a white dwarf
Procyon, which is similar to Sirius
Mira, a variable consisting of a red giant and a white dwarf
Delta Cephei, a Cepheid variable
Almaaz, an eclipsing binary
Spica

Triple
Alpha Centauri is a triple star composed of a main binary pair of a yellow dwarf and an orange dwarf (Rigil Kentaurus and Toliman), and an outlying red dwarf, Proxima Centauri. Together, Rigil Kentaurus and Toliman form a physical binary star, designated as Alpha Centauri AB, α Cen AB, or RHD 1 AB, where the AB denotes this is a binary system. The moderately eccentric orbit of the binary can bring the components as close as 11 AU or as far apart as 36 AU. Proxima Centauri, also (though less frequently) called Alpha Centauri C, is much farther away (between 4,300 and 13,000 AU) from α Cen AB, and orbits the central pair with a period of 547,000 (+66,000/−40,000) years.
Polaris or Alpha Ursae Minoris (α UMi), the north star, is a triple star system in which the closer companion star is extremely close to the main star—so close that it was only known from its gravitational tug on Polaris A (α UMi A) until it was imaged by the Hubble Space Telescope in 2006.
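The 11 AU and 36 AU extremes quoted for the α Cen AB orbit fix its shape completely: for an ellipse, the semi-major axis is the mean of periastron and apastron, and the eccentricity is their normalized difference. A quick check using these standard two-body relations (not stated in the article):

```python
# Periastron q and apastron Q of alpha Cen AB, in AU (values from the text).
q, Q = 11.0, 36.0

semi_major_axis = (q + Q) / 2      # a = (q + Q) / 2
eccentricity = (Q - q) / (Q + q)   # e = (Q - q) / (Q + q)
```

This gives a = 23.5 AU and e ≈ 0.53, consistent with the orbit being described as moderately eccentric.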
Gliese 667 is a triple star system with two K-type main-sequence stars and a red dwarf. The red dwarf, C, hosts between two and seven planets, of which one, Cc, along with the unconfirmed Cf and Ce, is potentially habitable.
HD 188753 is a triple star system located approximately 149 light-years away from Earth in the constellation Cygnus. The system is composed of HD 188753A, a yellow dwarf; HD 188753B, an orange dwarf; and HD 188753C, a red dwarf. B and C orbit each other every 156 days, and, as a group, orbit A every 25.7 years.
Fomalhaut (α PsA, α Piscis Austrini) is a triple star system in the constellation Piscis Austrinus. It was discovered to be a triple system in 2013, when the K-type flare star TW Piscis Austrini and the red dwarf LP 876-10 were both confirmed to share proper motion through space with the primary. The primary has a massive dust disk similar to that of the early Solar System, but much more massive. It also contains a gas giant, Fomalhaut b. That same year, the tertiary star, LP 876-10, was also confirmed to host a dust disk.
HD 181068 is a unique triple system, consisting of a red giant and two main-sequence stars. The orbits of the stars are oriented in such a way that all three stars eclipse each other.

Quadruple
Capella, a pair of giant stars orbited by a pair of red dwarfs, around 42 light-years away from the Solar System. It has an apparent magnitude of around 0.08, making Capella one of the brightest stars in the night sky.
4 Centauri
Mizar is often said to have been the first binary star discovered, when it was observed in 1650 by Giovanni Battista Riccioli, but it was probably observed earlier, by Benedetto Castelli and Galileo. Later, spectroscopy of its components Mizar A and B revealed that they are both binary stars themselves.
HD 98800
The PH1 system has the planet PH1 b (discovered in 2012 by the Planet Hunters group, a part of the Zooniverse) orbiting two of the four stars, making it the first known planet to be in a quadruple star system.
KOI-2626 is the first quadruple star system with an Earth-sized planet.
Xi Tauri (ξ Tau, ξ Tauri), located about 222 light-years away, is a spectroscopic and eclipsing quadruple star consisting of three blue-white B-type main-sequence stars, along with an F-type star. Two of the stars are in a close orbit and revolve around each other once every 7.15 days. These in turn orbit the third star once every 145 days. The fourth star orbits the other three stars roughly every fifty years.

Quintuple
Dabih
Mintaka
HD 155448
KIC 4150611
1SWASP J093010.78+533859.5

Sextuple
Beta Tucanae
Castor
HD 139691
TYC 7037-89-1
If Alcor is considered part of the Mizar system, the system can be considered a sextuple.

Septuple
Jabbah
AR Cassiopeiae
V871 Centauri

Octuple
Gamma Cassiopeiae

Nonuple
QZ Carinae
https://en.wikipedia.org/wiki/White%20hole
White hole
In general relativity, a white hole is a hypothetical region of spacetime and singularity that cannot be entered from the outside, although energy-matter, light and information can escape from it. In this sense, it is the reverse of a black hole, from which energy-matter, light and information cannot escape. White holes appear in the theory of eternal black holes. In addition to a black hole region in the future, such a solution of the Einstein field equations has a white hole region in its past. This region does not exist for black holes that have formed through gravitational collapse, however, nor are there any observed physical processes through which a white hole could be formed. Supermassive black holes (SMBHs) are theoretically predicted to be at the center of every galaxy and may be essential for their formation. Stephen Hawking and others have proposed that these supermassive black holes could spawn supermassive white holes.

Overview
Like black holes, white holes have properties such as mass, charge, and angular momentum. They attract matter like any other mass, but objects falling towards a white hole would never actually reach the white hole's event horizon (though in the case of the maximally extended Schwarzschild solution, discussed below, the white hole event horizon in the past becomes a black hole event horizon in the future, so any object falling towards it will eventually reach the black hole horizon). For an ordinary body, the acceleration due to gravity is greatest at its surface; since a black hole lacks a surface, the gravitational acceleration keeps increasing toward the central singularity and never reaches a final value. In quantum mechanics, a black hole emits Hawking radiation and so can come to thermal equilibrium with a gas of radiation (though such equilibrium is not inevitable).
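The thermal equilibrium mentioned here involves the Hawking temperature of the hole. As a rough numerical sketch, the standard textbook formula T = ħc³/(8πGMk_B) for a Schwarzschild black hole (this formula is general physics background, not stated in the article) gives:

```python
import math

# CODATA-style constants (SI units).
HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
K_B = 1.380649e-23       # J/K
M_SUN = 1.98892e30       # kg

def hawking_temperature(mass_kg):
    """Black-body temperature of a Schwarzschild black hole's Hawking radiation."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

t_sun = hawking_temperature(M_SUN)   # ~6e-8 K for a solar-mass black hole
```

The temperature scales inversely with mass, which is why only small black holes radiate appreciably and why equilibrium with a surrounding radiation bath is delicate.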
Because a thermal-equilibrium state is time-reversal-invariant, Stephen Hawking argued that the time reversal of a black hole in thermal equilibrium results in a white hole in thermal equilibrium (each absorbing and emitting energy to equivalent degrees). Consequently, this may imply that black holes and white holes are reciprocal in structure, wherein the Hawking radiation from an ordinary black hole is identified with a white hole's emission of energy and matter. Hawking's semi-classical argument is reproduced in a quantum mechanical AdS/CFT treatment, where a black hole in anti-de Sitter space is described by a thermal gas in a gauge theory, whose time reversal is the same as itself.

History
In the 1930s, physicists Robert Oppenheimer and Hartland Snyder introduced the idea of white holes as a solution to Einstein's equations of general relativity. These equations, the foundation of modern physics, describe the curvature of spacetime due to massive objects. Whereas black holes are born from the collapse of stars, white holes represent the theoretical birth of space, time, and potentially even universes. At the center, space and time do not end in a singularity, but continue across a short transition region where the Einstein equations are violated by quantum effects. From this region, space and time emerge with the structure of a white hole interior, a possibility already suggested by John Lighton Synge. The possibility of the existence of white holes was put forward by cosmologist Igor Novikov in 1964 and developed by Nikolai Kardashev. White holes are predicted as part of a solution to the Einstein field equations known as the maximally extended version of the Schwarzschild metric describing an eternal black hole with no charge and no rotation. Here, "maximally extended" implies that spacetime should not have any "edges".
For any possible trajectory of a free-falling particle (following a geodesic) in spacetime, it should be possible to continue this path arbitrarily far into the particle's future, unless the trajectory hits a gravitational singularity like the one at the center of the black hole's interior. In order to satisfy this requirement, it turns out that in addition to the black hole interior region that particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region, which allows us to extrapolate the trajectories of particles that an outside observer sees rising up away from the event horizon. For an observer outside using Schwarzschild coordinates, infalling particles take an infinite time to reach the black hole horizon infinitely far in the future, while outgoing particles that pass the observer have been traveling outward for an infinite time since crossing the white hole horizon infinitely far in the past (however, the particles or other objects experience only a finite proper time between crossing the horizon and passing the outside observer). The black hole/white hole appears "eternal" from the perspective of an outside observer, in the sense that particles traveling outward from the white hole interior region can pass the observer at any time, and particles traveling inward, which will eventually reach the black hole interior region can also pass the observer at any time. Just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. 
This means that the interior black-hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see light that fell in from the other one), and likewise particles from the interior white-hole region can escape into either universe. All four regions can be seen in a spacetime diagram that uses Kruskal–Szekeres coordinates (see figure). In this spacetime, it is possible to come up with coordinate systems such that if you pick a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') and draw an "embedding diagram" depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein-Rosen bridge" or Schwarzschild wormhole. Depending on where the space-like hypersurface is chosen, the Einstein-Rosen bridge can either connect two black hole event horizons in each universe (with points in the interior of the bridge being part of the black hole region of the spacetime), or two white hole event horizons in each universe (with points in the interior of the bridge being part of the white hole region). It is impossible to use the bridge to cross from one universe to the other, however, because it is impossible to enter a white hole event horizon from the outside, and anyone entering a black hole horizon from either universe will inevitably hit the black hole singularity. Note that the maximally extended Schwarzschild metric describes an idealized black hole/white hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. 
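The Kruskal–Szekeres coordinates mentioned above can be written down explicitly for the exterior region. A sketch using the standard transformation in geometrized units G = c = 1 with r > 2M (these are textbook formulas, not derived in the article; the variable names are mine):

```python
import math

def kruskal_exterior(r, t, M=1.0):
    """Map exterior Schwarzschild coordinates (r > 2M, t) to
    Kruskal-Szekeres (U, V), with U spacelike and V timelike."""
    f = math.sqrt(r / (2 * M) - 1) * math.exp(r / (4 * M))
    U = f * math.cosh(t / (4 * M))
    V = f * math.sinh(t / (4 * M))
    return U, V

# The transformation satisfies the invariant
#   U^2 - V^2 = (r/2M - 1) * exp(r/2M),
# which is positive throughout the exterior region, so curves of constant r
# are hyperbolas in the (U, V) diagram.
U, V = kruskal_exterior(r=4.0, t=1.0)
```

Unlike Schwarzschild time, these coordinates cover the horizon smoothly, which is what makes all four regions (two exteriors, black hole interior, white hole interior) visible in a single diagram.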
When the infalling stellar matter is added to a diagram of a black hole's history, it removes the part of the diagram corresponding to the white hole interior region. But because the equations of general relativity are time-reversible – they exhibit time-reversal symmetry – general relativity must also allow the time-reverse of this type of "realistic" black hole that forms from collapsing matter. The time-reversed case would be a white hole that has existed since the beginning of the universe, and that emits matter until it finally "explodes" and disappears. Despite the fact that such objects are permitted theoretically, they are not taken as seriously as black holes by physicists, since there would be no processes that would naturally lead to their formation; they could exist only if they were built into the initial conditions of the Big Bang. Additionally, it is predicted that such a white hole would be highly "unstable" in the sense that if any small amount of matter fell towards the horizon from the outside, this would prevent the white hole's explosion as seen by distant observers, with the matter emitted from the singularity never able to escape the white hole's gravitational radius.

Properties
Depending on the type of black hole solution considered, there are several types of white holes. In the case of the Schwarzschild black hole mentioned above, a geodesic coming out of a white hole comes from the "gravitational singularity" it contains. In the case of a black hole possessing an electric charge (Reissner–Nordström black hole) or an angular momentum, the white hole happens to be the "exit door" of a black hole existing in another universe. Such a black hole – white hole configuration is called a wormhole. In both cases, however, it is not possible to reach the region "in" the white hole, so the behavior of it – and, in particular, what may come out of it – is completely impossible to predict.
In this sense, a white hole is a configuration according to which the evolution of the universe cannot be predicted, because it is not deterministic. A "bare singularity" is another example of a non-deterministic configuration, but it does not have the status of a white hole, because there is no region inaccessible from a given region. In its basic conception, the Big Bang can be seen as a naked singularity in outer space, but it does not correspond to a white hole.

Physical relevance
In its mode of formation, a black hole comes from the residue of a massive star whose core contracts until it turns into a black hole. Such a configuration is not static: we start from a massive and extended body which contracts to give a black hole. The black hole therefore does not exist for all eternity, and there is no corresponding white hole. To be able to exist, a white hole must either arise from a physical process leading to its formation, or be present from the creation of the universe. Neither of these solutions appears satisfactory: there is no known astrophysical process that can lead to the formation of such a configuration, and imposing it from the creation of the universe amounts to assuming a very specific set of initial conditions which has no concrete motivation. In view of the enormous quantities of energy radiated by quasars, whose luminosity makes it possible to observe them from several billion light-years away, it had been assumed that they were the seat of exotic physical phenomena such as a white hole, or a phenomenon of continuous creation of matter (see the article on the steady state theory). These ideas are now abandoned, the observed properties of quasars being very well explained by those of an accretion disk at the center of which is a supermassive black hole.

Big Bang/Supermassive White Hole
A view of black holes first proposed in the late 1980s might be interpreted as shedding some light on the nature of classical white holes.
Some researchers have proposed that when a black hole forms, a Big Bang may occur at the core/singularity, which would create a new universe that expands outside of the parent universe. The Einstein–Cartan–Sciama–Kibble theory of gravity extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, however, the minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity. Instead, the collapsing matter on the other side of the event horizon reaches an enormous but finite density and rebounds, forming a regular Einstein–Rosen bridge. The other side of the bridge becomes a new, growing baby universe. For observers in the baby universe, the parent universe appears as the only white hole. Accordingly, the observable universe is the Einstein–Rosen interior of a black hole existing as one of possibly many inside a larger universe. The Big Bang was a nonsingular Big Bounce at which the observable universe had a finite, minimum scale factor. Shockwave cosmology, proposed by Joel Smoller and Blake Temple in 2003, has the “big bang” as an explosion inside a black hole, producing the expanding volume of space and matter that includes the observable universe. This black hole eventually becomes a white hole as the matter density reduces with the expansion. A related theory gives an alternative to dark energy. A 2012 paper argues that the Big Bang itself is a white hole. 
It further suggests that the emergence of a white hole, which was named a "Small Bang", is spontaneous: all the matter is ejected in a single pulse. Thus, unlike black holes, white holes cannot be continuously observed; rather, their effects can be detected only around the event itself. The paper even proposed identifying a new group of gamma-ray bursts with white holes.

Various hypotheses
Unlike black holes, for which there is a well-studied physical process, gravitational collapse (which gives rise to black holes when a star somewhat more massive than the Sun exhausts its nuclear "fuel"), there is no clear analogous process that leads reliably to the production of white holes. Some hypotheses have nevertheless been put forward:
White holes could be a kind of "exit" from black holes, with the two singularities connected by a wormhole (note that, like white holes, wormholes have not yet been found); when quasars were discovered, it was assumed that these were the sought-after white holes, but this assumption has now been discarded.
Another widespread idea is that white holes would be very unstable, would last a very short time, and even after forming could collapse and become black holes.
Astronomers Alon Retter and Shlomo Heller suggest that GRB 060614, an anomalous gamma-ray burst that occurred in 2006, was a "white hole".
In 2014, the idea of the Big Bang being produced by a supermassive white hole explosion was explored in the framework of a five-dimensional vacuum by Madriz Aguilar, Moreno and Bellini.
Finally, it has been postulated that white holes could be the temporal inverse of black holes.
At present, very few scientists believe in the existence of white holes, and they are considered only a mathematical exercise with no real-world counterpart.

In popular culture
A white hole appears in the Red Dwarf episode of the same name, wherein the protagonists must find a way to deal with its temporal effects.
A white hole serves as a major source of conflict in the Yu-Gi-Oh! GX anime, as the radiance it exudes is both sentient and evil, known as the Light of Destruction. A white hole serves as a very important location in the video game Outer Wilds. In this game, falling into the black hole in the center of the planet Brittle Hollow leads to this white hole. A white hole appears in the animated television series Voltron: Legendary Defender.
Physical sciences
Theory of relativity
Physics
240988
https://en.wikipedia.org/wiki/Tetrahydrofuran
Tetrahydrofuran
Tetrahydrofuran (THF), or oxolane, is an organic compound with the formula (CH2)4O. The compound is classified as a heterocyclic compound, specifically a cyclic ether. It is a colorless, water-miscible organic liquid with low viscosity. It is mainly used as a precursor to polymers. Being polar and having a wide liquid range, THF is a versatile solvent. It is an isomer of another solvent, butanone. Production About 200,000 tonnes of tetrahydrofuran are produced annually. The most widely used industrial process involves the acid-catalyzed dehydration of 1,4-butanediol, and Ashland/ISP is one of the biggest producers using this route. The method is similar to the production of diethyl ether from ethanol. The butanediol is derived from condensation of acetylene with formaldehyde followed by hydrogenation. DuPont developed a process for producing THF by oxidizing n-butane to crude maleic anhydride, followed by catalytic hydrogenation. A third major industrial route entails hydroformylation of allyl alcohol followed by hydrogenation to 1,4-butanediol. Other methods THF can also be synthesized by catalytic hydrogenation of furan. This allows certain sugars to be converted to THF via acid-catalyzed digestion to furfural and decarbonylation to furan, although this method is not widely practiced; THF is thus derivable from renewable resources. Applications Polymerization In the presence of strong acids, THF converts to a linear polymer called poly(tetramethylene ether) glycol (PTMEG), also known as polytetramethylene oxide (PTMO). This polymer is primarily used to make elastomeric polyurethane fibers like Spandex. As a solvent The other main application of THF is as an industrial solvent for polyvinyl chloride (PVC) and in varnishes. It is an aprotic solvent with a dielectric constant of 7.6. It is a moderately polar solvent and can dissolve a wide range of nonpolar and polar chemical compounds.
THF is water-miscible and can form solid clathrate hydrate structures with water at low temperatures. THF has been explored as a miscible co-solvent in aqueous solution to aid in the liquefaction and delignification of plant lignocellulosic biomass for production of renewable platform chemicals and sugars as potential precursors to biofuels. Aqueous THF augments the hydrolysis of glycans from biomass and dissolves the majority of biomass lignin, making it a suitable solvent for biomass pretreatment. THF is often used in polymer science. For example, it can be used to dissolve polymers prior to determining their molecular mass using gel permeation chromatography. THF dissolves PVC as well, and thus it is the main ingredient in PVC adhesives. It can be used to liquefy old PVC cement and is often used industrially to degrease metal parts. THF is used as a component in mobile phases for reversed-phase liquid chromatography. It has a greater elution strength than methanol or acetonitrile, but is less commonly used than these solvents. THF is used as a solvent in 3D printing when printing with PLA, PETG and substantially similar filaments. It can be used to clean clogged 3D printer parts, to remove extruder lines and add a shine to the finished product, as well as to solvent-weld printed parts. Laboratory use In the laboratory, THF is a popular solvent when its water miscibility is not an issue. It is more basic than diethyl ether and forms stronger complexes with Li+, Mg2+, and boranes. It is a popular solvent for hydroboration reactions and for organometallic compounds such as organolithium and Grignard reagents. Thus, while diethyl ether remains the solvent of choice for some reactions (e.g., Grignard reactions), THF fills that role in many others where strong coordination is desirable, and the precise properties of ethereal solvents such as these (alone, in mixtures, and at various temperatures) allow fine-tuning of modern chemical reactions.
Commercial THF contains substantial water that must be removed for sensitive operations, e.g. those involving organometallic compounds. Although THF is traditionally dried by distillation from an aggressive desiccant such as elemental sodium, molecular sieves have been shown to be superior water scavengers. Reaction with hydrogen sulfide In the presence of a solid acid catalyst, THF reacts with hydrogen sulfide to give tetrahydrothiophene. Lewis basicity THF is a Lewis base that bonds to a variety of Lewis acids such as I2, phenols, triethylaluminum and bis(hexafluoroacetylacetonato)copper(II). THF has been classified in the ECW model, and it has been shown that there is no single order of base strengths. Many complexes are of the stoichiometry MCl3(THF)3. Precautions THF has relatively low acute toxicity, with a median lethal dose (LD50) comparable to that of acetone. However, chronic exposure is suspected of causing cancer. Reflecting its remarkable solvent properties, it penetrates the skin, causing rapid dehydration. THF readily dissolves latex and thus should be handled with nitrile rubber gloves. It is highly flammable. One danger posed by THF is its tendency to form the explosive compound 2-hydroperoxytetrahydrofuran upon reaction with air. To minimize this problem, commercial supplies of THF are often stabilized with butylated hydroxytoluene (BHT). Distillation of THF to dryness is unsafe because the explosive peroxides can concentrate in the residue. Related compounds Tetrahydrofurans The tetrahydrofuran ring is found in diverse natural products including lignans, acetogenins, and polyketide natural products. Diverse methodology has been developed for the synthesis of substituted THFs. Oxolanes Tetrahydrofuran is one of the class of five-membered cyclic ethers called oxolanes.
There are seven possible structures: monoxolane (the root of the group, synonymous with tetrahydrofuran), 1,3-dioxolane, 1,2-dioxolane, 1,2,4-trioxolane, 1,2,3-trioxolane, tetroxolane, and pentoxolane.
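As a quick sanity check on the formula (CH2)4O given above (four CH2 units plus one ring oxygen, i.e. C4H8O), one can sum standard atomic weights. The helper function below is illustrative only, not from any particular chemistry library:

```python
# Standard atomic weights in g/mol (rounded values).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(counts):
    """Molar mass for a dict mapping element symbol -> atom count."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in counts.items())

thf = {"C": 4, "H": 8, "O": 1}  # (CH2)4O ring: C4H8O
print(round(molar_mass(thf), 2))  # 72.11 g/mol
```

The result, about 72.11 g/mol, matches the accepted molar mass of tetrahydrofuran.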
Physical sciences
Esters and ethers
Chemistry
241014
https://en.wikipedia.org/wiki/Scyliorhinidae
Scyliorhinidae
Scyliorhinidae is a family of sharks, one of a few families whose members share the common name catsharks, belonging to the order Carcharhiniformes, the ground sharks. Although they are generally known as catsharks, some species can also be called dogfish due to previous naming. However, a dogfish may generally be distinguished from a catshark in that catsharks lay eggs while dogfish bear live young. Like most bottom feeders, catsharks feed on benthic invertebrates and smaller fish. They are not harmful to humans. The family is paraphyletic, containing several distinct lineages that do not form a monophyletic group. Genera Scyliorhinidae includes the following genera: Cephaloscyllium T. N. Gill, 1862 Poroderma A. Smith, 1838 Scyliorhinus Blainville, 1816 Anatomy and appearance Scyliorhinidae catsharks may be distinguished by their elongated, cat-like eyes and two small dorsal fins set far back. Most species are fairly small, growing no longer than 80 cm (31 in); a few, such as the nursehound (Scyliorhinus stellaris), can reach 1.6 m (5.2 ft) in length. Most of the species have a patterned appearance, ranging from stripes to patches to spots. Species of the genus Apristurus are characterized by mostly dark bodies and a long anal fin that ends in front of where the lower caudal fin begins. The snouts of Apristurus species are flattened, and they have both upper and lower labial furrows. In the dentition, sonic hedgehog expression first appears as a bilaterally symmetrical pattern in certain areas of the embryonic jaw. Sonic hedgehog (a secreted protein that, in humans, is encoded by the SHH gene) is involved in the growth and patterning of different organs. The teeth are replaced every 18–38 days, as is common in the developmental process of sharks. The "swell sharks" of the genus Cephaloscyllium have the curious ability to fill their stomachs with water or air when threatened, increasing their girth by up to a factor of three.
Some catsharks, such as the chain catshark, are biofluorescent. Distribution Scyliorhinidae catsharks are found around seabeds in temperate and tropical seas worldwide, ranging from very shallow intertidal waters to depths of 2,000 m (6,600 ft) or more, such as the members of genus Apristurus. The red-spotted catshark lives in the shallower rocky waters ranging from Peru to Chile and migrates to deeper waters during the winter. They are usually restricted to small ranges. Juvenile and adult chain dogfish live on the soft or rocky bottom of the Atlantic from Massachusetts to Nicaragua. Adults tend to live on the soft, sandy bottoms, possibly due to their need for egg deposition sites. Behaviour Scyliorhinidae includes species that do not undergo long-distance migrations because they are poor swimmers. Being nocturnal, some species sleep close together in crevices throughout the day and then go hunting at night. Some species, such as the small-spotted catshark, Scyliorhinus canicula, are sexually monomorphic and exhibit habitat segregation, where males and females live in separate areas; males tend to live in open seabeds, while females tend to live in caves. Some species of catsharks may deposit egg cases in structured habitats, which may also act as nurseries for the newly hatched sharks. Reproduction Scyliorhinidae includes many species that, like the chain dogfish, are oviparous and lay eggs onto the seabed in tough egg cases with curly tendrils at each end, known as "mermaid's purses", for protection. Almost a year is needed for a catshark to hatch from the egg. Instead of laying the eggs and letting them sit for a year, some species of catsharks retain the eggs until a few months before the shark hatches. Some catsharks exhibit ovoviviparity (aplacental viviparity), retaining the embryos until they are completely developed and then giving live birth.
In some species of catsharks, the male mates by biting and holding the female's pectoral fins and wrestling her into a mating position.
Biology and health sciences
Sharks
Animals
241026
https://en.wikipedia.org/wiki/Up%20quark
Up quark
The up quark or u quark (symbol: u) is the lightest of all quarks, a type of elementary particle, and a significant constituent of matter. It, along with the down quark, forms the neutrons (one up quark, two down quarks) and protons (two up quarks, one down quark) of atomic nuclei. It is part of the first generation of matter, has an electric charge of +2/3 e and a bare mass of approximately 2.2 MeV/c2. Like all quarks, the up quark is an elementary fermion with spin 1/2, and experiences all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. The antiparticle of the up quark is the up antiquark (sometimes called antiup quark or simply antiup), which differs from it only in that some of its properties, such as charge, have equal magnitude but opposite sign. Its existence (along with that of the down and strange quarks) was postulated in 1964 by Murray Gell-Mann and George Zweig to explain the Eightfold Way classification scheme of hadrons. The up quark was first observed by experiments at the Stanford Linear Accelerator Center in 1968. History In the beginnings of particle physics (first half of the 20th century), hadrons such as protons, neutrons and pions were thought to be elementary particles. However, as new hadrons were discovered, the 'particle zoo' grew from a few particles in the early 1930s and 1940s to several dozen in the 1950s. The relationships between each of them were unclear until 1961, when Murray Gell-Mann and Yuval Ne'eman (independently of each other) proposed a hadron classification scheme called the Eightfold Way, or in more technical terms, SU(3) flavor symmetry. This classification scheme organized the hadrons into isospin multiplets, but the physical basis behind it was still unclear. In 1964, Gell-Mann and George Zweig (independently of each other) proposed the quark model, then consisting only of up, down, and strange quarks.
However, while the quark model explained the Eightfold Way, no direct evidence of the existence of quarks was found until 1968 at the Stanford Linear Accelerator Center. Deep inelastic scattering experiments indicated that protons had substructure, and that protons made of three more-fundamental particles explained the data (thus confirming the quark model). At first people were reluctant to describe the three bodies as quarks, instead preferring Richard Feynman's parton description, but over time the quark theory became accepted (see November Revolution). Mass Despite being extremely common, the bare mass of the up quark is not well determined, but probably lies between 1.8 and 3.0 MeV/c2. Lattice QCD calculations give a more precise value of about 2.2 MeV/c2. When found in mesons (particles made of one quark and one antiquark) or baryons (particles made of three quarks), the 'effective mass' (or 'dressed' mass) of quarks becomes greater because of the binding energy caused by the gluon field between each quark (see mass–energy equivalence). Because the bare mass of the up quark is so small, it cannot be straightforwardly calculated: relativistic effects have to be taken into account.
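The quark content of the nucleons given above fixes their electric charges by simple arithmetic. A minimal sketch (the helper function is hypothetical, written for illustration):

```python
from fractions import Fraction

# Electric charges of the first-generation quarks in units of the
# elementary charge e: up = +2/3, down = -1/3.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Total electric charge (in units of e) of a three-quark state."""
    return sum(CHARGE[q] for q in quarks)

print(baryon_charge("uud"))  # proton (two up, one down) -> 1
print(baryon_charge("udd"))  # neutron (one up, two down) -> 0
```

Using exact fractions avoids floating-point artifacts when checking that the proton's charge is exactly +1 e and the neutron's exactly 0.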
Physical sciences
Fermions
Physics
241027
https://en.wikipedia.org/wiki/Down%20quark
Down quark
The down quark (symbol: d) is a type of elementary particle, and a major constituent of matter. The down quark is the second-lightest of all quarks, and combines with other quarks to form composite particles called hadrons. Down quarks are most commonly found in atomic nuclei, where they combine with up quarks to form protons and neutrons. The proton is made of one down quark with two up quarks, and the neutron is made up of two down quarks with one up quark. Because they are found in every single known atom, down quarks are present in all everyday matter that we interact with. The down quark is part of the first generation of matter, has an electric charge of −1/3 e and a bare mass of approximately 4.7 MeV/c2. Like all quarks, the down quark is an elementary fermion with spin 1/2, and experiences all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. The antiparticle of the down quark is the down antiquark (sometimes called antidown quark or simply antidown), which differs from it only in that some of its properties have equal magnitude but opposite sign. Its existence (along with that of the up and strange quarks) was postulated in 1964 by Murray Gell-Mann and George Zweig to explain the Eightfold Way classification scheme of hadrons. The down quark was first observed by experiments at the Stanford Linear Accelerator Center in 1968. History In the beginnings of particle physics (first half of the 20th century), hadrons such as protons, neutrons, and pions were thought to be elementary particles. However, as new hadrons were discovered, the 'particle zoo' grew from a few particles in the early 1930s and 1940s to several dozen in the 1950s. The relationships between each of them were unclear until 1961, when Murray Gell-Mann and Yuval Ne'eman (independently of each other) proposed a hadron classification scheme called the Eightfold Way, or in more technical terms, SU(3) flavor symmetry.
This classification scheme organized the hadrons into isospin multiplets, but the physical basis behind it was still unclear. In 1964, Gell-Mann and George Zweig (independently of each other) proposed the quark model, then consisting only of up, down, and strange quarks. However, while the quark model explained the Eightfold Way, no direct evidence of the existence of quarks was found until 1968 at the Stanford Linear Accelerator Center. Deep inelastic scattering experiments indicated that protons had substructure, and that protons made of three more-fundamental particles explained the data (thus confirming the quark model). At first people were reluctant to identify the three bodies as quarks, instead preferring Richard Feynman's parton description, but over time the quark theory became accepted (see November Revolution). Mass Despite being extremely common, the bare mass of the down quark is not well determined, but probably lies between 4.5 and 5.3 MeV/c2. Lattice QCD calculations give a more precise value of about 4.7 MeV/c2. When found in mesons (particles made of one quark and one antiquark) or baryons (particles made of three quarks), the 'effective mass' (or 'dressed' mass) of quarks becomes greater because of the binding energy caused by the gluon field between quarks (see mass–energy equivalence). For example, the effective mass of down quarks in a proton is around 300 MeV/c2. Because the bare mass of down quarks is so small, it cannot be straightforwardly calculated: relativistic effects have to be taken into account.
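The gap between bare and effective quark masses described above can be made concrete with rough numbers: the bare masses of a proton's three valence quarks sum to only about 1% of the proton's mass. The figures below are approximate values assumed for illustration:

```python
# Approximate bare (current) quark masses and the proton mass, in MeV/c^2.
M_UP, M_DOWN = 2.2, 4.7
M_PROTON = 938.3

# Proton = uud: sum the bare masses of its valence quarks.
bare_total = 2 * M_UP + M_DOWN
print(round(bare_total, 1))                   # 9.1 MeV/c^2
print(round(100 * bare_total / M_PROTON, 1))  # roughly 1.0 percent
```

The remaining roughly 99% of the proton's mass comes from the binding energy of the gluon field, per mass–energy equivalence.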
Physical sciences
Fermions
Physics
241028
https://en.wikipedia.org/wiki/Charm%20quark
Charm quark
The charm quark, charmed quark, or c quark is an elementary particle found in composite subatomic particles called hadrons such as the J/psi meson and the charmed baryons created in particle accelerator collisions. Several bosons, including the W and Z bosons and the Higgs boson, can decay into charm quarks. All charm quarks carry charm, a quantum number. This second-generation particle is the third-most-massive quark, with a mass of about 1.27 GeV/c2 as measured in 2022, and a charge of +2/3 e. The existence of the charm quark was first predicted by James Bjorken and Sheldon Glashow in 1964, and in 1970, Glashow, John Iliopoulos, and Luciano Maiani showed how its existence would account for experimental and theoretical discrepancies. In 1974, its existence was confirmed through the independent discoveries of the J/psi meson at Brookhaven National Laboratory and the Stanford Linear Accelerator Center. In the next few years, several other charmed particles, including the D meson and the charmed strange mesons, were found. In the 21st century, a baryon containing two charm quarks has been found. There is recent evidence that intrinsic charm quarks exist in the proton, and the coupling of the charm quark and the Higgs boson has been studied. Recent evidence also indicates CP violation in the decay of the D0 meson, which contains the charm quark. Naming According to Sheldon Glashow, the charm quark received its name because of the "symmetry it brought to the subnuclear world". Glashow also justified the name as "a magical device to avert evil", because adding the charm quark would prohibit unwanted and unseen decays in the three-quark theory at the time. The charm quark is also called the "charmed quark" in both academic and non-academic contexts. The symbol of the charm quark is "c". History Background In 1961, Murray Gell-Mann introduced the Eightfold Way as a pattern to group baryons and mesons.
In 1964, Gell-Mann and George Zweig independently proposed that all hadrons are composed of elementary constituents, which Gell-Mann called "quarks". Initially, only the up quark, the down quark, and the strange quark were proposed. These quarks would produce all of the particles in the Eightfold Way. Gell-Mann and Kazuhiko Nishijima had established strangeness, a quantum number, in 1953 to describe processes involving strange particles. Theoretical prediction In 1964, James Bjorken and Sheldon Glashow theorized "charm" as a new quantum number. At the time, there were four known leptons (the electron, the muon, and each of their neutrinos), but Gell-Mann initially proposed only three quarks. Bjorken and Glashow thus hoped to establish parallels between the leptons and the quarks with their theory. According to Glashow, the conjecture came from "aesthetic arguments". In 1970, Glashow, John Iliopoulos, and Luciano Maiani proposed a new quark that differed from the three then-known quarks by the charm quantum number. They further predicted the existence of "charmed particles" and offered suggestions on how to experimentally produce them. They also suggested the charmed quark could provide a mechanism, the GIM mechanism, to facilitate the unification of the weak and electromagnetic forces. At the Conference on Experimental Meson Spectroscopy (EMS) in April 1974, Glashow delivered his paper titled "Charm: An Invention Awaits Discovery". Glashow asserted that because neutral currents were likely to exist, a fourth quark was "sorely needed" to explain the rarity of the decays of certain kaons. He also made several predictions on the properties of charm quarks, and wagered that charmed particles would be discovered by the next EMS conference in 1976. In July 1974, at the 17th International Conference on High Energy Physics (ICHEP), Iliopoulos made a similar wager on the imminent discovery of charm. Applying a naturalness argument to the mass splitting between the long- and short-lived neutral kaon states, the mass of the charm quark was estimated by Mary K.
Gaillard and Benjamin W. Lee in 1974 to be less than about 1.5 GeV/c2. Discovery Glashow predicted that the down quark of a proton could absorb a W boson and become a charm quark. Then, the proton would be transformed into a charmed baryon before it decayed into several particles, including a lambda baryon. In late May 1974, Robert Palmer and Nicholas P. Samios found an event generating a lambda baryon from their bubble chamber at Brookhaven National Laboratory. It took months for Palmer to be convinced the lambda baryon came from a charmed particle. When the magnet of the bubble chamber failed in October 1974, they did not encounter the same event again. The two scientists published their observations in early 1975. Michael Riordan commented that this event was "ambiguous" and "encouraging but not convincing evidence". J/psi meson (1974) In 1974, Samuel C. C. Ting was searching for charmed particles at Brookhaven National Laboratory (BNL). His team was using an electron-pair detector. By the end of August, they had found a remarkably narrow peak at about 3.1 GeV. The team was eventually convinced they had observed a massive particle and named it "J". Ting considered announcing his discovery in October 1974, but postponed the announcement due to his concern about the μ/π ratio. At the Stanford Linear Accelerator Center (SLAC), Burton Richter's team performed experiments on 9–10 November 1974. They also found a sharp peak in the interaction probability at about 3.1 GeV. They called the particle "psi". On 11 November 1974, Richter met Ting at the SLAC, and they announced their discovery. Theorists immediately began to analyze the new particle. It was shown to have a lifetime on the scale of 10⁻²⁰ seconds, suggesting special characteristics. Thomas Appelquist and David Politzer suggested that the particle was composed of a charm quark and a charm antiquark whose spins were aligned in parallel. The two called this configuration "charmonium".
Charmonium would have two forms: "orthocharmonium", where the spins of the two quarks are parallel, and "paracharmonium", where the spins align oppositely. Murray Gell-Mann also believed in the idea of charmonium. Some other theorists, such as Richard Feynman, initially thought the new particle consisted of an up quark with a charm antiquark. On 15 November 1974, Ting and Richter issued a press release about their discovery. On 21 November at the SLAC, SPEAR found a resonance of the J/psi particle at about 3.7 GeV, as Martin Breidenbach and Terence Goldman had predicted. This particle was called ψ′ ("psi-prime"). In late November, Appelquist and Politzer published their paper theorizing charmonium. Glashow and Alvaro De Rujula also published a paper called "Is Bound Charm Found?", in which they used the charm quark and asymptotic freedom to explain the properties of the J/psi meson. Eventually, on 2 December 1974, Physical Review Letters (PRL) published the discovery papers of J and psi, by Ting and Richter respectively. The discovery of the psi-prime was published the following week. Then, on 6 January 1975, PRL published nine theoretical papers on the J/psi particle; according to Michael Riordan, five of them "promoted the charm hypothesis and its variations". In 1976, Ting and Richter shared the Nobel Prize in Physics for their discovery "of a heavy elementary particle of the new kind". In August 1976, in The New York Times, Glashow recalled his wager and commented, "John [Iliopoulos]'s wine and my hat had been saved in the nick of time". At the next EMS conference, spectroscopists ate Mexican candy hats supplied by the organizers. Frank Close wrote a Nature article titled "Iliopoulos won his bet" in the same year, saying the 18th ICHEP was "indeed dominated by that very discovery". No one paid off their bets to Iliopoulos. Other charmed particles (1975–1977) In April 1975, E. G.
Cazzoli et al., including Palmer and Samios, published their earlier ambiguous evidence for the charmed baryon. By the time of the Lepton–Photon Symposium in August 1975, eight new heavy particles had been discovered. These particles, however, had zero net charm. Starting from the fourth quarter of that year, physicists began to look for particles with a net, or "naked", charm. On 3 May 1976 at SLAC, Gerson Goldhaber and François Pierre identified a peak, which suggested the presence of a neutral charmed D meson in accordance with Glashow's prediction. On 5 May, Goldhaber and Pierre published a joint memorandum about their discovery of the "naked charm". By the time of the 18th International Conference on High Energy Physics, more charmed particles had been discovered. Riordan said "solid evidence for charm surfaced in session after session" at the conference, confirming the existence of the charm quark. The charmed strange meson was discovered in 1977. Later and current research In 2002, the SELEX Collaboration at Fermilab published the first observation of the doubly charmed baryon ("double charmed xi+"). It is a three-quark particle containing two charm quarks. The team found doubly charmed baryons with an up quark are more massive and have a higher rate of production than those with a down quark. In 2007, the BaBar and Belle collaborations each reported evidence for mixing between the neutral charmed meson D0 and its antiparticle. The evidence confirmed the mixing rate is small, as predicted by the Standard Model. Neither study found evidence for CP violation in the decays of the two charmed particles. In 2022, the NNPDF Collaboration found evidence for the existence of intrinsic charm quarks in the proton. In the same year, physicists also conducted a direct search for Higgs boson decays into charm quarks using the ATLAS detector of the Large Hadron Collider. They determined that the Higgs–charm coupling is weaker than the Higgs–bottom coupling.
On 7 July 2022, the LHCb experiment announced they had found evidence of direct CP violation in the decay of the D0 meson into pions. Characteristics The charm quark is a second-generation up-type quark. It carries charm, a quantum number. According to the 2022 Particle Physics Review, the charm quark has a mass of about 1.27 GeV/c2, a charge of +2/3 e, and a charm of +1. The charm quark is more massive than the strange quark: the ratio between the masses of the two is about 12. The CKM matrix describes the weak interaction of quarks. As of 2022, the magnitudes of the CKM matrix elements relating to the charm quark are approximately |Vcd| = 0.221, |Vcs| = 0.975, and |Vcb| = 0.041. Charm quarks can exist either in "open charm" particles, which contain one or several charm quarks, or in charmonium states, which are bound states of a charm quark and a charm antiquark. There are several charmed mesons, including the D mesons and the charmed strange mesons. Charmed baryons include the charmed lambda, sigma, xi, and omega baryons, with various charges and resonances. Production and decay Particles containing charm quarks can be produced via electron–positron collisions or in hadron collisions. Using different energies, electron–positron colliders can produce psi or upsilon mesons. Hadron colliders produce particles that contain charm quarks at a higher cross section. The W boson can also decay into hadrons containing the charm quark or the charm antiquark. The Z boson can decay into charmonium through charm quark fragmentation. The Higgs boson can also decay to charmonium through the same mechanism. The decay rate of the Higgs boson into charmonium is "governed by the charm-quark Yukawa coupling". The charm quark can decay into other quarks via weak decays. The charm quark also annihilates with the charm antiquark during the decays of ground-state charmonium mesons.
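The CKM row for the charm quark can be sanity-checked against unitarity, which requires the squared magnitudes in each row to sum to 1. The numerical values below are approximate PDG-style magnitudes assumed for illustration, not figures quoted from this article:

```python
# Approximate magnitudes of the charm-row CKM elements
# (|V_cd|, |V_cs|, |V_cb|); assumed illustrative values.
V_cd, V_cs, V_cb = 0.221, 0.975, 0.041

# Unitarity of the CKM matrix: |V_cd|^2 + |V_cs|^2 + |V_cb|^2 should be 1,
# up to experimental uncertainty.
row_sum = V_cd**2 + V_cs**2 + V_cb**2
print(round(row_sum, 3))  # close to 1
```

The sum coming out close to 1 is one of the standard consistency tests applied to measured CKM elements.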
Physical sciences
Fermions
Physics
241029
https://en.wikipedia.org/wiki/Top%20quark
Top quark
The top quark, sometimes also referred to as the truth quark (symbol: t), is the most massive of all observed elementary particles. It derives its mass from its coupling to the Higgs field. This coupling is very close to unity; in the Standard Model of particle physics, it is the largest (strongest) coupling at the scale of the weak interactions and above. The top quark was discovered in 1995 by the CDF and DØ experiments at Fermilab. Like all other quarks, the top quark is a fermion with spin-1/2 and participates in all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. It has an electric charge of +2/3 e. It has a mass of about 173 GeV/c2, which is close to the mass of a rhenium atom. The antiparticle of the top quark is the top antiquark (symbol: t̄, sometimes called antitop quark or simply antitop), which differs from it only in that some of its properties have equal magnitude but opposite sign. The top quark interacts with gluons of the strong interaction and is typically produced in hadron colliders via this interaction. However, once produced, the top (or antitop) can decay only through the weak force. It decays to a W boson and either a bottom quark (most frequently), a strange quark, or, on the rarest of occasions, a down quark. The Standard Model determines the top quark's mean lifetime to be roughly 5×10⁻²⁵ s. This is about a twentieth of the timescale for strong interactions, and therefore it does not form hadrons, giving physicists a unique opportunity to study a "bare" quark (all other quarks hadronize, meaning that they combine with other quarks to form hadrons and can only be observed as such). Because the top quark is so massive, its properties allowed indirect determination of the mass of the Higgs boson (see below). As such, the top quark's properties are extensively studied as a means to discriminate between competing theories of new physics beyond the Standard Model.
The top quark is the only quark that has been directly observed due to its decay time being shorter than the hadronization time. History In 1973, Makoto Kobayashi and Toshihide Maskawa predicted the existence of a third generation of quarks to explain observed CP violations in kaon decay. The names top and bottom were introduced by Haim Harari in 1975 to match the names of the first generation of quarks (up and down), reflecting the fact that the two were the "up" and "down" components of a weak isospin doublet. The proposal of Kobayashi and Maskawa heavily relied on the GIM mechanism put forward by Sheldon Glashow, John Iliopoulos and Luciano Maiani, which predicted the existence of the then still unobserved charm quark. (Direct evidence for the existence of quarks, including the other second-generation quark, the strange quark, was obtained in 1968; strange particles had been discovered back in 1947.) When in November 1974 teams at Brookhaven National Laboratory (BNL) and the Stanford Linear Accelerator Center (SLAC) simultaneously announced the discovery of the J/ψ meson, it was soon after identified as a bound state of the missing charm quark with its antiquark. This discovery allowed the GIM mechanism to become part of the Standard Model. With the acceptance of the GIM mechanism, Kobayashi and Maskawa's prediction also gained in credibility. Their case was further strengthened by the discovery of the tau by Martin Lewis Perl's team at SLAC between 1974 and 1978. The tau announced a third generation of leptons, breaking the new symmetry between leptons and quarks introduced by the GIM mechanism. Restoration of the symmetry implied the existence of a fifth and sixth quark. It was in fact not long until a fifth quark, the bottom, was discovered by the E288 experiment team, led by Leon Lederman at Fermilab in 1977. This strongly suggested that there must also be a sixth quark, the top, to complete the pair.
It was known that this quark would be heavier than the bottom, requiring more energy to create in particle collisions, but the general expectation was that the sixth quark would soon be found. However, it took another 18 years before the existence of the top was confirmed. Early searches for the top quark at SLAC and DESY (in Hamburg) came up empty-handed. When, in the early 1980s, the Super Proton Synchrotron (SPS) at CERN discovered the W boson and the Z boson, it was again felt that the discovery of the top was imminent. As the SPS gained competition from the Tevatron at Fermilab there was still no sign of the missing particle, and it was announced by the group at CERN that the top mass must be at least . After a race between CERN and Fermilab to discover the top, the accelerator at CERN reached its limits without creating a single top, pushing the lower bound on its mass up to . The Tevatron was (until the start of LHC operation at CERN in 2009) the only hadron collider powerful enough to produce top quarks. In order to be able to confirm a future discovery, a second detector, the DØ detector, was added to the complex (in addition to the Collider Detector at Fermilab (CDF) already present). In October 1992, the two groups found their first hint of the top, with a single creation event that appeared to contain the top. In the following years, more evidence was collected and on 22 April 1994, the CDF group submitted their article presenting tentative evidence for the existence of a top quark with a mass of about . In the meantime, DØ had found no more evidence than the suggestive event in 1992. A year later, on 2 March 1995, after having gathered more evidence and reanalyzed the DØ data (which had been searched for a much lighter top), the two groups jointly reported the discovery of the top at a mass of . 
In the years leading up to the top-quark discovery, it was realized that certain precision measurements of the electroweak vector boson masses and couplings are very sensitive to the value of the top-quark mass. These effects become much larger for higher values of the top mass, so physicists could indirectly "see" the top quark even though it could not be directly detected in any experiment at the time. The largest effect from the top-quark mass was on the T parameter, and by 1994 the precision of these indirect measurements had allowed the top-quark mass to be predicted within a relatively narrow range. The development of the techniques that ultimately allowed such precision calculations led to Gerardus 't Hooft and Martinus Veltman winning the Nobel Prize in Physics in 1999. Properties At the final Tevatron energy of 1.96 TeV, top–antitop pairs were produced with a cross section of about 7 picobarns (pb). The Standard Model prediction (at next-to-leading order) is 6.7–7.5 pb. The W bosons from top-quark decays carry polarization from the parent particle, making them a unique probe of top polarization. In the Standard Model, the top quark is predicted to have a spin quantum number of 1/2 and an electric charge of +2/3 e. A first measurement of the top-quark charge has been published, resulting in some confidence that the top-quark charge is indeed +2/3 e. Production Because top quarks are very massive, large amounts of energy are needed to create one. The only way to achieve such high energies is through high-energy collisions. These occur naturally in the Earth's upper atmosphere as cosmic rays collide with particles in the air, or can be created in a particle accelerator. In 2011, after the Tevatron ceased operations, the Large Hadron Collider at CERN became the only accelerator that generates a beam of sufficient energy to produce top quarks, with a center-of-mass energy of 7 TeV. 
There are multiple processes that can lead to the production of top quarks, but they can be conceptually divided into two categories: top-pair production and single-top production. Top-quark pairs The most common is production of a top–antitop pair via strong interactions. In a collision, a highly energetic gluon is created, which subsequently decays into a top and an antitop. This process was responsible for the majority of the top events at the Tevatron and was the process observed when the top was first discovered in 1995. It is also possible to produce pairs of top–antitop through the decay of an intermediate photon or Z boson. However, these processes are predicted to be much rarer and have a virtually identical experimental signature in a hadron collider like the Tevatron. Single top quarks The production of single top quarks via the weak interaction is a distinctly different process. This can happen in several ways (called channels): either an intermediate W boson decays into a top quark and an antibottom quark ("s-channel"), or a bottom quark (probably created in a pair through the decay of a gluon) transforms into a top quark by exchanging a W boson with an up or down quark ("t-channel"). A single top quark can also be produced in association with a W boson, requiring an initial-state bottom quark ("tW-channel"). The first evidence for these processes was published by the DØ collaboration in December 2006, and in March 2009 the CDF and DØ collaborations released twin articles with the definitive observation of these processes. The main significance of measuring these production processes is that their frequency is directly proportional to the squared magnitude |Vtb|² of the corresponding component of the CKM matrix. Decay The only known way the top quark can decay is through the weak interaction, producing a W boson and a bottom quark. Because of its enormous mass, the top quark is extremely short-lived, with a predicted lifetime of only about 5 × 10⁻²⁵ s. 
As a result, top quarks do not have time before they decay to form hadrons as other quarks do. The absence of a hadron surrounding the top quark provides physicists with the unique opportunity to study the behavior of a "bare" quark. In particular, it is possible to directly determine the branching ratio Γ(t → Wb) / Γ(t → Wq), where q stands for a down-type quark (b, s, or d). The best current determination of this ratio is consistent with unity. Since this ratio is equal to |Vtb|² according to the Standard Model (assuming the CKM matrix is unitary), this gives another way of determining the CKM element |Vtb|, or, in combination with the determination of |Vtb| from single-top production, provides tests for the assumption that the CKM matrix is unitary. The Standard Model also allows more exotic decays, but only at the one-loop level, meaning that they are extremely rare. In particular, it is conceivable that a top quark might decay into another up-type quark (an up or a charm) by emitting a photon or a Z boson. However, searches for these exotic decay modes have produced no evidence that they occur, in accordance with expectations of the Standard Model. The branching ratios for these decays have been determined to be less than 1.8 in 10,000 for photonic decay and less than 5 in 10,000 for Z-boson decay at 95% confidence. Mass and coupling to the Higgs boson The Standard Model generates fermion masses through their couplings to the Higgs boson. This Higgs boson acts as a field that fills space. Fermions interact with this field in proportion to their individual coupling constants, which generate mass. A low-mass particle, such as the electron, has a minuscule coupling, while the top quark has the largest coupling to the Higgs, y_t ≈ 1. In the Standard Model, all of the quark and lepton Higgs–Yukawa couplings are small compared to the top-quark Yukawa coupling. This hierarchy in the fermion masses remains a profound and open problem in theoretical physics. Higgs–Yukawa couplings are not fixed constants of nature; their values vary slowly with the energy scale (distance scale) at which they are measured. 
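The branching-ratio test described earlier in this section can be made concrete: in the Standard Model the ratio B(t → Wb) / B(t → Wq) equals |Vtb|² / (|Vtb|² + |Vts|² + |Vtd|²). The CKM magnitudes below are approximate, illustrative values of the kind tabulated by the Particle Data Group, not precision inputs:

```python
# Approximate CKM magnitudes (illustrative, PDG-order values).
V_td, V_ts, V_tb = 0.0086, 0.0405, 0.999

# Standard Model prediction for B(t -> W b) / B(t -> W q), q = b, s, d.
R = V_tb ** 2 / (V_tb ** 2 + V_ts ** 2 + V_td ** 2)
print(f"R ~ {R:.4f}")
```

With a unitary CKM matrix the denominator is 1, so R reduces to |Vtb|², which is why measuring R constrains that single matrix element.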
These dynamics of Higgs–Yukawa couplings, called "running coupling constants", are due to a quantum effect called the renormalization group. The Higgs–Yukawa couplings of the up, down, charm, strange and bottom quarks are hypothesized to have small values at the extremely high energy scale of grand unification. They increase in value at lower energy scales, at which the quark masses are generated by the Higgs. The slight growth is due to corrections from the QCD coupling. The corrections from the Yukawa couplings are negligible for the lower-mass quarks. One of the prevailing views in particle physics is that the size of the top-quark Higgs–Yukawa coupling is determined by a unique nonlinear property of the renormalization group equation that describes the running of the large Higgs–Yukawa coupling of the top quark. If a quark Higgs–Yukawa coupling has a large value at very high energies, its Yukawa corrections will evolve downward in mass scale and cancel against the QCD corrections. This is known as a (quasi-) infrared fixed point, which was first predicted by B. Pendleton and G.G. Ross, and by Christopher T. Hill. No matter what the initial starting value of the coupling is, if it is sufficiently large it will reach this fixed-point value, and the corresponding quark mass is then predicted. The top-quark Yukawa coupling lies very near the infrared fixed point of the Standard Model. At one loop, the renormalization group equation is: 16π² (dy/d ln μ) = y [ (9/2) y² − 8 g₃² − (9/4) g₂² − (17/12) g₁² ], where g₃ is the color gauge coupling, g₂ is the weak isospin gauge coupling, and g₁ is the weak hypercharge gauge coupling. This equation describes how the Yukawa coupling y changes with energy scale μ. Solutions to this equation for large initial values cause the right-hand side of the equation to quickly approach zero, locking the Yukawa coupling to the QCD coupling g₃. The value of the top-quark fixed point is fairly precisely determined in the Standard Model, leading to a top-quark mass of 220 GeV. 
This is about 25% larger than the observed top mass and may be hinting at new physics at higher energy scales. The quasi-infrared fixed point subsequently became the basis of top-quark condensation and topcolor theories of electroweak symmetry breaking, in which the Higgs boson is composed of a pair of top and antitop quarks. The predicted top-quark mass comes into improved agreement with the fixed point if there are additional Higgs scalars beyond the Standard Model, and may therefore be hinting at a rich spectroscopy of new Higgs fields at energy scales that can be probed with the LHC and its upgrades.
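The focusing behavior of the quasi-infrared fixed point can be illustrated numerically. The sketch below keeps only the dominant QCD and top-Yukawa terms of the one-loop running, 16π² dy/dt = y(9/2 y² − 8 g₃²) with t = ln μ, drops the electroweak terms, and uses rounded illustrative inputs (g₃(m_t) ≈ 1.16, a GUT scale of 10¹⁵ GeV); it is a toy integration, not a precision calculation:

```python
import math

# Simplified one-loop running of the top Yukawa coupling y_t:
#   16 pi^2 dy/dt = y * (9/2 y^2 - 8 g3^2),  t = ln(mu / GeV)
# Electroweak (g2, g1) terms are dropped for clarity.

PI2_16 = 16 * math.pi ** 2
T_MT, T_GUT = math.log(173.0), math.log(1e15)   # ln of scales in GeV
G3SQ_MT = 1.16 ** 2                             # illustrative g3^2 at m_t

def g3sq(t):
    """One-loop QCD coupling (6 flavours, b3 = 7), run analytically from m_t."""
    return G3SQ_MT / (1 + (14 / PI2_16) * G3SQ_MT * (t - T_MT))

def run_down(y_gut, steps=20000):
    """Euler-integrate y_t from the GUT scale down to the top-mass scale."""
    t, dt = T_GUT, (T_MT - T_GUT) / steps
    y = y_gut
    for _ in range(steps):
        y += dt * y * (4.5 * y ** 2 - 8 * g3sq(t)) / PI2_16
        t += dt
    return y

# Widely different high-scale values are focused toward the quasi-fixed point.
for y0 in (0.8, 1.5, 3.0):
    print(f"y(GUT) = {y0:<4} -> y(m_t) = {run_down(y0):.3f}")
```

Large initial couplings at the GUT scale flow to nearly the same low-scale value of order one, which is the focusing property described above: the low-scale prediction is insensitive to the high-scale starting point.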
Physical sciences
Fermions
Physics
https://en.wikipedia.org/wiki/Bottom%20quark
Bottom quark
The bottom quark, beauty quark, or b quark, is an elementary particle of the third generation. It is a heavy quark with a charge of −1/3 e. All quarks are described in a similar way by the electroweak interaction and quantum chromodynamics, but the bottom quark has exceptionally low rates of transition to lower-mass quarks. The bottom quark is also notable because it is a product in almost all top-quark decays, and is a frequent decay product of the Higgs boson. Name and history The bottom quark was first described theoretically in 1973 by physicists Makoto Kobayashi and Toshihide Maskawa to explain CP violation. The name "bottom" was introduced in 1975 by Haim Harari. The evidence for the bottom quark was first obtained in 1977 by the Fermilab E288 experiment team led by Leon M. Lederman, when proton–nucleon collisions produced bottomonium decaying to pairs of muons. The discovery was confirmed about a year later by the PLUTO and DASP2 collaborations at the electron–positron collider DORIS at DESY. It was reported at the time that DESY scientists were in favor of the name "beauty", while the American scientists tended towards "bottom". Kobayashi and Maskawa won the 2008 Nobel Prize in Physics for their explanation of CP violation. While the name "beauty" is sometimes used, "bottom" became the predominant usage by analogy of "top" and "bottom" to "up" and "down". Distinct character The bottom quark's "bare" mass is a bit more than four times the mass of a proton, and many orders of magnitude larger than that of the common "light" quarks. Although it almost exclusively transitions from or to a top quark, the bottom quark can decay into either an up quark or a charm quark via the weak interaction. The CKM matrix elements Vub and Vcb specify the rates, and both of these decays are suppressed, making the lifetimes of most bottom particles (~10⁻¹² s) somewhat longer than those of charmed particles (~10⁻¹³ s), but shorter than those of strange particles (from ~10⁻¹⁰ to ~10⁻⁸ s). 
The combination of high mass and low transition rate gives experimental collision byproducts containing a bottom quark a distinctive signature that makes them relatively easy to identify using a technique called "B-tagging". For that reason, mesons containing the bottom quark are exceptionally long-lived for their mass, and are the easiest particles to use to investigate CP violation. Such experiments are being performed at the BaBar, Belle and LHCb experiments. Hadrons containing bottom quarks Some of the hadrons containing bottom quarks include: B mesons contain a bottom quark (or its antiparticle) and an up or down quark. Bc and Bs mesons contain a bottom quark along with a charm quark or a strange quark, respectively. There are many bottomonium states, for example the ϒ meson and χb(3P), the first particle discovered at the LHC. These consist of a bottom quark and its antiparticle. Bottom baryons have been observed, and are named in analogy with strange baryons (e.g. the Λb, analogous to the Λ).
https://en.wikipedia.org/wiki/Muon%20neutrino
Muon neutrino
The muon neutrino is an elementary particle which has the symbol ν_μ and zero electric charge. Together with the muon it forms the second generation of leptons, hence the name muon neutrino. It was discovered in 1962 by Leon Lederman, Melvin Schwartz and Jack Steinberger. The discovery was rewarded with the 1988 Nobel Prize in Physics. Discovery The muon neutrino or "neutretto" was hypothesized to exist by a number of physicists in the 1940s. The first paper on it may be Shoichi Sakata and Takesi Inoue's two-meson theory of 1942, which also involved two neutrinos. In 1962 Leon M. Lederman, Melvin Schwartz and Jack Steinberger proved the existence of the muon neutrino in an experiment at the Brookhaven National Laboratory. This earned them the 1988 Nobel Prize. Speed In September 2011 OPERA researchers reported that muon neutrinos were apparently traveling faster than the speed of light. This result was confirmed in a second experiment in November 2011. These results were viewed skeptically by the scientific community at large, and more experiments investigated the phenomenon. In March 2012 the ICARUS team published results directly contradicting the results of OPERA. Later, in July 2012, the apparent anomalous superluminal propagation of neutrinos was traced to a faulty element of the fibre-optic timing system in Gran Sasso. After it was corrected, the neutrinos appeared to travel at the speed of light within the errors of the experiment.
https://en.wikipedia.org/wiki/Electron%20neutrino
Electron neutrino
The electron neutrino (ν_e) is an elementary particle which has zero electric charge and a spin of 1/2. Together with the electron, it forms the first generation of leptons, hence the name electron neutrino. It was first hypothesized by Wolfgang Pauli in 1930, to account for missing momentum and missing energy in beta decay, and was discovered in 1956 by a team led by Clyde Cowan and Frederick Reines (see Cowan–Reines neutrino experiment). Proposal In the early 1900s, theories predicted that the electrons resulting from beta decay should have been emitted at a specific energy. However, in 1914, James Chadwick showed that electrons were instead emitted in a continuous spectrum. In the early understanding of beta decay, the nucleus was thought to simply emit an electron. In 1930, Wolfgang Pauli theorized that an undetected particle was carrying away the observed difference between the energy, momentum, and angular momentum of the initial and final particles. Pauli's letter On 4 December 1930, Pauli wrote a letter to the Physical Institute of the Federal Institute of Technology, Zürich, in which he proposed the electron "neutron" [neutrino] as a potential solution to solve the problem of the continuous beta-decay spectrum. A translated excerpt of his letter reads: Dear radioactive ladies and gentlemen, As the bearer of these lines [...] will explain more exactly, considering the 'false' statistics of N-14 and Li-6 nuclei, as well as the continuous β-spectrum, I have hit upon a desperate remedy to save the "exchange theorem" of statistics and the energy theorem. Namely [there is] the possibility that there could exist in the nuclei electrically neutral particles that I wish to call neutrons, which have spin 1/2 and obey the exclusion principle, and additionally differ from light quanta in that they do not travel with the velocity of light: The mass of the neutron must be of the same order of magnitude as the electron mass and, in any case, not larger than 0.01 proton mass. 
The continuous β-spectrum would then become understandable by the assumption that in β decay a neutron is emitted together with the electron, in such a way that the sum of the energies of neutron and electron is constant. [...] But I don't feel secure enough to publish anything about this idea, so I first turn confidently to you, dear radioactives, with a question as to the situation concerning experimental proof of such a neutron, if it has something like about 10 times the penetrating capacity of a γ ray. I admit that my remedy may appear to have a small a priori probability because neutrons, if they exist, would probably have long ago been seen. However, only those who wager can win, and the seriousness of the situation of the continuous β-spectrum can be made clear by the saying of my honored predecessor in office, Mr. Debye, [...] "One does best not to think about that at all, like the new taxes." [...] So, dear radioactives, put it to test and set it right. [...] With many greetings to you, also to Mr. Back, Your devoted servant, W. Pauli A translated reprint of the full letter can be found in the September 1978 issue of Physics Today. Discovery The electron neutrino was discovered by Clyde Cowan and Frederick Reines in 1956. Name Pauli originally named his proposed light particle a neutron. When James Chadwick discovered a much more massive nuclear particle in 1932 and also named it a neutron, this left the two particles with the same name. Enrico Fermi, who developed the theory of beta decay, introduced the term neutrino in 1934 (it was jokingly coined by Edoardo Amaldi during a conversation with Fermi at the Institute of physics of via Panisperna in Rome, in order to distinguish this light neutral particle from Chadwick's neutron) to resolve the confusion. 
It was a pun on neutrone, the Italian equivalent of neutron: the -one ending can be an augmentative in Italian, so neutrone could be read as the "large neutral thing"; -ino replaces the augmentative suffix with a diminutive one ("small neutral thing"). Upon the prediction and discovery of a second neutrino, it became important to distinguish between different types of neutrinos. Pauli's neutrino is now identified as the electron neutrino, while the second neutrino is identified as the muon neutrino. Electron antineutrino The electron neutrino has a corresponding antiparticle, the electron antineutrino (), which differs only in that some of its properties have equal magnitude but opposite sign. One major open question in particle physics is whether neutrinos and anti-neutrinos are the same particle. If so, they would be Majorana fermions, whereas if not, they would be Dirac fermions. They are produced in beta decay and other types of weak interactions.
https://en.wikipedia.org/wiki/Carbon%20tetrachloride
Carbon tetrachloride
Carbon tetrachloride, also known by many other names (such as carbon tet for short and tetrachloromethane, also recognised by the IUPAC), is a chemical compound with the chemical formula CCl4. It is a non-flammable, dense, colourless liquid with a "sweet" chloroform-like odour that can be detected at low levels. It was formerly widely used in fire extinguishers, as a precursor to refrigerants, as an anthelmintic and as a cleaning agent, but has since been phased out because of environmental and safety concerns. Exposure to high concentrations of carbon tetrachloride can affect the central nervous system and degenerate the liver and kidneys. Prolonged exposure can be fatal. Properties In the carbon tetrachloride molecule, four chlorine atoms are positioned symmetrically as corners of a tetrahedron, joined to a central carbon atom by single covalent bonds. Because of this symmetric geometry, CCl4 is non-polar. Methane gas has the same structure, making carbon tetrachloride a halomethane. As a solvent, it is well suited to dissolving other non-polar compounds such as fats and oils. It can also dissolve iodine. It is volatile, giving off vapors with an odor characteristic of other chlorinated solvents, somewhat similar to the tetrachloroethylene odor reminiscent of dry cleaners' shops. Solid tetrachloromethane has two polymorphs: crystalline II below −47.5 °C (225.6 K) and crystalline I above −47.5 °C. At −47.3 °C it has a monoclinic crystal structure with space group C2/c and lattice constants a = 20.3, b = 11.6, c = 19.9 (×10⁻¹ nm), β = 111°. With a specific gravity greater than 1, carbon tetrachloride will be present as a dense non-aqueous phase liquid if sufficient quantities are spilt in the environment. Reactions Despite being generally inert, carbon tetrachloride can undergo various reactions. Hydrogen or an acid in the presence of an iron catalyst can reduce carbon tetrachloride to chloroform, dichloromethane, chloromethane and even methane. 
When its vapours are passed through a red-hot tube, carbon tetrachloride dechlorinates to tetrachloroethylene and hexachloroethane. Carbon tetrachloride, when treated with HF, gives various compounds such as trichlorofluoromethane (R-11), dichlorodifluoromethane (R-12), chlorotrifluoromethane (R-13) and carbon tetrafluoride (R-14), with HCl as the by-product: CCl4 + n HF → CCl4−nFn + n HCl (n = 1–4). This was once one of the main uses of carbon tetrachloride, as R-11 and R-12 were widely used as refrigerants. An alcoholic solution of potassium hydroxide decomposes it to potassium chloride and potassium carbonate in water: CCl4 + 6 KOH → 4 KCl + K2CO3 + 3 H2O. Carbon is sufficiently oxophilic that many oxygen-containing compounds react with carbon tetrachloride to give phosgene (COCl2). Reaction with hydrogen sulfide gives thiophosgene: CCl4 + H2S → CSCl2 + 2 HCl. History and synthesis Carbon tetrachloride was originally synthesized in 1820 by Michael Faraday, who named it "protochloride of carbon", by decomposition of hexachloroethane ("perchloride of carbon"), which he synthesized by chlorination of ethylene. The protochloride of carbon had previously been misidentified as tetrachloroethylene, because the latter can be made by the same reaction from hexachloroethane. Later in the 19th century, the name "protochloride of carbon" was used for tetrachloroethylene, and carbon tetrachloride was called "bichloride of carbon" or "perchloride of carbon". Henri Victor Regnault developed another method to synthesise carbon tetrachloride from chloroform, chloroethane or methanol with excess chlorine in 1839. Hermann Kolbe made carbon tetrachloride in 1845 by passing chlorine over carbon disulfide through a porcelain tube. Prior to the 1950s, carbon tetrachloride was manufactured by the chlorination of carbon disulfide at 105 to 130 °C: CS2 + 3 Cl2 → CCl4 + S2Cl2 But now it is mainly produced from methane: CH4 + 4 Cl2 → CCl4 + 4 HCl The production often utilizes by-products of other chlorination reactions, such as from the syntheses of dichloromethane and chloroform. 
Higher chlorocarbons are also converted to carbon tetrachloride by this process, named "chlorinolysis": C2Cl6 + Cl2 → 2 CCl4 The production of carbon tetrachloride has steeply declined since the 1980s because of environmental concerns and the decreased demand for CFCs, which were derived from carbon tetrachloride. In 1992, production in the U.S., Europe and Japan was estimated at 720,000 tonnes. Natural occurrence Carbon tetrachloride has been found along with chloromethane and chloroform in oceans, marine algae and volcanoes. The natural emissions of carbon tetrachloride are small compared to those from anthropogenic sources; for example, the Momotombo Volcano in Nicaragua emits carbon tetrachloride at a flux of 82 grams per year, while global industrial emissions were 2 × 10¹⁰ grams per year. Carbon tetrachloride was found in the red algae Asparagopsis taxiformis and Asparagopsis armata. It was also detected in Southern California ecosystems, in salt lakes of the Kalmykian Steppe and in a common liverwort in Czechia. Safety At high temperatures in air, carbon tetrachloride decomposes or burns to produce poisonous phosgene. This was a common problem when carbon tetrachloride was used as a fire extinguisher, and deaths due to its conversion to phosgene have been reported. Carbon tetrachloride is a suspected human carcinogen, although there is insufficient evidence of carcinogenicity in humans. The World Health Organization reports that carbon tetrachloride can induce hepatocellular carcinomas (hepatomas) in mice and rats. The doses inducing hepatic tumors in mice and rats are higher than those inducing cell toxicity. The International Agency for Research on Cancer (IARC) classified this compound in Group 2B, "possibly carcinogenic to humans". Carbon tetrachloride is one of the most potent hepatotoxins (toxic to the liver), so much so that it is widely used in scientific research to evaluate hepatoprotective agents. 
Exposure to high concentrations of carbon tetrachloride (including vapor) can affect the central nervous system and degenerate the liver and kidneys, and prolonged exposure may lead to coma or death. Chronic exposure to carbon tetrachloride can cause liver and kidney damage and could result in cancer. Consumption of alcohol increases the toxic effects of carbon tetrachloride and may cause more severe organ damage, such as acute renal failure, in heavy drinkers. Doses that cause only mild toxicity in non-drinkers can be fatal to drinkers. The effects of carbon tetrachloride on human health and the environment were assessed under REACH in 2012 in the context of the substance evaluation by France. In 2008, a study of common cleaning products found the presence of carbon tetrachloride in "very high concentrations" (up to 101 mg/m³) as a result of manufacturers' mixing of surfactants or soap with sodium hypochlorite (bleach). Carbon tetrachloride is also both ozone-depleting and a greenhouse gas. However, since 1992 its atmospheric concentrations have been in decline for the reasons described above (see atmospheric concentration graphs in the gallery). CCl4 has an atmospheric lifetime of 85 years. Uses In organic chemistry, carbon tetrachloride serves as a source of chlorine in the Appel reaction. Carbon tetrachloride made from heavy chlorine-37 has been used in the detection of neutrinos and antineutrinos. Raymond Davis Jr. used carbon tetrachloride in his experiments to detect antineutrinos. Historical uses Carbon tetrachloride was widely used as a dry-cleaning solvent, as a refrigerant, and in lava lamps. In the last case, carbon tetrachloride is a key ingredient that adds weight to the otherwise buoyant wax. One speciality use of carbon tetrachloride was in stamp collecting, to reveal watermarks on postage stamps without damaging them. A small amount of the liquid is placed on the back of a stamp, sitting in a black glass or obsidian tray. 
The letters or design of the watermark can then be seen clearly. Today, this is done on lit tables without using carbon tetrachloride. Cleaning Being a good solvent for many materials (such as grease and tar), carbon tetrachloride was widely used as a cleaning fluid for nearly 70 years. It is nonflammable and nonexplosive and did not leave any odour on the cleaned material, unlike gasoline, which was also used for cleaning at the time. It was used as a "safe" alternative to gasoline. It was first marketed as Katharin in 1890 or 1892, and later as Benzinoform. Carbon tetrachloride was recommended for regularly cleaning the type slugs of typewriters in office settings in the 1940s. Carbon tetrachloride was the first chlorinated solvent to be used in dry-cleaning and was used until the 1950s. It had the downsides of being corrosive to the dry-cleaning equipment and causing illness among dry-cleaning operators, and was replaced by trichloroethylene, tetrachloroethylene and methyl chloroform (trichloroethane). Carbon tetrachloride was also used as an alternative to petrol (gasoline) in dry shampoos, from the beginning of 1903 to the 1930s. Several women fainted from its fumes during hair washes in barber shops, so hairdressers often used electric fans to blow the fumes away. In 1909, a baronet's daughter, Helenora Elphinstone-Dalrymple (aged 29), died after having her hair shampooed with carbon tetrachloride. Carbon tetrachloride was reportedly still in use as a dry-cleaning solvent in North Korea as of 2006. Medical uses Anaesthetic and analgesic Carbon tetrachloride was briefly used as a volatile inhalation anaesthetic and analgesic for intense menstruation pains and headaches in the mid-19th century. Its anaesthetic effects were known as early as 1847 or 1848. It was introduced as a safer alternative to chloroform by the doctor Protheroe Smith in 1864. 
In December 1865, the Scottish obstetrician who had discovered the anaesthetic effects of chloroform on humans, James Young Simpson, experimented with carbon tetrachloride as an anaesthetic. Simpson named the compound "Chlorocarbon" for its similarity to chloroform. His experiments involved injecting carbon tetrachloride into two women's vaginas. Simpson also orally consumed carbon tetrachloride and described it as having "the same effect as swallowing a capsule of chloroform". Because of the greater number of chlorine atoms in its molecule (compared to chloroform), carbon tetrachloride has a stronger anaesthetic effect than chloroform, and a smaller amount was required. Its anaesthetic action was likened to that of ether, rather than the related chloroform. It is less volatile than chloroform, so it was more difficult to apply and needed warm water to evaporate. Its smell has been described as "fruity", quince-like and "more pleasant than chloroform", and it had a "pleasant taste". Carbon tetrachloride for anaesthetic use was made by the chlorination of carbon disulfide. It was used on at least 50 patients, most of whom were women in labour. During anaesthesia, carbon tetrachloride caused such violent muscular contractions and negative effects on the heart in some patients that it had to be replaced with chloroform or ether. Such use was experimental, and the anaesthetic use of carbon tetrachloride never gained popularity due to its potential toxicity. Parasite medication The veterinary doctor Maurice Crowther Hall (1881–1938) discovered in 1921 that ingested carbon tetrachloride was incredibly effective as an anthelminthic in eradicating hookworm. In one of the clinical trials of carbon tetrachloride, it was tested on criminals to determine its safety for use in human beings. Beginning in 1922, capsules of pure carbon tetrachloride were marketed by Merck under the name Necatorina (variants include Neo-necatorina and Necatorine). 
Necatorina was used as a medication against parasitic diseases in humans, most prevalently in Latin American countries. Its toxicity was not well understood at the time, and toxic effects were attributed to impurities in the capsules rather than to carbon tetrachloride itself. Due to carbon tetrachloride's toxicity, tetrachloroethylene (which was also investigated by Hall in 1925) replaced it as an anthelmintic by the 1940s. Solvent Carbon tetrachloride was once a popular solvent in organic chemistry, but because of its adverse health effects, it is rarely used today. It is sometimes useful as a solvent for infrared spectroscopy, because there are no significant absorption bands above 1600 cm⁻¹. Because carbon tetrachloride does not have any hydrogen atoms, it was historically used in proton NMR spectroscopy. In addition to being toxic, however, it has low dissolving power. Its use in NMR spectroscopy has been largely superseded by deuterated solvents (mainly deuterochloroform). The use of carbon tetrachloride in the determination of oil has been replaced by various other solvents, such as tetrachloroethylene. Because it has no C–H bonds, carbon tetrachloride does not easily undergo free-radical reactions. It is a useful solvent for halogenations either by the elemental halogen or by a halogenation reagent such as N-bromosuccinimide (these conditions are known as Wohl–Ziegler bromination). Fire suppression Between 1902 and 1908, carbon tetrachloride-based fire extinguishers began to appear in the United States, years after their introduction in Europe. In 1910, the Pyrene Manufacturing Company of Delaware filed a patent to use carbon tetrachloride to extinguish fires. The liquid was vaporized by the heat of combustion and extinguished flames, an early form of gaseous fire suppression. At the time it was believed the gas displaced oxygen in the area near the fire, but later research found that the gas inhibited the chemical chain reaction of the combustion process. 
In 1911, Pyrene patented a small, portable extinguisher that used the chemical. The extinguisher consisted of a brass bottle with an integrated hand-pump that was used to expel a jet of liquid toward the fire. As the container was unpressurized, it could easily be refilled after use. Carbon tetrachloride was suitable for liquid and electrical fires, and the extinguishers were often carried on aircraft or motor vehicles. However, as early as 1920, there were reports of fatalities caused by the chemical when used to fight a fire in a confined space. In the first half of the 20th century, another common fire extinguisher was a single-use, sealed glass globe, a "fire grenade", filled with carbon tetrachloride or salt water. The bulb could be thrown at the base of the flames to quench the fire. The carbon tetrachloride type could also be installed in a spring-loaded wall fixture with a solder-based restraint. When the solder was melted by high heat, the spring would either break the globe or launch it out of the bracket, allowing the extinguishing agent to be automatically dispersed into the fire. A well-known brand of fire grenade was the "Red Comet", which was variously manufactured with other fire-fighting equipment in the Denver, Colorado area by the Red Comet Manufacturing Company from its founding in 1919 until manufacturing operations were closed in the early 1980s. Since carbon tetrachloride freezes at −23 °C, the extinguishers would contain only 89–90% carbon tetrachloride plus 10% trichloroethylene (m.p. −85 °C) or chloroform (m.p. −63 °C) to lower the extinguishing mixture's freezing point to temperatures as low as −45 °C. The extinguishers with 10% trichloroethylene would also contain 1% carbon disulfide as a stabiliser. Refrigerants Prior to the Montreal Protocol, large quantities of carbon tetrachloride were used to produce the chlorofluorocarbon refrigerants R-11 (trichlorofluoromethane) and R-12 (dichlorodifluoromethane). 
However, these refrigerants play a role in ozone depletion and have been phased out. Carbon tetrachloride is still used to manufacture less destructive refrigerants. Fumigant Carbon tetrachloride was widely used as a fumigant to kill insect pests in stored grain. It was employed in a mixture known as 80/20, which was 80% carbon tetrachloride and 20% carbon disulfide. The United States Environmental Protection Agency banned its use in 1985. Another carbon tetrachloride fumigant preparation contained acrylonitrile, with the carbon tetrachloride serving to reduce the flammability of the mixture. The most common trade names for the preparation were Acritet, Carbacryl and Acrylofume. The most common preparation, Acritet, was prepared with 34 percent acrylonitrile and 66 percent carbon tetrachloride. Society and culture The French writer René Daumal intoxicated himself by inhaling the carbon tetrachloride he used to kill the beetles he collected, voluntarily plunging himself into intoxications close to comatose states in order to "encounter other worlds". Carbon tetrachloride is listed (along with salicylic acid, toluene, sodium tetraborate, silica gel, methanol, potassium carbonate, ethyl acetate and "BHA") as an ingredient in Peter Parker's (Spider-Man) custom web fluid formula in the book The Wakanda Files: A Technological Exploration of the Avengers and Beyond. Australian YouTuber Tom of Explosions&Fire and Extractions&Ire made a video on extracting carbon tetrachloride from an old fire extinguisher in 2019, later experimenting with it by mixing it with sodium; the chemical gained a fan base called "Tet Gang" on social media (especially on Reddit), and the channel owner later used carbon tetrachloride-themed designs in the channel's merchandise. In the Ramones song "Carbona Not Glue", released in 1977, the narrator says that huffing the vapours of Carbona, a carbon tetrachloride-based stain remover, was better than huffing glue. 
They later removed the song from the album because Carbona was a corporate trademark.
Famous deaths from carbon tetrachloride poisoning
Evalyn Bostock (1917–1944), British actress, died from accidentally drinking carbon tetrachloride after mistaking it for her drink while working in a photographic darkroom.
Harry Edwards (1887–1952), American director, died from carbon tetrachloride poisoning shortly after directing his first television production.
Zilphia Horton (1910–1956), American musician and activist, died from accidentally drinking a glass of carbon tetrachloride-based typewriter cleaning fluid that she mistook for water.
Margo Jones (1911–1955), American stage director, was exposed to the fumes of carbon tetrachloride used to clean paint off a carpet; she died a week later from kidney failure.
Jim Beck (1919–1956), American record producer, died after exposure to carbon tetrachloride fumes while cleaning recording equipment.
Tommy Tucker (1933–1982), American blues singer, died after using carbon tetrachloride in floor refinishing.
Gallery
Physical sciences
Halocarbons
Chemistry
241204
https://en.wikipedia.org/wiki/Requiem%20shark
Requiem shark
Requiem sharks are sharks of the family Carcharhinidae in the order Carcharhiniformes. They are migratory, live-bearing sharks of warm seas (sometimes of brackish or fresh water) and include such species as the bull shark, lemon shark, blacktip shark, and whitetip reef shark. Family members have the usual carcharhiniform characteristics. Their eyes are round, and one or two gill slits fall over the pectoral fin base. Most species are viviparous, the young being born fully developed. They vary widely in size, from the small Australian sharpnose shark up to the much larger oceanic whitetip shark. Scientists hypothesize that the size and shape of their pectoral fins are dimensioned to minimize the cost of transport. Requiem sharks tend to live in tropical areas, but also tend to migrate. Females release a chemical into the water to let males know they are ready to mate. Typical mating time for these sharks is around spring to autumn. According to the ISAF, requiem sharks are among the top five species involved in shark attacks on humans; however, "requiem shark" is not a single species, but refers, in this case, to a family of similar sharks that are often involved in incidents. ISAF prefers to use "requiem sharks" due to the difficulty in identifying individual species. Etymology The common name requiem shark may be related to the French word for shark, requin, which is itself of disputed etymology. One derivation of the latter is from Latin requiem ("rest"), which would thereby create a cyclic etymology (requiem-requin-requiem), but other sources derive it from the Old French verb reschignier ("to grimace while baring teeth"). The scientific name Carcharhinidae was first proposed in 1896 by D.S. Jordan and B.W. Evermann as a subfamily of Galeidae (now replaced by "Carcharhinidae"). The term is derived from Greek κάρχαρος (karcharos, sharp or jagged), and ῥί̄νη (rhinē, rasp); both elements describe the jagged, rasp-like skin. 
Rasp-like skin is typical of shark skin in general, and is not diagnostic of Carcharhinidae. Evolutionary history The oldest member of the family is Archaeogaleus lengadocensis from the Early Cretaceous (Valanginian) of France. Only a handful of records of the group are known from prior to the beginning of the Cenozoic. Modern carcharhinid sharks have extensively diversified in coral reef habitats. Hunting strategies Requiem sharks are extraordinarily fast and effective hunters. Their elongated, torpedo-shaped bodies make them quick and agile swimmers, well suited to pursuing prey. Some species are continually active, while others are capable of resting motionless for extended periods on the bottom. They have a range of food sources depending on location and species, including bony fish, squid, octopus, lobster, turtles, marine mammals, seabirds, other sharks and rays; smaller species tend to select a narrow range of prey, but some very large species, especially the tiger shark (Galeocerdo cuvier), are virtually omnivorous. They are often considered the "garbage cans" of the seas because they will eat almost anything, even non-food items like trash. They are migratory hunters that follow their food source across entire oceans. They tend to be most active at night, when their impressive eyesight helps them sneak up on unsuspecting prey. The tiger shark, however, possibly belongs to the separate family Galeocerdidae. Most requiem sharks hunt alone; however, some species, like the whitetip reef shark and lemon shark, are cooperative feeders and will hunt in packs through coordinated, timed attacks against their prey. Some of the species have been shown to give specialized displays when confronted by divers or other sharks, which may be indicative of aggressive or defensive threat. Classification The 60 species of requiem shark are grouped into 11 genera: Genus Scoliodon J. P. Müller & Henle, 1838 Scoliodon laticaudus J. P. 
Müller & Henle, 1838 (spadenose shark) Scoliodon macrorhynchos Bleeker, 1852 (Pacific spadenose shark) Genus Carcharhinus Blainville, 1816 Carcharhinus acronotus Poey, 1860 (blacknose shark) Carcharhinus albimarginatus Rüppell, 1837 (silvertip shark) Carcharhinus altimus S. Springer, 1950 (bignose shark) Carcharhinus amblyrhynchoides Whitley, 1934 (graceful shark) Carcharhinus amblyrhynchos Bleeker, 1856 (grey reef shark) Carcharhinus amboinensis J. P. Müller & Henle, 1839 (pigeye shark) Carcharhinus borneensis Bleeker, 1858 (Borneo shark) Carcharhinus brachyurus Günther, 1870 (copper shark) Carcharhinus brevipinna J. P. Müller & Henle, 1839 (spinner shark) Carcharhinus cautus Whitley, 1945 (nervous shark) Carcharhinus cerdale C. H. Gilbert, 1898 (Pacific smalltail shark) Carcharhinus coatesi Whitley, 1939 (Coates's shark) Carcharhinus dussumieri J. P. Müller & Henle, 1839 (whitecheek shark) Carcharhinus falciformis J. P. Müller & Henle, 1839 (silky shark) Carcharhinus fitzroyensis Whitley, 1943 (creek whaler) Carcharhinus galapagensis Snodgrass & Heller, 1905 (Galapagos shark) Carcharhinus hemiodon J. P. Müller & Henle, 1839 (Pondicherry shark) Carcharhinus humani W. T. White & Weigmann, 2014 (Human's whaler shark) Carcharhinus isodon J. P. Müller & Henle, 1839 (finetooth shark) Carcharhinus leiodon Garrick, 1985 (smoothtooth blacktip shark) Carcharhinus leucas J. P. Müller & Henle, 1839 (bull shark) Carcharhinus limbatus J. P. Müller & Henle, 1839 (blacktip shark) Carcharhinus longimanus Poey, 1861 (oceanic whitetip shark) Carcharhinus macloti J. P. 
Müller & Henle, 1839 (hardnose shark) Carcharhinus melanopterus Quoy & Gaimard, 1824 (blacktip reef shark) Carcharhinus obscurus Lesueur, 1818 (dusky shark) Carcharhinus perezi Poey, 1876 (Caribbean reef shark) Carcharhinus plumbeus Nardo, 1827 (sandbar shark) Carcharhinus porosus Ranzani, 1839 (smalltail shark) Carcharhinus sealei Pietschmann, 1913 (blackspot shark) Carcharhinus signatus Poey, 1868 (night shark) Carcharhinus sorrah J. P. Müller & Henle, 1839 (spot-tail shark) Carcharhinus tilstoni Whitley, 1950 (Australian blacktip shark) †Carcharhinus tingae Carcharhinus tjutjot Bleeker, 1852 (Indonesian whaler shark) Carcharhinus obsolerus White, Kyne, and Harris, 2019 (lost shark) Genus Glyphis Agassiz, 1843 Glyphis gangeticus J. P. Müller & Henle, 1839 (Ganges shark) Glyphis garricki Compagno, W. T. White & Last, 2008 (northern river shark) Glyphis glyphis J. P. Müller & Henle, 1839 (speartooth shark) Glyphis sp. not yet described (Mukah river shark) Genus Lamiopsis Gill, 1862 Lamiopsis temminckii J. P. Müller & Henle, 1839 (broadfin shark) Lamiopsis tephrodes Fowler, 1905 (Borneo broadfin shark) Genus Nasolamia Compagno & Garrick, 1983 Nasolamia velox C. H. Gilbert, 1898 (whitenose shark) Genus Negaprion Whitley, 1940 Negaprion acutidens Rüppell, 1837 (sicklefin lemon shark) Negaprion brevirostris Poey, 1868 (lemon shark) †Negaprion eurybathrodon Blake, 1862 Genus Prionace Cantor, 1849 Prionace glauca Linnaeus, 1758 (blue shark) Genus Rhizoprionodon Whitley, 1929 Rhizoprionodon acutus Rüppell, 1837 (milk shark) Rhizoprionodon lalandii J. P. Müller & Henle, 1839 (Brazilian sharpnose shark) Rhizoprionodon longurio D. S. Jordan & C. H. Gilbert, 1882 (Pacific sharpnose shark) Rhizoprionodon oligolinx V. G. Springer, 1964 (grey sharpnose shark) Rhizoprionodon porosus Poey, 1861 (Caribbean sharpnose shark) Rhizoprionodon taylori Ogilby, 1915 (Australian sharpnose shark) Rhizoprionodon terraenovae J. Richardson, 1836 (Atlantic sharpnose shark) Genus Loxodon J. P. 
Müller & Henle, 1838 Loxodon macrorhinus J. P. Müller & Henle, 1839 (sliteye shark) Genus Isogomphodon Gill, 1862 Isogomphodon oxyrhynchus J. P. Müller & Henle, 1839 (daggernose shark) Genus Triaenodon J. P. Müller & Henle, 1837 Triaenodon obesus Rüppell, 1837 (whitetip reef shark) Genus †Physogaleus Cappetta, 1980 †Physogaleus americanus Case, 1994 †Physogaleus contortus Gibbes, 1849 †Physogaleus hemmooriensis Reinecke & Hoedemakers, 2006 †Physogaleus huberensis Case, 1981 †Physogaleus latecuspidatus Muller, 1999 †Physogaleus latus Storms, 1894 †Physogaleus maltzani Winkler, 1875 †Physogaleus onkensis Boulemia & Adnet, 2023 †Physogaleus rosehillensis Case & Borodin, 2000 †Physogaleus secundus Winkler, 1876 †Physogaleus tertius Winkler, 1876 † = extinct
Biology and health sciences
Sharks
Animals
241223
https://en.wikipedia.org/wiki/Poisson%27s%20ratio
Poisson's ratio
In materials science and solid mechanics, Poisson's ratio (symbol: ν, nu) is a measure of the Poisson effect, the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading. The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain. For small values of these changes, ν is the amount of transversal elongation divided by the amount of axial compression. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. For soft materials, such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2 to 0.3. The ratio is named after the French mathematician and physicist Siméon Poisson. Origin Poisson's ratio is a measure of the Poisson effect, the phenomenon in which a material tends to expand in directions perpendicular to the direction of compression. Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. It is a common observation that when a rubber band is stretched, it becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative expansion and will have the same value as above. In certain rare cases, a material will actually shrink in the transverse direction when compressed (or expand when stretched), which will yield a negative value of the Poisson ratio. The Poisson's ratio of a stable, isotropic, linear elastic material must be between −1.0 and +0.5 because of the requirement for Young's modulus, the shear modulus and bulk modulus to have positive values. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. 
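The defining ratio can be illustrated with a short Python sketch (an illustrative example, not from the source; the function name and sample strains are ours):

```python
def poisson_ratio(transverse_strain, axial_strain):
    """Poisson's ratio: the negative of transverse strain over axial strain."""
    return -transverse_strain / axial_strain

# A bar stretched by 1% axially that thins by 0.3% transversally
# has nu = -(-0.003)/0.010 = 0.3, typical of many common solids.
nu = poisson_ratio(-0.003, 0.010)
```

Note that the sign convention makes ν positive for ordinary materials (which thin when stretched) and negative for auxetic ones.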
A perfectly incompressible isotropic material deformed elastically at small strains would have a Poisson's ratio of exactly 0.5. Most steels and rigid polymers when used within their design limits (before yield) exhibit values of about 0.3, increasing to 0.5 for post-yield deformation, which occurs largely at constant volume. Rubber has a Poisson ratio of nearly 0.5. Cork's Poisson ratio is close to 0, showing very little lateral expansion when compressed, and glass is between 0.18 and 0.30. Some materials, e.g. some polymer foams, origami folds, and certain cells, can exhibit negative Poisson's ratio, and are referred to as auxetic materials. If these auxetic materials are stretched in one direction, they become thicker in the perpendicular direction. In contrast, some anisotropic materials, such as carbon nanotubes, zigzag-based folded sheet materials, and honeycomb auxetic metamaterials, to name a few, can exhibit one or more Poisson's ratios above 0.5 in certain directions. Assuming that the material is stretched or compressed in only one direction (the x axis in the diagram below), the ratio is ν = −dε_y/dε_x = −ε_trans/ε_axial, where ν is the resulting Poisson's ratio, ε_trans is transverse strain and ε_axial is axial strain; positive strain indicates extension and negative strain indicates contraction. Poisson's ratio from geometry changes Length change For a cube of side L stretched in the x-direction (see Figure 1) with a length increase of ΔL in the x-direction, and a length decrease of ΔL′ in the y- and z-directions, the infinitesimal diagonal strains are given by dε_x = dx/x and dε_y = dy/y. If Poisson's ratio is constant through deformation, integrating these expressions and using the definition of Poisson's ratio gives ln(1 − ΔL′/L) = −ν ln(1 + ΔL/L). Solving and exponentiating, the relationship between ΔL and ΔL′ is then 1 − ΔL′/L = (1 + ΔL/L)^(−ν). For very small values of ΔL and ΔL′, the first-order approximation yields ΔL′ ≈ ν ΔL. Volumetric change The relative change of volume ΔV/V of a cube due to the stretch of the material can now be calculated. 
Since V = L³ and V + ΔV = (L + ΔL)(L − ΔL′)², one can derive ΔV/V = (1 + ΔL/L)(1 − ΔL′/L)² − 1. Using the above derived relationship between ΔL and ΔL′: ΔV/V = (1 + ΔL/L)^(1−2ν) − 1, and for very small values of ΔL and ΔL′, the first-order approximation yields ΔV/V ≈ (1 − 2ν)ΔL/L. For isotropic materials we can use Lamé's relation ν = 1/2 − E/(6K), where K is bulk modulus and E is Young's modulus. Width change If a rod with diameter d (or width, or thickness) and length L is subject to tension so that its length will change by ΔL, then its diameter d will change by Δd = −d ν ΔL/L. The above formula is true only in the case of small deformations; if deformations are large, then the following (more precise) formula can be used: Δd = −d(1 − (1 + ΔL/L)^(−ν)), where d is original diameter, Δd is rod diameter change, ν is Poisson's ratio, L is original length before stretch, and ΔL is the change of length. The value Δd is negative because the diameter decreases with increase of length. Characteristic materials Isotropic For a linear isotropic material subjected only to compressive (i.e. normal) forces, the deformation of a material in the direction of one axis will produce a deformation of the material along the other axes in three dimensions. Thus it is possible to generalize Hooke's law (for compressive forces) into three dimensions: ε_x = (1/E)[σ_x − ν(σ_y + σ_z)], ε_y = (1/E)[σ_y − ν(σ_z + σ_x)], ε_z = (1/E)[σ_z − ν(σ_x + σ_y)], where ε_x, ε_y and ε_z are strain in the direction of the x, y and z axes, σ_x, σ_y and σ_z are stress in the direction of the x, y and z axes, E is Young's modulus (the same in all directions for isotropic materials) and ν is Poisson's ratio (the same in all directions for isotropic materials). In the most general case, shear stresses act as well as normal stresses, and the full generalization of Hooke's law is given by ε_ij = (1/E)[(1 + ν)σ_ij − ν δ_ij σ_kk], where δ_ij is the Kronecker delta and, with the Einstein summation convention, the repeated index k is summed over. Anisotropic For anisotropic materials, the Poisson ratio depends on the direction of extension and transverse deformation. Here ν(n, m) is Poisson's ratio, E(n) is Young's modulus, n is a unit vector directed along the direction of extension, and m is a unit vector directed perpendicular to the direction of extension. 
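The three-axis form of Hooke's law described above can be sketched in Python (a minimal illustration; the function name and sample values are our own, with steel-like constants):

```python
def strains_from_stresses(sx, sy, sz, E, nu):
    """Isotropic Hooke's law for normal stresses:
    eps_x = (sigma_x - nu*(sigma_y + sigma_z)) / E, and cyclically for y, z."""
    ex = (sx - nu * (sy + sz)) / E
    ey = (sy - nu * (sz + sx)) / E
    ez = (sz - nu * (sx + sy)) / E
    return ex, ey, ez

# Uniaxial tension of 200 MPa on a steel-like solid (E = 200 GPa, nu = 0.3):
# the axial strain is 0.1%, and both transverse strains are -0.03%.
ex, ey, ez = strains_from_stresses(200e6, 0.0, 0.0, 200e9, 0.3)
```

The transverse strains come out negative, recovering the Poisson contraction −ν ε_axial for the uniaxial case.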
Poisson's ratio has a different number of special directions depending on the type of anisotropy. Orthotropic Orthotropic materials have three mutually perpendicular planes of symmetry in their material properties. An example is wood, which is most stiff (and strong) along the grain, and less so in the other directions. Then Hooke's law can be expressed in matrix form, where E_i is the Young's modulus along axis i, G_ij is the shear modulus in direction j on the plane whose normal is in direction i, and ν_ij is the Poisson ratio that corresponds to a contraction in direction j when an extension is applied in direction i. The Poisson ratio of an orthotropic material is different in each direction (ν_xy, ν_yz and ν_zx). However, the symmetry of the stress and strain tensors implies that not all the six Poisson's ratios in the equation are independent. There are only nine independent material properties: three elastic moduli, three shear moduli, and three Poisson's ratios. The remaining three Poisson's ratios can be obtained from the relations ν_yx/E_y = ν_xy/E_x, ν_zx/E_z = ν_xz/E_x and ν_zy/E_z = ν_yz/E_y. From the above relations we can see that if E_x > E_y then ν_xy > ν_yx. The larger ratio (in this case ν_xy) is called the major Poisson ratio while the smaller one (in this case ν_yx) is called the minor Poisson ratio. We can find similar relations between the other Poisson ratios. Transversely isotropic Transversely isotropic materials have a plane of isotropy in which the elastic properties are isotropic. If we assume that this plane of isotropy is the y–z plane, then Hooke's law takes a reduced form, where we have used the y–z plane of isotropy to reduce the number of constants: E_y = E_z, ν_xy = ν_xz and ν_yx = ν_zx. The symmetry of the stress and strain tensors implies that ν_xy/E_x = ν_yx/E_y. This leaves us with six independent constants E_x, E_y, G_xy, G_yz, ν_xy, ν_yz. However, transverse isotropy gives rise to a further constraint between G_yz, E_y and ν_yz, which is G_yz = E_y/(2(1 + ν_yz)). Therefore, there are five independent elastic material properties, two of which are Poisson's ratios. For the assumed plane of symmetry, the larger of ν_xy and ν_yx is the major Poisson ratio. 
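The symmetry constraint relating the major and minor Poisson ratios to the Young's moduli can be checked numerically; the sketch below uses loosely wood-like values of our own choosing, not figures from the source:

```python
def minor_poisson(nu_xy, E_x, E_y):
    """Compliance-matrix symmetry for orthotropic materials:
    nu_yx / E_y = nu_xy / E_x  =>  nu_yx = nu_xy * E_y / E_x."""
    return nu_xy * E_y / E_x

# Stiff along x (e.g. along the grain, 11 GPa), compliant along y (0.9 GPa):
# the minor ratio comes out much smaller than the major ratio of 0.38.
nu_yx = minor_poisson(0.38, 11.0e9, 0.9e9)
```

This illustrates why the major Poisson ratio is always associated with the stiffer direction: dividing by the larger modulus on the stiff side forces the paired ratio to shrink.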
The other major and minor Poisson ratios are equal. Poisson's ratio values for different materials {| class="wikitable sortable" style="border-collapse: collapse" |- bgcolor="#cccccc" ! Material ! Poisson's ratio |- | rubber | 0.4999 |- | gold | 0.42–0.44 |- | saturated clay | 0.40–0.49 |- | magnesium | 0.252–0.289 |- | titanium | 0.265–0.34 |- | copper | 0.33 |- | aluminium alloy | 0.32 |- | clay | 0.30–0.45 |- | stainless steel | 0.30–0.31 |- | steel | 0.27–0.30 |- | cast iron | 0.21–0.26 |- | sand | 0.20–0.455 |- | concrete | 0.1–0.2 |- | glass | 0.18–0.3 |- | metallic glasses | 0.276–0.409 |- | foam | 0.10–0.50 |- | cork | 0.0 |} {| class="wikitable sortable" style="border-collapse: collapse" |- bgcolor="#cccccc" !Material!!Plane of symmetry!!!!!!!!!!!! |- | Nomex honeycomb core | , ribbon in direction |0.49 |0.69 |0.01 |2.75 |3.88 |0.01 |- |glass fiber epoxy resin | |0.29 |0.32 |0.06 |0.06 |0.32 |} Negative Poisson's ratio materials Some materials known as auxetic materials display a negative Poisson's ratio. When subjected to positive strain in a longitudinal axis, the transverse strain in the material will actually be positive (i.e. it would increase the cross sectional area). For these materials, it is usually due to uniquely oriented, hinged molecular bonds. In order for these bonds to stretch in the longitudinal direction, the hinges must ‘open’ in the transverse direction, effectively exhibiting a positive strain. This can also be done in a structured way and lead to new aspects in material design as for mechanical metamaterials. Studies have shown that certain solid wood types display negative Poisson's ratio exclusively during a compression creep test. Initially, the compression creep test shows positive Poisson's ratios, but gradually decreases until it reaches negative values. 
Consequently, this also shows that Poisson's ratio for wood is time-dependent during constant loading, meaning that the strain in the axial and transverse directions do not increase at the same rate. Media with engineered microstructure may exhibit negative Poisson's ratio. In a simple case, auxeticity is obtained by removing material and creating a periodic porous medium. Lattices can reach lower values of Poisson's ratio, which can be indefinitely close to the limiting value −1 in the isotropic case. More than three hundred crystalline materials have negative Poisson's ratio, for example Li, Na, K, Cu, Rb, Ag, Fe, Ni, Co, Cs, Au, Be, Ca, Zn, Sr, Sb, MoS2 and others. Poisson function At finite strains, the relationship between the transverse and axial strains is typically not well described by the Poisson ratio. In fact, the Poisson ratio is often considered a function of the applied strain in the large strain regime. In such instances, the Poisson ratio is replaced by the Poisson function, for which there are several competing definitions. Defining the transverse stretch λ_trans and axial stretch λ_axial, where the transverse stretch is a function of the axial stretch, the most common are the Hencky, Biot, Green, and Almansi functions. The Hencky (logarithmic) function, for example, is ν^H = −ln λ_trans / ln λ_axial; the other definitions replace the logarithmic strain measure with the Biot, Green, or Almansi strain measures respectively. Applications of Poisson's effect One area in which Poisson's effect has a considerable influence is in pressurized pipe flow. When the air or liquid inside a pipe is highly pressurized it exerts a uniform force on the inside of the pipe, resulting in a hoop stress within the pipe material. Due to Poisson's effect, this hoop stress will cause the pipe to increase in diameter and slightly decrease in length. The decrease in length, in particular, can have a noticeable effect upon the pipe joints, as the effect will accumulate for each section of pipe joined in series. A restrained joint may be pulled apart or otherwise prone to failure. Another area of application for Poisson's effect is in the realm of structural geology. 
Rocks, like most materials, are subject to Poisson's effect while under stress. In a geological timescale, excessive erosion or sedimentation of Earth's crust can either create or remove large vertical stresses upon the underlying rock. This rock will expand or contract in the vertical direction as a direct result of the applied stress, and it will also deform in the horizontal direction as a result of Poisson's effect. This change in strain in the horizontal direction can affect or form joints and dormant stresses in the rock. Although cork was historically chosen to seal wine bottles for other reasons (including its inert nature, impermeability, flexibility, sealing ability, and resilience), cork's Poisson's ratio of zero provides another advantage. As the cork is inserted into the bottle, the upper part which is not yet inserted does not expand in diameter as it is compressed axially. The force needed to insert a cork into a bottle arises only from the friction between the cork and the bottle due to the radial compression of the cork. If the stopper were made of rubber, for example (with a Poisson's ratio of about +0.5), there would be a relatively large additional force required to overcome the radial expansion of the upper part of the rubber stopper. Most car mechanics are aware that it is hard to pull a rubber hose (such as a coolant hose) off a metal pipe stub, as the tension of pulling causes the diameter of the hose to shrink, gripping the stub tightly. (This is the same effect as shown in a Chinese finger trap.) Instead, hoses can more easily be pushed off stubs, for example with the help of a wide flat blade.
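The cork-versus-rubber comparison follows directly from the first-order width-change formula Δd = −d ν ΔL/L; a small Python sketch (illustrative dimensions and function name are ours):

```python
def diameter_change(d, L, dL, nu):
    """First-order transverse (width) change of a rod:
    delta_d = -d * nu * dL / L  (negative when the rod is stretched)."""
    return -d * nu * dL / L

# Axially compressing a 20 mm stopper over a 40 mm length by 2 mm (dL = -2):
rubber = diameter_change(20.0, 40.0, -2.0, 0.5)  # bulges outward by 0.5 mm
cork = diameter_change(20.0, 40.0, -2.0, 0.0)    # no lateral expansion at all
```

The rubber stopper's bulge is what presses it against the bottle neck, while the cork's zero Poisson ratio leaves only friction from the pre-existing radial compression.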
Physical sciences
Solid mechanics
Physics
241247
https://en.wikipedia.org/wiki/Bellows
Bellows
A bellows or pair of bellows is a device constructed to furnish a strong blast of air. The simplest type consists of a flexible bag comprising a pair of rigid boards with handles joined by flexible leather sides enclosing an approximately airtight cavity which can be expanded and contracted by operating the handles, and fitted with a valve allowing air to fill the cavity when expanded, and with a tube through which the air is forced out in a stream when the cavity is compressed. It has many applications, in particular blowing on a fire to supply it with air. The term "bellows" is used by extension for a flexible bag whose volume can be changed by compression or expansion, but not used to deliver air. For example, the light-tight (but not airtight) bag allowing the distance between the lens and film of a folding photographic camera to be varied is called a bellows. Etymology "Bellows" is used only in the plural. The Old English name for "bellows" was blǣstbelg, 'blast-bag', 'blowing-bag'; the prefix was dropped and by the eleventh century the simple belg, bylig ('bag') was used. The word is cognate with "belly". There are similar words in Old Norse, Swedish, Danish and Dutch (blaasbalg), but the derivation is not certain. 'Bellows' appears not to be cognate with the apparently similar Latin follis. Metallurgy Several processes, such as metallurgical iron smelting and welding, require so much heat that they could only be developed after the invention, in antiquity, of the bellows. The bellows are used to deliver additional air to the fuel, raising the rate of combustion and therefore the heat output. Various kinds of bellows are used in metallurgy: Box bellows were and are traditionally used in East Asia. Pot bellows were used in ancient Egypt. Tatara foot bellows from Japan. Accordion bellows, with the characteristic pleated sides, have been used in Europe for many centuries. 
Piston bellows developed in Southeast Asia (probably among the Austronesian peoples) using the principles of the similarly indigenous fire piston, and led to the independent development of bronze and iron metallurgy in Southeast Asia. They were present in various Southeast Asian cultures, and the technology was transported to Madagascar via the Austronesian expansion. The technology was later adopted and refined by the Han Chinese into the double-action piston bellows, completely replacing the native Chinese ox-hide pot or drum bellows. Piston bellows were independently developed in the middle of the 18th century in Europe. Metal bellows were made to absorb axial movement in a dynamic condition; they are often referred to as axial-dynamic bellows. Chinese bellows were originally made of ox hide with two pots, as described in Mozi's book on military technology in the Warring States period (4th century BC). By the Han dynasty, contact with Southeast Asian cultures exposed the Chinese to the bamboo-based piston bellows of Southeast Asians. The acquired piston-bellows technology so completely replaced the Chinese ox-hide bellows that by the Song dynasty the ox-hide type was extinct. The Han dynasty Chinese mechanical engineer Du Shi (d. 38) is credited with being the first to use hydraulic power, through a waterwheel, to operate double-action piston bellows in metallurgy. His invention was used to operate the piston bellows of blast furnaces in order to forge cast iron. The ancient Greeks, ancient Romans, and other civilizations used bellows in bloomery furnaces producing wrought iron. Bellows are also used to send pressurized air in a controlled manner into a fired heater. In modern industry, reciprocating bellows are usually replaced with motorized blowers. 
Double-acting piston bellows Double-acting piston bellows are a type of bellows used by blacksmiths and smelters to increase the air flow going into the forge, with the property that air is blown out on both strokes of the handle (in contrast to simpler and more common bellows that blow air when the stroke is in one direction and refill the bellows in the other direction). These bellows blow a more constant, and thus stronger, blast than simple bellows. Such bellows existed in China at least since the 5th century BC, when they were invented, and had reached Europe by the 16th century. In 240 BC, the ancient Greek inventor Ctesibius of Alexandria independently invented a double-action piston pump, used to lift water from one level to another. A piston is enclosed in a rectangular box with a handle coming out of one side. The piston edges are covered with feathers, fur, or soft paper to ensure that it is airtight and lubricated. As the piston is pulled, air enters from the far side and the air in the near chamber is compressed and forced into a side chamber, where it flows through the nozzle. Then, as the piston is pushed, air enters from the near side and the air in the far chamber is forced through the same nozzle. Double-lung accordion bellows These have three leaves. The middle leaf is fixed in place. The bottom leaf is moved up and down. The top leaf can move freely and has a weight on it. The bottom and the middle leaves contain valves; the top one does not. Only the top lung is connected to the spout. When the bottom leaf is moved up, air is pumped from the bottom lung into the top lung. At the same time air is leaving the bellows from the top lung through the spout, but at a slower rate. This inflates the top lung. Next the bottom leaf is moved down to pull fresh air into the bellows. While this happens, the weight on the top leaf pushes it down, so air keeps leaving through the spout. 
This design does not increase the amount of air flow going into the forge, but provides a more constant air flow than a simple bellows. It also provides more even air flow than two simple bellows pumped alternately or one double-acting piston bellows.

Primitive bellows

In archaeological ruins of the Levant, archaeologists have found primitive pot bellows, consisting of a ceramic pot to which a loose leather hide had been attached at the top. Such pot bellows were constructed with a wide rim, so that the hide covering would transmit a maximum amount of air when pumped. The covering was fastened to the pot with a cord under an out-turned rim, or in a groove just below the rim exterior. An opening near the base served to hold a pipe of perishable material that directed the air blast to either the furnace or crucible, usually through the mediation of a tuyère. Tuyères used in conjunction with pot bellows had the function of protecting the ends of the perishable tubes leading from the pot into the fire. Places in Saharan Africa still make use of primitive pot bellows.

Further applications

Fluid transfer applications

Bellows are used extensively in hydraulic power circuits and cooling loops. They are an essential part of anesthesia machines. Cuckoo clocks use bellows to blow air through their gedackt (pipes) and imitate the call of the common cuckoo. Musical instruments may employ bellows as a substitute or regulator for air pressure provided by the human lungs:

Accordion, concertina and related instruments
Reed organ
Pipe organ
Musette de cour, uilleann pipes and some other varieties of bagpipes
Harmonium and melodeon
Portative organ

Expansion joint applications

The term "bellows" is used by extension for a number of applications that do not involve air transfer. 
Bellows are widely used in industrial and mechanical applications such as rod boots, machinery way covers, lift covers and rail covers to protect rods, bearings and seals from dirt. Bellows are widely used on articulated buses and trams to cover the joint where the vehicle bends. Bellows are used in mechanical aneroid instruments, acting as a precision indicator of pressure level through their lateral movement. Bellows tubing, a type of lightweight, flexible, extensible tubing, may be used for delivery of gas or air at near-ambient pressure, as in early aqua-lung designs. Folding and view cameras use bellows to exclude light while allowing the lens to be moved relative to the film plane for focusing and, mainly in view cameras, to allow the lens to slide and tilt to control the image (camera movements). Piping expansion joint: in this application, bellows are formed in series to absorb thermal movement and vibration in piping systems that transport high-temperature media such as exhaust gases or steam.

Beekeeping

Bee smokers have bellows attached to the side to provide air to a slow-burning fuel. This allows for an increased rate of combustion and a temporarily higher output of smoke on command, something desirable when calming domesticated bees.
https://en.wikipedia.org/wiki/12-hour%20clock
12-hour clock
The 12-hour clock is a time convention in which the 24 hours of the day are divided into two periods: a.m. (from Latin ante meridiem, translating to "before midday") and p.m. (from Latin post meridiem, translating to "after midday"). Each period consists of 12 hours numbered 12 (acting as 0), 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11. The 12-hour clock has developed since the second millennium BC and reached its modern form in the 16th century. The 12-hour time convention is common in several English-speaking nations and former British colonies, as well as a few other countries. There is no widely accepted convention for how midday and midnight should be represented; in English-speaking countries, "12 p.m." indicates 12 o'clock noon, while "12 a.m." means 12 o'clock midnight.

History and use

The natural day-and-night division of a calendar day forms the fundamental basis for why each day is split into two cycles. Originally there were two cycles: one that could be tracked by the position of the Sun (day), followed by one that could be tracked by the Moon and stars (night). This eventually evolved into the two 12-hour periods used today, one called "a.m." starting at midnight and another called "p.m." starting at noon. Noon itself is rarely abbreviated today, but if it is, it is denoted "m." The 12-hour clock can be traced back as far as Mesopotamia and ancient Egypt. Both an Egyptian sundial for daytime use and an Egyptian water clock for night-time use were found in the tomb of Pharaoh Amenhotep I. Dating to c. 1500 BC, these clocks divided their respective times of use into 12 hours each. The Romans also used a 12-hour clock: daylight was divided into 12 equal hours (so the hours had varying length throughout the year) and the night was divided into four watches. 
The first mechanical clocks in the 14th century, if they had dials at all, showed all 24 hours using the 24-hour analog dial, influenced by astronomers' familiarity with the astrolabe and sundial and by their desire to model the Earth's apparent motion around the Sun. In Northern Europe these dials generally used the 12-hour numbering scheme in Roman numerals but showed both a.m. and p.m. periods in sequence. This is known as the double-XII system and can be seen on many surviving clock faces, such as those at Wells and Exeter. Elsewhere in Europe, numbering was more likely to be based on the 24-hour system (I to XXIV). The 12-hour clock was used throughout the British Empire. During the 15th and 16th centuries, the 12-hour analog dial and time system gradually became established as standard throughout Northern Europe for general public use. The 24-hour analog dial was reserved for more specialized applications, such as astronomical clocks and chronometers. Most analog clocks and watches today use the 12-hour dial, on which the shorter hour hand rotates once every 12 hours and twice in one day. Some analog clock dials have an inner ring of numbers along with the standard 1-to-12 numbered ring. The number 12 is paired either with a 00 or a 24, while the numbers 1 through 11 are paired with the numbers 13 through 23, respectively. This modification allows the clock to also be read in 24-hour notation. This kind of 12-hour clock can be found in countries where the 24-hour clock is preferred.

Use by country

In several countries the 12-hour clock is the dominant written and spoken system of time, predominantly in nations that were part of the former British Empire, for example the United Kingdom, Republic of Ireland, the United States, Canada (excluding Quebec), Australia, New Zealand, South Africa, India, Pakistan, and Bangladesh; other countries, such as Mexico and the former American colony of the Philippines, follow the convention as well. 
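The inner-ring pairing described above (12 with 00 or 24, and 1 through 11 with 13 through 23) can be sketched in Python. This is illustrative only; the function name is ours:

```python
def inner_ring(outer: int) -> tuple[int, ...]:
    """Return the 24-hour inner-ring number(s) paired with a number
    on a 12-hour dial: 12 pairs with 00 (or 24), while 1-11 pair
    with 13-23 respectively."""
    if not 1 <= outer <= 12:
        raise ValueError("a 12-hour dial shows the numbers 1 through 12")
    # 12 carries both pairings; every other number is offset by 12
    return (0, 24) if outer == 12 else (outer + 12,)
```

Reading the inner ring gives the afternoon or evening hour directly; for example, the dial number 5 sits beside 17.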
Even in countries where the 12-hour clock is predominant, there are frequently contexts (such as science, medicine, the military or transport) in which the 24-hour clock is preferred. In most countries, however, the 24-hour clock is the standard system used, especially in writing. Some nations in Europe and Latin America use a combination of the two, preferring the 12-hour system in colloquial speech but using the 24-hour system in written form and in formal contexts. The 12-hour clock in speech often uses phrases such as ... in the morning, ... in the afternoon, ... in the evening, and ... at night. Rider's British Merlin almanac for 1795 and a similar almanac for 1773 published in London used these phrases. Other than in English-speaking countries and some Spanish-speaking countries, the terms a.m. and p.m. are seldom used and often unknown.

Computer support

In most countries, computers by default show the time in 24-hour notation. Most operating systems, including Microsoft Windows and Unix-like systems such as Linux and macOS, activate the 12-hour notation by default only for a limited number of language and region settings. This behaviour can be changed by the user, such as with the Windows operating system's "Region and Language" settings.

Abbreviations

The Latin abbreviations a.m. and p.m. (often written "am" and "pm", "AM" and "PM", or "A.M." and "P.M.") are used in English and Spanish. Equivalent abbreviations exist in Greek and Sinhala; however, noon is rarely abbreviated in either of these languages, normally being written in full. In Portuguese, two official formats and many unofficial ones are used; for example, 21h45 or 21h45min (official) as well as 21:45 or 9:45 p.m. In Irish, a.m. and i.n. are used, standing for ar maidin ("in the morning") and iarnóin ("afternoon") respectively. 
Most other languages lack formal abbreviations for "before noon" and "after noon", and their speakers use the 12-hour clock only orally and informally. However, in many languages, such as Russian and Hebrew, informal designations are used, such as "9 in the morning" or "3 in the night". When abbreviations and phrases are omitted, one may rely on sentence context and societal norms to reduce ambiguity. For example, if one commutes to work at "9:00", 9:00 a.m. may be implied, but if a social dance is scheduled to begin at "9:00", it may begin at 9:00 p.m.

Related conventions

Typography

The terms "a.m." and "p.m." are abbreviations of the Latin ante meridiem (before midday) and post meridiem (after midday). Depending on the style guide referenced, the abbreviations are variously written in small capitals, uppercase letters without periods ("AM" and "PM"), uppercase letters with periods, or lowercase letters ("am" and "pm" or "a.m." and "p.m."). With the advent of computer-generated and printed schedules, especially in airline, advertising, and television listings, the "M" character is often omitted as providing no additional information, as in "9:30A" or "10:00P". Some style guides suggest the use of a space between the number and the a.m. or p.m. abbreviation. Style guides also recommend not using a.m. and p.m. without a time preceding them. The hour/minute separator varies between countries: some use a colon, others use a period (full stop), and still others use the letter h. (In some usages, particularly "military time" on the 24-hour clock, there is no separator between hours and minutes; this style is not generally seen when the 12-hour clock is used.)

Encoding

Unicode specifies codepoints for a.m. and p.m. 
as precomposed characters, which are intended to be used only with Chinese-Japanese-Korean (CJK) character sets, as they take up exactly the same space as one CJK character.

Informal speech and rounding off

In speaking, it is common to round the time to the nearest five minutes and/or express the time as past (or to) the closest hour; for example, "five past five" (5:05). Minutes past the hour means those minutes are added to the hour; "ten past five" means 5:10. Minutes to, 'til and of the hour mean those minutes are subtracted; "ten of five", "ten 'til five", and "ten to five" all mean 4:50. Fifteen minutes is often called a quarter hour, and thirty minutes is often known as a half hour. For example, 5:15 can be phrased "(a) quarter past five" or "five-fifteen"; 5:30 can be "half past five", "five-thirty" or simply "half five". The time 8:45 may be spoken as "eight forty-five" or "(a) quarter to nine". In older English, it was common for the number 25 to be expressed as "five-and-twenty". In this way the time 8:35 could be phrased as "five-and-twenty to 9", although this styling fell out of fashion in the later part of the 1900s and is now rarely used. Instead of meaning 5:30, the "half five" expression is sometimes used to mean 4:30, or "halfway to five", especially in regions such as the American Midwest and other areas that have been particularly influenced by German culture. This meaning follows the pattern of many Germanic and Slavic languages, including Serbo-Croatian, Dutch, Danish, Russian, and Swedish, as well as Hungarian, Finnish, and the languages of the Baltic states. Moreover, in situations where the relevant hour is obvious or has been recently mentioned, a speaker might omit the hour and just say "quarter to (the hour)", "half past" or "ten 'til" to avoid an elaborate sentence in informal conversations. These forms are commonly used in television and radio broadcasts that cover multiple time zones at one-hour intervals. 
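The "past/to" conventions above can be sketched for times on five-minute boundaries. This is a hypothetical helper using one common reading of the phrases; real speech varies:

```python
# Number words for the five-minute marks used in 'past'/'to' phrases.
WORDS = {5: "five", 10: "ten", 20: "twenty", 25: "twenty-five"}

def spoken(hour: int, minute: int) -> str:
    """Informal phrasing for a 12-hour time on a five-minute boundary:
    minutes below 30 are 'past' the hour, above 30 are 'to' the next
    hour, with 'quarter' for 15 and 45 and 'half' for 30."""
    nxt = hour % 12 + 1  # hour name used in 'to' phrases
    if minute == 0:
        return f"{hour} o'clock"
    if minute == 15:
        return f"quarter past {hour}"
    if minute == 30:
        return f"half past {hour}"
    if minute == 45:
        return f"quarter to {nxt}"
    if minute < 30:
        return f"{WORDS[minute]} past {hour}"
    return f"{WORDS[60 - minute]} to {nxt}"
```

Here spoken(5, 10) gives "ten past 5" and spoken(8, 45) gives "quarter to 9", matching the examples in the text.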
In describing a vague time of day, a speaker might say the phrase "seven-thirty, eight" to mean sometime around 7:30 or 8:00. Such phrasing can be misinterpreted as a specific time of day (here 7:38), especially by a listener not expecting an estimation. The phrase "about seven-thirty or eight" avoids this. Other ambiguous phrasing might also be avoided: within five minutes of the hour, the phrase "five of seven" (6:55) can be misheard as "five-oh-seven" (5:07). "Five to seven" or even "six fifty-five" is clearer.

Formal speech and times to the minute

Minutes may be expressed as an exact number of minutes past the hour specifying the time of day (e.g., 6:32 p.m. is "six thirty-two"). Additionally, when expressing the time using the "past (after)" or "to (before)" formula, it is conventional to choose the number of minutes below 30 (e.g., 6:32 p.m. is conventionally "twenty-eight minutes to seven" rather than "thirty-two minutes past six"). In spoken English, full hours are often represented by the numbered hour followed by o'clock (10:00 as ten o'clock, 2:00 as two o'clock). This may be followed by the "a.m." or "p.m." designator, though some phrases such as in the morning, in the afternoon, in the evening, or at night more commonly follow analog-style terms such as o'clock, half past three, and quarter to four. O'clock itself may be omitted, telling a time as four a.m. or four p.m. Minutes ":01" to ":09" are usually pronounced as oh one to oh nine (nought or zero can also be used instead of oh). Minutes ":10" to ":59" are pronounced as their usual number-words. For instance, 6:02 a.m. can be pronounced six oh two a.m., whereas 6:32 a.m. could be told as six thirty-two a.m.

Confusion at noon and midnight

It is not always clear what times "12:00 a.m." and "12:00 p.m." denote. From the Latin words meridies (midday), ante (before) and post (after), the term ante meridiem (a.m.) means before midday and post meridiem (p.m.) means after midday. 
Since "noon" (midday, m.) is neither before nor after itself, the terms a.m. and p.m. do not apply. Although "12 m." was suggested as a way to indicate noon, this is seldom done, and it also does not resolve the question of how to indicate midnight. The American Heritage Dictionary of the English Language states "By convention, 12 AM denotes midnight and 12 PM denotes noon. Because of the potential for confusion, it is advisable to use 12 noon and 12 midnight". E. G. Richards in his book Mapping Time (1999) provided a diagram in which 12 a.m. means noon and 12 p.m. means midnight. Historically, the style manual of the United States Government Printing Office used 12 a.m. for noon and 12 p.m. for midnight until its 2008 edition, at which point it reversed these designations, retaining the change in its 2016 revision. Many U.S. style guides, and NIST's "Frequently asked questions (FAQ)" web page, recommend that it is clearest to refer to "noon" or "12:00 noon" and "midnight" or "12:00 midnight" (rather than to "12:00 p.m." and "12:00 a.m."). The NIST website states that "12 a.m. and 12 p.m. are ambiguous and should not be used." The Associated Press Stylebook specifies that midnight "is part of the day that is ending, not the one that is beginning." The Canadian Press Stylebook says, "write noon or midnight, not 12 noon or 12 midnight." Phrases such as "12 a.m." and "12 p.m." are not mentioned at all. In the UK, the National Physical Laboratory "FAQ-Time" web page states "In cases where the context cannot be relied upon to place a particular event, the pair of days straddling midnight can be quoted"; also "the terms 12 a.m. and 12 p.m. should be avoided." Likewise, some U.S. style guides recommend either clarifying "midnight" with other context clues, such as specifying the two dates between which it falls, or not using the term at all. For an example of the latter method, "midnight" is replaced with "11:59 p.m." for the end of a day or "12:01 a.m." 
for the start of a day. That has become common in the United States in legal contracts and for airplane, bus, or train schedules, though some schedules use other conventions. Occasionally, when trains run at regular intervals, the pattern may be broken at midnight by displacing the midnight departure by one or more minutes, such as to 11:59 p.m. or 12:01 a.m. In Japanese usage, midnight is written as 0 a.m. and noon as 0 p.m., making the hours numbered sequentially from 0 to 11 in both halves of the day. Alternatively, noon may be written as 12 a.m. and midnight at the end of the day as 12 p.m., as opposed to 0 a.m. for the start of the day, making the Japanese convention the opposite of the English usage of 12 a.m. and 12 p.m.
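The common English convention discussed in this section (12 a.m. for midnight, 12 p.m. for noon) amounts to the following mapping from 24-hour to 12-hour notation. This is a minimal sketch; the function name is ours:

```python
def to_12_hour(hour24: int) -> str:
    """Convert a 24-hour clock hour to 12-hour notation under the
    common English convention: 0 -> 12 a.m. (midnight) and
    12 -> 12 p.m. (noon). Note that NIST and others recommend
    avoiding these ambiguous forms in favour of 'noon'/'midnight'."""
    if not 0 <= hour24 <= 23:
        raise ValueError("hour must be in the range 0-23")
    suffix = "a.m." if hour24 < 12 else "p.m."
    hour12 = hour24 % 12 or 12  # hours 0 and 12 both display as 12
    return f"{hour12} {suffix}"
```

For example, to_12_hour(0) gives "12 a.m." and to_12_hour(12) gives "12 p.m.", which is exactly the pairing the dictionary convention above describes.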
https://en.wikipedia.org/wiki/Bovidae
Bovidae
The Bovidae comprise the biological family of cloven-hoofed, ruminant mammals that includes cattle, bison, buffalo, antelopes (including goat-antelopes), sheep and goats. A member of this family is called a bovid. With 143 extant species and 300 known extinct species, the family Bovidae consists of eight major subfamilies and thirteen major tribes. The family evolved 20 million years ago, in the early Miocene. The bovids show great variation in size and pelage colouration. Excepting some domesticated forms, all male bovids have horns, and in many species, females possess horns, too. The size and shape of the horns vary greatly, but the basic structure is always one or more pairs of simple bony protrusions without branches, often having a spiral, twisted or fluted form, each covered in a permanent sheath of keratin. Most bovids bear 30 to 32 teeth. Most bovids are diurnal. Social activity and feeding usually peak during dawn and dusk. Bovids typically rest before dawn, during midday, and after dark. They have various methods of social organisation and social behaviour, which are classified into solitary and gregarious behaviour. Bovids use different forms of vocal, olfactory, and tangible communication. Most species alternately feed and ruminate throughout the day. While small bovids forage in dense and closed habitat, larger species feed on high-fiber vegetation in open grasslands. Most bovids are polygynous. Mature bovids mate at least once a year, and smaller species may even mate twice. In some species, neonate bovids remain hidden for a week to two months, regularly nursed by their mothers; in other species, neonates are followers, accompanying their dams rather than tending to remain hidden. The greatest diversity of bovids occurs in Africa, with the maximum concentration of species in the savannas of Eastern Africa. Other bovid species also occur in Europe, Asia, and North America. 
Bovidae includes a number of domesticated species, including three whose use has spread worldwide: cattle, sheep, and goats. Dairy products, such as milk, butter, and cheese, are manufactured largely from domestic cattle. Bovids are also raised for their leather, meat, and wool.

Naming and etymology

The name "Bovidae" was given by the British zoologist John Edward Gray in 1821. The word "Bovidae" is the combination of the prefix bov- (originating from Latin bos, "ox", through Late Latin bovinus) and the suffix -idae.

Taxonomy

The family Bovidae is placed in the order Artiodactyla (which includes the even-toed ungulates). It includes 143 extant species, accounting for nearly 55% of the ungulates, and 300 known extinct species. Until the beginning of the 21st century it was understood that the family Moschidae (musk deer) was sister to Cervidae. However, a 2003 phylogenetic study by Alexandre Hassanin (of the National Museum of Natural History, France) and colleagues, based on mitochondrial and nuclear analyses, revealed that Moschidae and Bovidae form a clade sister to Cervidae. According to the study, Cervidae diverged from the Bovidae-Moschidae clade 27 to 28 million years ago. The following cladogram is based on the 2003 study. Molecular studies have supported the monophyly of the family Bovidae (that is, that the family comprises an ancestral species and all of its descendants). The number of subfamilies in Bovidae is disputed, with suggestions of as many as ten and as few as two subfamilies. 
However, molecular, morphological and fossil evidence indicates the existence of eight distinct subfamilies: Aepycerotinae (consisting of just the impala), Alcelaphinae (bontebok, hartebeest, wildebeest and relatives), Antilopinae (several antelopes, gazelles, and relatives), Bovinae (cattle, buffaloes, bison and other antelopes), Caprinae (goats, sheep, ibex, serows and relatives), Cephalophinae (duikers), Hippotraginae (addax, oryx and relatives) and Reduncinae (reedbuck and kob antelopes). In addition, three extinct subfamilies are known: Hypsodontinae (mid-Miocene), Oiocerinae (Turolian) and Tethytraginae, which contains Tethytragus (mid-Miocene). In 1992, Alan W. Gentry of the Natural History Museum, London divided the eight major subfamilies of Bovidae into two major clades on the basis of their evolutionary history: the Boodontia, which comprised only the Bovinae, and the Aegodontia, which consisted of the rest of the subfamilies. Boodonts have somewhat primitive teeth, resembling those of oxen, whereas aegodonts have more advanced teeth like those of goats. A controversy exists over the recognition of Peleinae and Pantholopinae, comprising the genera Pelea and Pantholops respectively, as subfamilies. In 2000, the American biologist George Schaller and palaeontologist Elisabeth Vrba suggested the inclusion of Pelea in Reduncinae, though the grey rhebok, the sole species of Pelea, differs markedly from kobs and reduncines in morphology. Pantholops, earlier classified in the Antilopinae, was later placed in its own subfamily, Pantholopinae. However, molecular and morphological analysis supports the inclusion of Pantholops in Caprinae. Below is a cladogram based on Yang et al., 2013 and Calamari, 2021. Alternatively, all members of the Aegodontia can be classified within the subfamily Antilopinae, with the individual subfamilies being treated as tribes. 
Evolutionary history

Early Miocene and before

In the early Miocene, bovids began diverging from the cervids (deer) and giraffids. The earliest bovids, whose presence in Africa and Eurasia in the latter part of the early Miocene (20 Mya) has been ascertained, were small animals, somewhat similar to modern gazelles, and probably lived in woodland environments. Eotragus, the earliest known bovid, was nearly the same in size as the Thomson's gazelle. Early in their evolutionary history, the bovids split into two main clades: Boodontia (of Eurasian origin) and Aegodontia (of African origin). This early split between Boodontia and Aegodontia has been attributed to the continental divide between these land masses. When the continents were later rejoined, this barrier was removed, and both groups expanded into the territory of the other. The tribes Bovini and Tragelaphini diverged in the early Miocene. Bovids are known to have reached the Americas in the Pleistocene by crossing the Bering land bridge. The present genera of Alcelaphinae appeared in the Pliocene. The extinct alcelaphine genus Paramularius, which was similar in size to the hartebeest, is believed to have come into being in the Pliocene, but became extinct in the middle Pleistocene. Several genera of Hippotraginae are known since the Pliocene and Pleistocene. This subfamily appears to have diverged from the Alcelaphinae in the latter part of the early Miocene. The Bovinae are believed to have diverged from the rest of the Bovidae in the early Miocene. The Boselaphini became extinct in Africa in the early Pliocene; their latest fossils were excavated in Langebaanweg (South Africa) and Lothagam (Kenya).

Middle Miocene

The middle Miocene marked the spread of the bovids into China and the Indian subcontinent. According to Vrba, the radiation of the subfamily Alcelaphinae began in the latter part of the middle Miocene. The Caprinae tribes probably diverged in the early middle Miocene. 
The Caprini emerged in the middle Miocene, and seem to have been replaced by other bovids and cervids in Eurasia. The earliest fossils of the antilopines are from the middle Miocene, though studies show the existence of the subfamily from the early Miocene. Speciation occurred in the tribe Antilopini during the middle or upper Miocene, mainly in Eurasia. The tribe Neotragini seems to have appeared in Africa by the end of the Miocene, and had become widespread by the Pliocene.

Late Miocene

By the late Miocene, around 10 Mya, the bovids rapidly diversified, leading to the creation of 70 new genera. This late Miocene radiation was partly because many bovids became adapted to more open, grassland habitats. The Aepycerotinae first appeared in the late Miocene, and no significant difference in the sizes of the primitive and modern impala has been noted. Fossils of ovibovines, a tribe of Caprinae, in Africa date back to the late Miocene. The earliest hippotragine fossils date back to the late Miocene, and were excavated from sites such as Lothagam and the Awash Valley. The first African fossils of Reduncinae date back to 6–7 Mya. Reduncinae and Peleinae probably diverged in the mid-Miocene.

Plio-Pleistocene

African bovids continued becoming more adapted to mixed feeding, as indicated by dental mesowear evidence, as their palaeoenvironment opened up.

Characteristics

All bovids have a similar basic form: a snout with a blunt end, one or more pairs of horns (generally present on males) immediately after the oval or pointed ears, a distinct neck and limbs, and a tail varying in length and bushiness among the species. Most bovids exhibit sexual dimorphism, with males usually larger and heavier than females. Sexual dimorphism is more prominent in medium- to large-sized bovids. All bovids have four toes on each foot – they walk on the central two (the hooves), while the outer two (the dewclaws) are much smaller and rarely touch the ground. 
The bovids show great variation in size: the gaur is among the largest and heaviest, while the royal antelope, in sharp contrast, is among the smallest, as is the klipspringer, another small antelope. Differences occur in pelage colouration, ranging from pale white (as in the Arabian oryx) to black (as in the black wildebeest). However, only the intermediate shades, such as brown and reddish brown (as in the reedbuck), are commonly observed. In several species, females and juveniles exhibit a light-coloured coat, while that of males darkens with age. As in the wildebeest, the coat may be marked with prominent or faint stripes. In some species, such as the addax, the coat colour can vary with the season. Scent glands and sebaceous glands are often present. Some species, such as the gemsbok, sable antelope, and Grant's gazelle, are camouflaged with strongly disruptive facial markings that conceal the highly recognisable eye. Many species, such as gazelles, may be made to look flat, and hence blend into the background, by countershading. The outlines of many bovids are broken up with bold disruptive colouration, the strongly contrasting patterns helping to delay recognition by predators. However, all the Hippotraginae (including the gemsbok) have pale bodies and faces with conspicuous markings. The zoologist Tim Caro describes this as difficult to explain, but given that these species are diurnal, he suggests that the markings may function in communication. Strongly contrasting leg colouration is common only in the Bovidae, where for example Bos, Ovis, the bontebok and the gemsbok have white stockings. Again, communication is the likely function. Excepting some domesticated forms, all male bovids have horns, and in many species, females, too, possess horns. 
The size and shape of the horns vary greatly, but the basic structure is a pair of simple bony protrusions without branches, often having a spiral, twisted, or fluted form, each covered in a permanent sheath of keratin. Although horns occur in a single pair on almost all bovid species, there are exceptions such as the four-horned antelope and the Jacob sheep. The unique horn structure is the only unambiguous morphological feature of bovids that distinguishes them from other pecorans. A high correlation exists between horn morphology and the fighting behaviour of the individual. For instance, long horns are suited to wrestling and fencing, whereas curved horns are used in ramming. Males with horns directed inwards tend to be monogamous and solitary, while those with horns directed outwards tend to be polygynous. These results were independent of body size. Male horn development has been linked to sexual selection: horns are small spikes in the monogamous duikers and other small antelopes, whereas in polygynous species they are large and elaborately formed (for example in a spiral structure, as in the giant eland). Thus, to some extent, horns reflect the degree of competition among males in a species. However, the presence of horns in females is likely due to natural selection. The horns of females are usually smaller than those of males, and are sometimes of a different shape. The horns of female bovids are believed to have evolved for defence against predators or to express territoriality, as non-territorial females, which are able to use crypsis for predator defence, often do not have horns. Females possess horns in only half of the bovid genera, and females in these genera are heavier than those in the rest. Females use horns mainly for stabbing.

Anatomy

In bovids, the third and fourth metapodials are combined into the cannon bone. The ulna and fibula are reduced, and fused with the radius and tibia, respectively. Long scapulae are present, whereas the clavicles are absent. 
Being ruminants, bovids have a stomach composed of four chambers: the rumen (80%), the omasum, the reticulum, and the abomasum. The ciliates and bacteria of the rumen ferment complex cellulose into simpler fatty acids, which are then absorbed through the rumen wall. Bovids have a long small intestine. Body temperature fluctuates through the day; in goats, for instance, it is slightly lower in the early morning than in the afternoon. Temperature is regulated through sweating in cattle, whereas goats use panting for the same purpose. The right lung, consisting of four to five lobes, is around 1.5 times larger than the left, which has three lobes.

Dentition

Most bovids bear 30 to 32 teeth. While the upper incisors are absent, the upper canines are either reduced or absent. Instead of the upper incisors, bovids have a thick and tough layer of tissue, called the dental pad, that provides a surface to grip grasses and foliage. They are hypsodont and selenodont: the molars and premolars are high-crowned and bear crescent-shaped cusps. The lower incisors and canines project forward. The incisors are followed by a long toothless gap, known as the diastema. The general dental formula for bovids is 0.0.3.3 / 3.1.3.3. Most members of the family are herbivorous, but most duikers are omnivorous. Like other ruminants, bovids have four-chambered stomachs, which allow them to digest plant material, such as grass, that cannot be used by many other animals. Ruminants (and some others like kangaroos, rabbits, and termites) are able to use micro-organisms living in their guts to break down cellulose by fermentation.

Ecology and behaviour

The bovids have various methods of social organisation and social behaviour, which are classified into solitary and gregarious behaviour. Further, these types may each be divided into territorial and non-territorial behaviour. 
Small bovids such as the klipspringer, oribi, and steenbok are generally solitary and territorial. They hold small territories into which other members of the species are not allowed to enter. These antelopes form monogamous pairs. Many species such as the dik-dik use pheromone secretions from the preorbital glands, and sometimes dung as well, to mark their territories. The offspring disperse at the time of adolescence, and males must acquire territories prior to mating. The bushbuck is the only bovid that is both solitary and not territorial. This antelope hardly displays aggression, and tends to isolate itself or form loose herds, though in a favourable habitat, several bushbuck may be found quite close to one another. Excluding the cephalophines (duikers), tragelaphines (spiral-horned antelopes) and the neotragines, most African bovids are gregarious and territorial. Males are forced to disperse on attaining sexual maturity, and must form their own territories, while females are not required to do so. Males that do not hold territories form bachelor herds. Competition takes place among males to acquire dominance, and fights tend to be fiercer during limited rutting seasons. With the exception of migratory males, males generally hold the same territory throughout their lives. In the waterbuck, some male individuals, known as "satellite males", may be allowed into the territories of other males and must wait until the owner grows old before they can acquire his territory. Lek mating, where males gather together and competitively display to potential mates, is known to exist among topis, kobs, and lechwes. The tragelaphines, cattle, sheep, and goats are gregarious and not territorial. In these species, males must gain absolute dominance over all other males, and fights are not confined to territories. Males, therefore, spend years in body growth. Activity Most bovids are diurnal, although a few such as the buffalo, bushbuck, reedbuck, and grysbok are exceptions. 
Social activity and feeding usually peak during dawn and dusk. The bovids usually rest before dawn, during midday, and after dark. Grooming is usually by licking with the tongue. Antelopes rarely roll in mud or dust. Wildebeest and buffalo usually wallow in mud, whereas the hartebeest and topi rub their heads and horns in mud and then smear it over their bodies. Bovids use different forms of vocal, olfactory, and tactile communication. These involve varied postures of the neck, head, horns, hair, legs, and ears to convey sexual excitement, emotional state, or alarm. One such expression is the flehmen response. Bovids usually stand motionless, with the head high and an intent stare, when they sense danger. Some, like the impala, kudu, and eland, can even leap to heights of a few feet. Bovids may roar or grunt to caution others and warn off predators. Bovids such as gazelles stot or pronk in response to predators, making high leaps on stiff legs, indicating honestly both that the predator has been seen and that the stotting individual is strong and not worth chasing. In the mating season, rutting males bellow to make their presence known to females. Muskoxen roar during male-male fights, and male saigas force air through their noses, producing a roar to deter rival males and attract females. Mothers also use vocal communication to locate their calves if they get separated. During fights over dominance, males tend to display themselves in an erect posture with a level muzzle. Fighting techniques differ among the bovid families and also depend on their build. While hartebeest fight on their knees, others usually fight on all fours. Gazelles of various sizes use different methods of combat. Gazelles usually box, and in serious fights may clash and fence, delivering hard blows from short range. Male ibex, goats, and sheep rear upright and clash their horns downwards into each other. Wildebeest use powerful head butting in aggressive clashes. 
If horns become entangled, the opponents move in a circular manner to unlock them. Muskoxen will ram into each other at high speeds. As a rule, only two bovids of equal build and level of defence engage in a fight, which is intended to determine the superior of the two. Individuals that are evidently inferior to others would rather flee than fight; for example, immature males do not fight with the mature bulls. Generally, bovids direct their attacks at the opponent's head rather than its body. The S-shaped horns, such as those on the impala, have various sections that help in ramming, holding, and stabbing. Serious fights leading to injury are rare. Diet Most bovids alternately feed and ruminate throughout the day. Concentrate feeders feed and digest in short intervals, whereas roughage feeders take longer ones. Only small species such as the duiker browse for a few hours during day or night. Feeding habits are related to body size; while small bovids forage in dense and closed habitat, larger species feed upon high-fibre vegetation in open grasslands. Subfamilies exhibit different feeding strategies. While Bovinae species graze extensively on fresh grass and diffused forage, Cephalophinae species (with the exception of Sylvicapra) primarily consume fruits. Reduncinae and Hippotraginae species depend on unstable food sources, but the latter are specially adapted to arid areas. Members of Caprinae, being flexible feeders, forage even in areas with low productivity. The tribes Alcelaphini, Hippotragini, and Reduncini have high proportions of monocots in their diets. By contrast, Tragelaphini and Neotragini (with the exception of Ourebia) feed extensively on dicots. No conspicuous relationship exists between body size and consumption of monocots. Sexuality and reproduction Most bovids are polygynous. In a few species, individuals are monogamous, resulting in minimal male-male aggression and reduced selection for large body size in males. 
Thus, sexual dimorphism is almost absent in these species. Females may be slightly larger than males, possibly due to competition among females for the acquisition of territories. This is the case in duikers and other small bovids. The time taken for the attainment of sexual maturity by either sex varies broadly among bovids. Sexual maturity may even precede or follow mating. For instance, impala males, though sexually mature by a year, can mate only after four years of age. By contrast, Barbary sheep females may give birth to offspring even before they have attained sexual maturity. The delay in male sexual maturation is more visible in sexually dimorphic species, particularly the reduncines, probably due to competition among males. For instance, blue wildebeest females become capable of reproduction within a year or two of birth, while the males become mature only when four years old. All bovids mate at least once a year, and smaller species may even mate twice. Mating seasons occur typically during the rainy months for most bovids. As such, breeding might peak twice a year in equatorial regions. Sheep and goats exhibit remarkable seasonality of reproduction, in which the annual cycle of day length (photoperiod) plays a pivotal role. Other factors that have a significant influence on this cycle include the temperature of the surroundings, nutritional status, social interactions, the date of parturition, and the lactation period. A study of this phenomenon concluded that goats and sheep are short-day breeders. Mating in most sheep breeds begins in summer or early autumn. Mating in sheep is also affected by melatonin, which advances the onset of the breeding season, and thyroxine, which terminates it. Estrus lasts for at most a day in bovids, with the exception of bovines and tragelaphines. Except for the hartebeest and the topi, all bovids can detect estrus in females by testing the urine using the vomeronasal organ. 
Once the male is assured that the female is in estrus, he begins courtship displays; these displays vary greatly, from the elaborate marches of gregarious species to the fervent licking of female genitalia among solitary species. Females, initially not receptive, ultimately mate with the male that has achieved dominance over the others. A receptive female signals her readiness by holding her tail aside and allowing the male to mount. Copulation generally takes a few seconds. Gestation length varies among bovids: duiker gestation ranges from 120 to 150 days, while that of the African buffalo ranges from 300 to 330 days. Usually, a single offspring is born (twins are less frequent), and it is able to stand and run by itself within an hour of birth. In monogamous species, males assist in defending their young, but that is not the case in polygynous species. Most newborn calves remain hidden for a week to two months, regularly nursed by their mothers. In some bovid species, such as the impala, the neonates start following their mothers immediately or within a few days. Different bovids have different strategies for the defence of juveniles. For instance, while wildebeest mothers solely defend their young, buffaloes exhibit collective defence. Weaning might occur as early as two months (as in the royal antelope) or as late as a year (as in the muskox). Lifespan Most wild bovids live for 10 to 15 years. Larger species tend to live longer; for instance, American bison can live up to 25 years and gaur up to 30 years. The mean lifespan of domesticated individuals is nearly ten years. For example, domesticated goats have an average lifespan of 12 years. Usually males, mainly in polygynous species, have shorter lifespans than females. 
This can be attributed to several reasons: early dispersal of young males, aggressive male-male fights, vulnerability to predation (particularly when males are less agile, as in kudu), and malnutrition (being large in size, the male body has high nutritional requirements which may not be satisfied). Richard Despard Estes suggested that females mimic male secondary sexual characteristics like horns to protect their male offspring from dominant males. This feature seems to have been strongly selected to prevent male mortality and imbalanced sex ratios due to attacks by aggressive males and forced dispersal of young males during adolescence. Distribution Most of the diverse bovid species occur in Africa. The maximum concentration is in the savannas of eastern Africa. Depending on their feeding habits, several species have radiated over large stretches of land, and hence several variations in dental and limb morphology are observed. Duikers inhabit the equatorial rainforests, sitatunga and lechwe occur near swamps, eland inhabit grasslands, springbok and oryx occur in deserts, bongo and anoa live in dense forests, and mountain goats and takin live at high altitudes. A few bovid species also occur in Europe, Asia, and North America. Sheep and goats are found primarily in Eurasia, though the Barbary sheep and the ibex form part of the African fauna. The muskox is confined to the arctic tundra. Several bovid species have been domesticated by human beings. The domestication of goats and sheep began 10 thousand years ago, while cattle were domesticated about 7.5 thousand years ago. Interaction with humans Domesticated animals The domestication of bovids has contributed to shifting the dependence of human beings from hunting and gathering to agriculture. The Bovidae includes three domesticated species whose use has spread around the world: cattle, sheep, and goats; all are from Eurasia. 
Other large bovids that have been domesticated but which have less ubiquitous distributions include the domestic buffalo (from the wild water buffalo), domestic yak (from the wild yak), zebu (from the Indian aurochs), gayal (from the gaur) and Bali cattle (from the banteng). Some antelopes have been domesticated, including the oryxes, addax, elands and the extinct bubal hartebeest. In Ancient Egypt, oryxes, addaxes and bubal hartebeests are depicted in wall carvings. The earliest evidence of cattle domestication is from 8000 BC, suggesting that the process began in Cyprus and the Euphrates basin. Animal products Dairy products such as milk, butter, ghee, yoghurt, buttermilk and cheese are manufactured largely from the milk of domestic cattle, though the milk of sheep, goats, yaks, and buffaloes is also used in some parts of the world and for gourmet products. For example, buffalo milk is used to make mozzarella in Italy and the gulab jamun dessert in India, while sheep milk is used to make blue Roquefort cheese in France. Beef is a food source high in zinc, selenium, phosphorus, iron, and B vitamins. Bison meat is lower in fat and cholesterol than beef, but has a higher protein content. Bovid leather is tough and durable, with the additional advantage that it can be made into leathers of varying thicknesses, from soft clothing leather to hard shoe leather. While goat and cattle leather have a wide variety of uses, sheepskin is suited only for clothing purposes. Wool from Merino hoggets is the finest and most valuable. Merino wool is long and very soft. Coarse wools, being durable and resistant to pilling, are used for making tough garments and carpets. Bone meal is an important fertilizer rich in calcium, phosphorus, and nitrogen, and is effective in reducing soil acidity. Bovid horns have been used as drinking vessels since antiquity. In human culture Bovidae have featured in stories since at least the time of Aesop's fables from Ancient Greece around 600 BC. 
Fables by Aesop include The Crow and the Sheep, The Frog and the Ox, and The Wolf and the Lamb. The mythological creature Chimera, depicted as a lion with the head of a goat arising from its back and a tail that might end in a snake's head, was one of the offspring of Typhon and Echidna and a sibling of such monsters as Cerberus and the Lernaean Hydra. The sheep, synonymous with the goat in Chinese mythology, is the eighth animal of the Chinese zodiac and a symbol of filial piety.
https://en.wikipedia.org/wiki/Hyperbolic%20geometry
Hyperbolic geometry
In mathematics, hyperbolic geometry (also called Lobachevskian geometry or Bolyai–Lobachevskian geometry) is a non-Euclidean geometry. The parallel postulate of Euclidean geometry is replaced with: For any given line R and point P not on R, in the plane containing both line R and point P there are at least two distinct lines through P that do not intersect R. (Compare the above with Playfair's axiom, the modern version of Euclid's parallel postulate.) The hyperbolic plane is a plane where every point is a saddle point. Hyperbolic plane geometry is also the geometry of pseudospherical surfaces, surfaces with a constant negative Gaussian curvature. Saddle surfaces have negative Gaussian curvature in at least some regions, where they locally resemble the hyperbolic plane. The hyperboloid model of hyperbolic geometry provides a representation of events one temporal unit into the future in Minkowski space, the basis of special relativity. Each of these events corresponds to a rapidity in some direction. When geometers first realised they were working with something other than the standard Euclidean geometry, they described their geometry under many different names; Felix Klein finally gave the subject the name hyperbolic geometry to include it in the now rarely used sequence elliptic geometry (spherical geometry), parabolic geometry (Euclidean geometry), and hyperbolic geometry. In the former Soviet Union, it is commonly called Lobachevskian geometry, named after one of its discoverers, the Russian geometer Nikolai Lobachevsky. Properties Relation to Euclidean geometry Hyperbolic geometry is more closely related to Euclidean geometry than it seems: the only axiomatic difference is the parallel postulate. When the parallel postulate is removed from Euclidean geometry the resulting geometry is absolute geometry. There are two kinds of absolute geometry, Euclidean and hyperbolic. 
All theorems of absolute geometry, including the first 28 propositions of book one of Euclid's Elements, are valid in Euclidean and hyperbolic geometry. Propositions 27 and 28 of Book One of Euclid's Elements prove the existence of parallel/non-intersecting lines. This difference also has many consequences: concepts that are equivalent in Euclidean geometry are not equivalent in hyperbolic geometry; new concepts need to be introduced. Further, because of the angle of parallelism, hyperbolic geometry has an absolute scale, a relation between distance and angle measurements. Lines Single lines in hyperbolic geometry have exactly the same properties as single straight lines in Euclidean geometry. For example, two points uniquely define a line, and line segments can be infinitely extended. Two intersecting lines have the same properties as two intersecting lines in Euclidean geometry. For example, two distinct lines can intersect in no more than one point, intersecting lines form equal opposite angles, and adjacent angles of intersecting lines are supplementary. When a third line is introduced, then there can be properties of intersecting lines that differ from intersecting lines in Euclidean geometry. For example, given two intersecting lines there are infinitely many lines that do not intersect either of the given lines. These properties are all independent of the model used, even if the lines may look radically different. Non-intersecting / parallel lines Non-intersecting lines in hyperbolic geometry also have properties that differ from non-intersecting lines in Euclidean geometry: For any line R and any point P which does not lie on R, in the plane containing line R and point P there are at least two distinct lines through P that do not intersect R. This implies that there are through P an infinite number of coplanar lines that do not intersect R. 
These non-intersecting lines are divided into two classes: Two of the lines (x and y in the diagram) are limiting parallels (sometimes called critically parallel, horoparallel or just parallel): there is one in the direction of each of the ideal points at the "ends" of R, asymptotically approaching R, always getting closer to R, but never meeting it. All other non-intersecting lines have a point of minimum distance and diverge from both sides of that point, and are called ultraparallel, diverging parallel or sometimes non-intersecting. Some geometers simply use the phrase "parallel lines" to mean "limiting parallel lines", with ultraparallel lines meaning just non-intersecting. These limiting parallels make an angle θ with PB; this angle depends only on the Gaussian curvature of the plane and the distance PB and is called the angle of parallelism. For ultraparallel lines, the ultraparallel theorem states that there is a unique line in the hyperbolic plane that is perpendicular to each pair of ultraparallel lines. Circles and disks In hyperbolic geometry, the circumference of a circle of radius r is greater than 2πr. Let R = 1/√(−K), where K is the Gaussian curvature of the plane. In hyperbolic geometry, K is negative, so the square root is of a positive number. Then the circumference of a circle of radius r is equal to: 2πR sinh(r/R). And the area of the enclosed disk is: 4πR² sinh²(r/(2R)). Therefore, in hyperbolic geometry the ratio of a circle's circumference to its radius is always strictly greater than 2π, though it can be made arbitrarily close by selecting a small enough circle. If the Gaussian curvature of the plane is −1 then the geodesic curvature of a circle of radius r is: coth(r). Hypercycles and horocycles In hyperbolic geometry, there is no line whose points are all equidistant from another line. Instead, the points that are all the same distance from a given line lie on a curve called a hypercycle. 
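The circumference and area relations above can be checked numerically. The following Python sketch is illustrative, assuming the standard closed forms C = 2πR sinh(r/R) and A = 4πR² sinh²(r/(2R)) for a plane of curvature K = −1/R²:

```python
import math

def circle_circumference(r, R=1.0):
    # Circumference of a hyperbolic circle of radius r on a plane of
    # Gaussian curvature K = -1/R**2: C = 2*pi*R*sinh(r/R).
    return 2 * math.pi * R * math.sinh(r / R)

def disk_area(r, R=1.0):
    # Area of the enclosed disk: A = 4*pi*R**2*sinh(r/(2*R))**2.
    return 4 * math.pi * R**2 * math.sinh(r / (2 * R))**2

# The ratio of circumference to radius always exceeds 2*pi ...
for r in (0.01, 1.0, 5.0):
    assert circle_circumference(r) / r > 2 * math.pi
# ... but approaches 2*pi as the circle shrinks.
assert abs(circle_circumference(1e-8) / 1e-8 - 2 * math.pi) < 1e-9
```

Because sinh grows exponentially, both quantities vastly exceed their Euclidean counterparts for large r, which is why small circles look "almost Euclidean" while large ones do not.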
Another special curve is the horocycle, whose normal radii (perpendicular lines) are all limiting parallel to each other (all converge asymptotically in one direction to the same ideal point, the centre of the horocycle). Through every pair of points there are two horocycles. The centres of the horocycles are the ideal points of the perpendicular bisector of the line-segment between them. Given any three distinct points, they all lie on either a line, hypercycle, horocycle, or circle. The length of a line-segment is the shortest length between two points. The arc-length of a hypercycle connecting two points is longer than that of the line segment and shorter than that of the horocycle arc connecting the same two points. The lengths of the arcs of both horocycles connecting two points are equal, and are longer than the arc-length of any hypercycle connecting the points and shorter than the arc of any circle connecting the two points. If the Gaussian curvature of the plane is −1, then the geodesic curvature of a horocycle is 1 and that of a hypercycle is between 0 and 1. Triangles Unlike Euclidean triangles, where the angles always add up to π radians (180°, a straight angle), in hyperbolic space the sum of the angles of a triangle is always strictly less than π radians (180°). The difference is called the defect. Generally, the defect of a convex hyperbolic polygon with n sides is its angle sum subtracted from (n − 2)π. The area of a hyperbolic triangle is given by its defect in radians multiplied by R², which is also true for all convex hyperbolic polygons. Therefore all hyperbolic triangles have an area less than or equal to R²π. The area of a hyperbolic ideal triangle in which all three angles are 0° is equal to this maximum. As in Euclidean geometry, each hyperbolic triangle has an incircle. In hyperbolic space, if all three of its vertices lie on a horocycle or hypercycle, then the triangle has no circumscribed circle. 
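The defect-to-area relation lends itself to a quick numerical sketch, assuming the standard Gauss–Bonnet form (area = R² × defect):

```python
import math

def hyperbolic_triangle_area(alpha, beta, gamma, R=1.0):
    # Area of a hyperbolic triangle from its angle defect (radians):
    # area = R**2 * (pi - (alpha + beta + gamma)), by Gauss-Bonnet.
    defect = math.pi - (alpha + beta + gamma)
    if defect <= 0:
        raise ValueError("hyperbolic angle sums are strictly below pi")
    return R**2 * defect

# A triangle with three 45-degree angles has defect pi/4.
assert abs(hyperbolic_triangle_area(*[math.pi / 4] * 3) - math.pi / 4) < 1e-12
# An ideal triangle (all angles 0) attains the maximum area pi * R**2.
assert abs(hyperbolic_triangle_area(0.0, 0.0, 0.0) - math.pi) < 1e-12
```

Note that the side lengths never appear: in hyperbolic geometry the three angles alone determine a triangle's area (and, up to congruence, the triangle itself).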
As in spherical and elliptical geometry, in hyperbolic geometry if two triangles are similar, they must be congruent. Regular apeirogon and pseudogon Special polygons in hyperbolic geometry are the regular apeirogon and pseudogon, uniform polygons with an infinite number of sides. In Euclidean geometry, the only way to construct such a polygon is to make the side lengths tend to zero, so that the apeirogon is indistinguishable from a circle, or to make the interior angles tend to 180°, so that the apeirogon approaches a straight line. However, in hyperbolic geometry, a regular apeirogon or pseudogon has sides of any length (i.e., it remains a polygon with noticeable sides). The side and angle bisectors will, depending on the side length and the angle between the sides, be limiting or diverging parallel. If the bisectors are limiting parallel, then it is an apeirogon and can be inscribed and circumscribed by concentric horocycles. If the bisectors are diverging parallel, then it is a pseudogon and can be inscribed and circumscribed by hypercycles (all vertices are the same distance from a line, the axis; the midpoints of the side segments are likewise all equidistant from that axis). Tessellations As in the Euclidean plane, it is also possible to tessellate the hyperbolic plane with regular polygons as faces. There are an infinite number of uniform tilings based on the Schwarz triangles (p q r) where 1/p + 1/q + 1/r < 1, where p, q, r are each orders of reflection symmetry at three points of the fundamental domain triangle; the symmetry group is a hyperbolic triangle group. There are also infinitely many uniform tilings that cannot be generated from Schwarz triangles, some for example requiring quadrilaterals as fundamental domains. Standardized Gaussian curvature Though hyperbolic geometry applies for any surface with a constant negative Gaussian curvature, it is usual to assume a scale in which the curvature K is −1. This results in some formulas becoming simpler. 
Some examples are: The area of a triangle is equal to its angle defect in radians. The area of a horocyclic sector is equal to the length of its horocyclic arc. An arc of a horocycle such that a line tangent at one endpoint is limiting parallel to the radius through the other endpoint has a length of 1. The ratio of the arc lengths between two radii of two concentric horocycles, where the horocycles are a distance 1 apart, is e : 1. Cartesian-like coordinate systems Compared to Euclidean geometry, hyperbolic geometry presents many difficulties for a coordinate system: the angle sum of a quadrilateral is always less than 360°; there are no equidistant lines, so a proper rectangle would need to be enclosed by two lines and two hypercycles; parallel-transporting a line segment around a quadrilateral causes it to rotate when it returns to the origin; etc. There are, however, different coordinate systems for hyperbolic plane geometry. All are based around choosing a point (the origin) on a chosen directed line (the x-axis), after which many choices exist. The Lobachevsky coordinates x and y are found by dropping a perpendicular onto the x-axis. x will be the label of the foot of the perpendicular. y will be the distance along the perpendicular of the given point from its foot (positive on one side and negative on the other). Another coordinate system measures the distance from the point to the horocycle through the origin centered around and the length along this horocycle. Other coordinate systems use the Klein model or the Poincaré disk model described below, and take the Euclidean coordinates as hyperbolic. Distance A Cartesian-like coordinate system (x, y) on the oriented hyperbolic plane is constructed as follows. Choose a line in the hyperbolic plane together with an orientation and an origin o on this line. 
Then: the x-coordinate of a point is the signed distance of its projection onto the line (the foot of the perpendicular segment to the line from that point) to the origin; the y-coordinate is the signed distance from the point to the line, with the sign according to whether the point is on the positive or negative side of the oriented line. The distance between two points represented by (x_i, y_i), i = 1, 2, in this coordinate system is dist = arcosh(cosh y_1 cosh(x_2 − x_1) cosh y_2 − sinh y_1 sinh y_2). This formula can be derived from the formulas about hyperbolic triangles. The corresponding metric tensor field is: ds² = cosh²(y) dx² + dy². In this coordinate system, straight lines take one of these forms ((x, y) is a point on the line; x0, y0, A, and α are parameters): ultraparallel to the x-axis asymptotically parallel on the negative side asymptotically parallel on the positive side intersecting perpendicularly intersecting at an angle α Generally, these equations will only hold in a bounded domain (of x values). At the edge of that domain, the value of y blows up to ±infinity. History Since the publication of Euclid's Elements circa 300 BC, many geometers tried to prove the parallel postulate. Some tried to prove it by assuming its negation and trying to derive a contradiction. Foremost among these were Proclus, Ibn al-Haytham (Alhacen), Omar Khayyám, Nasīr al-Dīn al-Tūsī, Witelo, Gersonides, Alfonso, and later Giovanni Gerolamo Saccheri, John Wallis, Johann Heinrich Lambert, and Legendre. Their attempts were doomed to failure (as we now know, the parallel postulate is not provable from the other postulates), but their efforts led to the discovery of hyperbolic geometry. The theorems of Alhacen, Khayyam and al-Tūsī on quadrilaterals, including the Ibn al-Haytham–Lambert quadrilateral and Khayyam–Saccheri quadrilateral, were the first theorems on hyperbolic geometry. Their works on hyperbolic geometry had a considerable influence on its development among later European geometers, including Witelo, Gersonides, Alfonso, John Wallis and Saccheri. 
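Returning to the Cartesian-like coordinates described earlier, their distance formula can be sanity-checked numerically. This sketch assumes curvature K = −1 and the closed form dist = arcosh(cosh y₁ cosh(x₂ − x₁) cosh y₂ − sinh y₁ sinh y₂):

```python
import math

def coord_distance(p, q):
    # Hyperbolic distance between p = (x1, y1) and q = (x2, y2) in the
    # Cartesian-like coordinates, assuming curvature K = -1:
    # dist = arcosh(cosh(y1)*cosh(x2 - x1)*cosh(y2) - sinh(y1)*sinh(y2))
    (x1, y1), (x2, y2) = p, q
    return math.acosh(math.cosh(y1) * math.cosh(x2 - x1) * math.cosh(y2)
                      - math.sinh(y1) * math.sinh(y2))

# Along the x-axis itself (y = 0) the formula reduces to |x2 - x1| ...
assert abs(coord_distance((0.0, 0.0), (3.0, 0.0)) - 3.0) < 1e-12
# ... and along a common perpendicular to the axis, to |y2 - y1|.
assert abs(coord_distance((1.0, -2.0), (1.0, 5.0)) - 7.0) < 1e-9
```

Both special cases fall out because cosh(0) = 1 and cosh a cosh b − sinh a sinh b = cosh(a − b); off-axis, the cosh-weighting of the x-term reflects the metric ds² above.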
In the 18th century, Johann Heinrich Lambert introduced the hyperbolic functions and computed the area of a hyperbolic triangle. 19th-century developments In the 19th century, hyperbolic geometry was explored extensively by Nikolai Lobachevsky, János Bolyai, Carl Friedrich Gauss and Franz Taurinus. Unlike their predecessors, who just wanted to eliminate the parallel postulate from the axioms of Euclidean geometry, these authors realized they had discovered a new geometry. Gauss wrote in an 1824 letter to Franz Taurinus that he had constructed it, but did not publish his work. Gauss called it "non-Euclidean geometry", causing several modern authors to continue to consider "non-Euclidean geometry" and "hyperbolic geometry" to be synonyms. Taurinus published results on hyperbolic trigonometry in 1826 and argued that hyperbolic geometry is self-consistent, but still believed in the special role of Euclidean geometry. The complete system of hyperbolic geometry was published by Lobachevsky in 1829/1830, while Bolyai discovered it independently and published in 1832. In 1868, Eugenio Beltrami provided models of hyperbolic geometry, and used this to prove that hyperbolic geometry was consistent if and only if Euclidean geometry was. The term "hyperbolic geometry" was introduced by Felix Klein in 1871. Klein followed an initiative of Arthur Cayley to use the transformations of projective geometry to produce isometries. The idea used a conic section or quadric to define a region, and used the cross-ratio to define a metric. The projective transformations that leave the conic section or quadric stable are the isometries. "Klein showed that if the Cayley absolute is a real curve then the part of the projective plane in its interior is isometric to the hyperbolic plane..." Philosophical consequences The discovery of hyperbolic geometry had important philosophical consequences. 
Before its discovery many philosophers (such as Hobbes and Spinoza) viewed philosophical rigour in terms of the "geometrical method", referring to the method of reasoning used in Euclid's Elements. Kant in the Critique of Pure Reason concluded that space (in Euclidean geometry) and time are not discovered by humans as objective features of the world, but are part of an unavoidable systematic framework for organizing our experiences. It is said that Gauss did not publish anything about hyperbolic geometry out of fear of the "uproar of the Boeotians" (stereotyped as dullards by the ancient Athenians), which would ruin his status as princeps mathematicorum (Latin, "the Prince of Mathematicians"). The "uproar of the Boeotians" came and went, and gave an impetus to great improvements in mathematical rigour, analytical philosophy and logic. Hyperbolic geometry was finally proved consistent and is therefore another valid geometry. Geometry of the universe (spatial dimensions only) Because Euclidean, hyperbolic and elliptic geometry are all consistent, the question arises: which is the real geometry of space, and if it is hyperbolic or elliptic, what is its curvature? Lobachevsky had already tried to measure the curvature of the universe by measuring the parallax of Sirius and treating Sirius as the ideal point of an angle of parallelism. He realized that his measurements were not precise enough to give a definite answer, but he did reach the conclusion that if the geometry of the universe is hyperbolic, then the absolute length is at least one million times the diameter of Earth's orbit (about 10 parsecs). Some argue that his measurements were methodologically flawed. Henri Poincaré, with his sphere-world thought experiment, came to the conclusion that everyday experience does not necessarily rule out other geometries. The geometrization conjecture gives a complete list of eight possibilities for the fundamental geometry of our space. 
The problem in determining which one applies is that, to reach a definitive answer, we need to be able to look at extremely large shapes – much larger than anything on Earth or perhaps even in our galaxy. Geometry of the universe (special relativity) Special relativity places space and time on equal footing, so that one considers the geometry of a unified spacetime instead of considering space and time separately. Minkowski geometry replaces Galilean geometry (which is the 3-dimensional Euclidean space with time of Galilean relativity). In relativity, rather than Euclidean, elliptic and hyperbolic geometry, the appropriate geometries to consider are Minkowski space, de Sitter space and anti-de Sitter space, corresponding to zero, positive and negative curvature respectively. Hyperbolic geometry enters special relativity through rapidity, which stands in for velocity, and is expressed by a hyperbolic angle. The study of this velocity geometry has been called kinematic geometry. The space of relativistic velocities has a three-dimensional hyperbolic geometry, where the distance function is determined from the relative velocities of "nearby" points (velocities). Physical realizations of the hyperbolic plane There exist various pseudospheres in Euclidean space that have a finite area of constant negative Gaussian curvature. By Hilbert's theorem, one cannot isometrically immerse a complete hyperbolic plane (a complete regular surface of constant negative Gaussian curvature) in a 3-D Euclidean space. Other useful models of hyperbolic geometry exist in Euclidean space, in which the metric is not preserved. A particularly well-known paper model based on the pseudosphere is due to William Thurston. The art of crochet has been used to demonstrate hyperbolic planes, the first such demonstration having been made by Daina Taimiņa. 
In 2000, Keith Henderson demonstrated a quick-to-make paper model dubbed the "hyperbolic soccerball" (more precisely, a truncated order-7 triangular tiling). Instructions on how to make a hyperbolic quilt, designed by Helaman Ferguson, have been made available by Jeff Weeks. Models of the hyperbolic plane Various pseudospheres – surfaces with constant negative Gaussian curvature – can be embedded in 3-D space under the standard Euclidean metric, and so can be made into tangible models. Of these, the tractoid (or pseudosphere) is the best known; using the tractoid as a model of the hyperbolic plane is analogous to using a cone or cylinder as a model of the Euclidean plane. However, the entire hyperbolic plane cannot be embedded into Euclidean space in this way, and various other models are more convenient for abstractly exploring hyperbolic geometry. There are four models commonly used for hyperbolic geometry: the Klein model, the Poincaré disk model, the Poincaré half-plane model, and the Lorentz or hyperboloid model. These models define a hyperbolic plane which satisfies the axioms of a hyperbolic geometry. Despite their names, the first three mentioned above were introduced as models of hyperbolic space by Beltrami, not by Poincaré or Klein. All these models are extendable to more dimensions. The Beltrami–Klein model The Beltrami–Klein model, also known as the projective disk model, Klein disk model and Klein model, is named after Eugenio Beltrami and Felix Klein. For the two dimensions this model uses the interior of the unit circle for the complete hyperbolic plane, and the chords of this circle are the hyperbolic lines. For higher dimensions this model uses the interior of the unit ball, and the chords of this n-ball are the hyperbolic lines. This model has the advantage that lines are straight, but the disadvantage that angles are distorted (the mapping is not conformal), and also circles are not represented as circles. 
The distance in this model is half the logarithm of the cross-ratio, which was introduced by Arthur Cayley in projective geometry. The Poincaré disk model The Poincaré disk model, also known as the conformal disk model, also employs the interior of the unit circle, but lines are represented by arcs of circles that are orthogonal to the boundary circle, plus diameters of the boundary circle. This model preserves angles, and is thereby conformal. All isometries within this model are therefore Möbius transformations. Circles entirely within the disk remain circles, although the Euclidean center of the circle is closer to the center of the disk than is the hyperbolic center of the circle. Horocycles are circles within the disk which are tangent to the boundary circle, minus the point of contact. Hypercycles are open-ended chords and circular arcs within the disc that terminate on the boundary circle at non-orthogonal angles. The Poincaré half-plane model The Poincaré half-plane model takes one-half of the Euclidean plane, bounded by a line B of the plane, to be a model of the hyperbolic plane. The line B is not included in the model. The Euclidean plane may be taken to be a plane with the Cartesian coordinate system, with the x-axis taken as line B and the half-plane being the upper half (y > 0) of this plane. Hyperbolic lines are then either half-circles orthogonal to B or rays perpendicular to B. The length of an interval on a ray is given by logarithmic measure, so it is invariant under a homothetic transformation. Like the Poincaré disk model, this model preserves angles, and is thus conformal. All isometries within this model are therefore Möbius transformations of the plane. The half-plane model is the limit of the Poincaré disk model whose boundary is tangent to B at the same point while the radius of the disk model goes to infinity. 
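The half-plane model's distance has a standard closed form, d = arcosh(1 + ((x2 - x1)^2 + (y2 - y1)^2) / (2 y1 y2)), which makes the logarithmic measure and the homothety invariance easy to check numerically; a minimal sketch in plain Python (function names are illustrative):

```python
import math

def half_plane_distance(p, q):
    """Hyperbolic distance between p = (x1, y1) and q = (x2, y2), with y > 0."""
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1.0 + ((x2 - x1)**2 + (y2 - y1)**2) / (2.0 * y1 * y2))

# Points on the same vertical ray: the distance is the log of the height
# ratio, illustrating the logarithmic measure described above.
assert math.isclose(half_plane_distance((0.0, 1.0), (0.0, math.e)), 1.0)

# Invariance under the homothety z -> 2z, which is a hyperbolic isometry
# of this model (every coordinate doubles, so the formula is unchanged).
p, q = (0.3, 1.0), (1.2, 2.5)
p2, q2 = (0.6, 2.0), (2.4, 5.0)
assert math.isclose(half_plane_distance(p, q), half_plane_distance(p2, q2))
```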
The hyperboloid model The hyperboloid model or Lorentz model employs a 2-dimensional hyperboloid of revolution (of two sheets, but using one) embedded in 3-dimensional Minkowski space. This model is generally credited to Poincaré, but Reynolds says that Wilhelm Killing used this model in 1885. This model has direct application to special relativity, as Minkowski 3-space is a model for spacetime, suppressing one spatial dimension. One can take the hyperboloid to represent the events (positions in spacetime) that various inertially moving observers, starting from a common event, will reach in a fixed proper time. The hyperbolic distance between two points on the hyperboloid can then be identified with the relative rapidity between the two corresponding observers. The model generalizes directly to an additional dimension: hyperbolic 3-space relates in the same way to Minkowski 4-space. The hemisphere model The hemisphere model is not often used as a model by itself, but it functions as a useful tool for visualizing transformations between the other models. The hemisphere model uses the upper half of the unit sphere; the hyperbolic lines are half-circles orthogonal to the boundary of the hemisphere. The hemisphere model is part of a Riemann sphere, and different projections give different models of the hyperbolic plane: stereographic projection from onto the plane projects corresponding points on the Poincaré disk model; stereographic projection from onto the surface projects corresponding points on the hyperboloid model; stereographic projection from onto the plane projects corresponding points on the Poincaré half-plane model; orthographic projection onto a plane projects corresponding points on the Beltrami–Klein model; and central projection from the centre of the sphere onto the plane projects corresponding points on the Gans model. Connection between the models All models essentially describe the same structure. 
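That sameness can be checked numerically: a sketch in plain Python (the Minkowski-form distance and the hyperboloid-to-disk projection are the standard formulas for these models, and the function names are illustrative) computes a distance on the hyperboloid and recovers the same value in the Poincaré disk chart.

```python
import math

def minkowski(u, v):
    """Bilinear form of signature (+, -, -) on Minkowski 3-space."""
    return u[0] * v[0] - u[1] * v[1] - u[2] * v[2]

def hyperboloid_distance(u, v):
    """Distance between two points on the upper sheet of norm-1 vectors."""
    return math.acosh(minkowski(u, v))

def to_poincare_disk(p):
    """Stereographic projection of a hyperboloid point into the unit disk."""
    x0, x1, x2 = p
    return (x1 / (1.0 + x0), x2 / (1.0 + x0))

def disk_distance_from_center(z):
    """Poincaré disk distance from the origin to the point z."""
    return 2.0 * math.atanh(math.hypot(*z))

# The observer who stays at the common event, and one moving with rapidity 1:
rest = (1.0, 0.0, 0.0)
moving = (math.cosh(1.0), math.sinh(1.0), 0.0)

# The hyperbolic distance equals the relative rapidity ...
assert math.isclose(hyperboloid_distance(rest, moving), 1.0)
# ... and the Poincaré disk chart assigns the same distance.
assert math.isclose(disk_distance_from_center(to_poincare_disk(moving)), 1.0)
```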
The difference between them is that they represent different coordinate charts laid down on the same metric space, namely the hyperbolic plane. The characteristic feature of the hyperbolic plane itself is that it has a constant negative Gaussian curvature, which is indifferent to the coordinate chart used. The geodesics are similarly invariant: that is, geodesics map to geodesics under coordinate transformation. Hyperbolic geometry is generally introduced in terms of the geodesics and their intersections on the hyperbolic plane. Once we choose a coordinate chart (one of the "models"), we can always embed it in a Euclidean space of the same dimension, but the embedding is clearly not isometric (since the curvature of Euclidean space is 0). The hyperbolic space can be represented by infinitely many different charts; but the embeddings in Euclidean space due to these four specific charts show some interesting characteristics. Since the four models describe the same metric space, each can be transformed into the other. See, for example: the Beltrami–Klein model's relation to the hyperboloid model, the Beltrami–Klein model's relation to the Poincaré disk model, and the Poincaré disk model's relation to the hyperboloid model. Other models of hyperbolic geometry The Gans model In 1966 David Gans proposed a flattened hyperboloid model in the journal American Mathematical Monthly. It is an orthographic projection of the hyperboloid model onto the xy-plane. This model is not as widely used as other models but nevertheless is quite useful in the understanding of hyperbolic geometry. Unlike the Klein or the Poincaré models, this model utilizes the entire Euclidean plane. The lines in this model are represented as branches of a hyperbola. The conformal square model The conformal square model of the hyperbolic plane arises from using a Schwarz–Christoffel mapping to convert the Poincaré disk into a square. This model has finite extent, like the Poincaré disk. 
However, all of the points are inside a square. This model is conformal, which makes it suitable for artistic applications. The band model The band model employs a portion of the Euclidean plane between two parallel lines. Distance is preserved along one line through the middle of the band. Assuming the band is given by {z ∈ ℂ : |Im z| < π/2}, the metric is given by |dz| sec(Im z). Isometries of the hyperbolic plane Every isometry (transformation or motion) of the hyperbolic plane to itself can be realized as the composition of at most three reflections. In n-dimensional hyperbolic space, up to n+1 reflections might be required. (These facts also hold for Euclidean and spherical geometries, but the classification below is different.) All isometries of the hyperbolic plane can be classified into these classes. Orientation preserving: the identity isometry — nothing moves; zero reflections; zero degrees of freedom. Inversion through a point (half turn) — two reflections through mutually perpendicular lines passing through the given point, i.e. a rotation of 180 degrees around the point; two degrees of freedom. Rotation around a normal point — two reflections through lines passing through the given point (includes inversion as a special case); points move on circles around the center; three degrees of freedom. "Rotation" around an ideal point (horolation) — two reflections through lines leading to the ideal point; points move along horocycles centered on the ideal point; two degrees of freedom. Translation along a straight line — two reflections through lines perpendicular to the given line; points off the given line move along hypercycles; three degrees of freedom. Orientation reversing: reflection through a line — one reflection; two degrees of freedom. Combined reflection through a line and translation along the same line — the reflection and translation commute; three reflections required; three degrees of freedom. Hyperbolic geometry in art M. C. 
Escher's famous prints Circle Limit III and Circle Limit IV illustrate the conformal disc model (Poincaré disk model) quite well. The white lines in III are not quite geodesics (they are hypercycles), but are close to them. It is also possible to see quite plainly the negative curvature of the hyperbolic plane, through its effect on the sum of angles in triangles and squares. For example, in Circle Limit III every vertex belongs to three triangles and three squares. In the Euclidean plane, their angles would sum to 450°; i.e., a circle and a quarter. From this, we see that the sum of angles of a triangle in the hyperbolic plane must be smaller than 180°. Another visible property is exponential growth. In Circle Limit III, for example, one can see that the number of fishes within a distance of n from the center rises exponentially. The fishes have an equal hyperbolic area, so the area of a ball of radius n must rise exponentially in n. The art of crochet has been used to demonstrate hyperbolic planes (pictured above) with the first being made by Daina Taimiņa, whose book Crocheting Adventures with Hyperbolic Planes won the 2009 Bookseller/Diagram Prize for Oddest Title of the Year. HyperRogue is a roguelike game set on various tilings of the hyperbolic plane. Higher dimensions Hyperbolic geometry is not limited to 2 dimensions; a hyperbolic geometry exists for every higher number of dimensions. Homogeneous structure Hyperbolic space of dimension n is a special case of a Riemannian symmetric space of noncompact type, as it is isomorphic to the quotient The orthogonal group acts by norm-preserving transformations on Minkowski space R1,n, and it acts transitively on the two-sheet hyperboloid of norm 1 vectors. Timelike lines (i.e., those with positive-norm tangents) through the origin pass through antipodal points in the hyperboloid, so the space of such lines yields a model of hyperbolic n-space. 
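The exponential growth visible in the Circle Limit prints can be made quantitative: in curvature -1, a disk of hyperbolic radius r has area 2π(cosh r - 1), which grows like e^r. A quick numerical check in plain Python (the area formula is the standard one; the function name is illustrative):

```python
import math

def hyperbolic_disk_area(r):
    """Area of a disk of hyperbolic radius r, in curvature -1."""
    return 2.0 * math.pi * (math.cosh(r) - 1.0)

# Each unit step outward multiplies the area by roughly e, so the number of
# equal-area tiles (the "fishes") within distance n of the center grows
# exponentially in n.
for r in (8.0, 10.0, 12.0):
    ratio = hyperbolic_disk_area(r + 1.0) / hyperbolic_disk_area(r)
    assert abs(ratio - math.e) < 0.01
```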
The stabilizer of any particular line is isomorphic to the product of the orthogonal groups O(n) and O(1), where O(n) acts on the tangent space of a point in the hyperboloid, and O(1) reflects the line through the origin. Many of the elementary concepts in hyperbolic geometry can be described in linear algebraic terms: geodesic paths are described by intersections with planes through the origin, dihedral angles between hyperplanes can be described by inner products of normal vectors, and hyperbolic reflection groups can be given explicit matrix realizations. In small dimensions, there are exceptional isomorphisms of Lie groups that yield additional ways to consider symmetries of hyperbolic spaces. For example, in dimension 2, the isomorphisms allow one to interpret the upper half plane model as the quotient and the Poincaré disc model as the quotient . In both cases, the symmetry groups act by fractional linear transformations, since both groups are the orientation-preserving stabilizers in of the respective subspaces of the Riemann sphere. The Cayley transformation not only takes one model of the hyperbolic plane to the other, but realizes the isomorphism of symmetry groups as conjugation in a larger group. In dimension 3, the fractional linear action of on the Riemann sphere is identified with the action on the conformal boundary of hyperbolic 3-space induced by the isomorphism . This allows one to study isometries of hyperbolic 3-space by considering spectral properties of representative complex matrices. For example, parabolic transformations are conjugate to rigid translations in the upper half-space model, and they are exactly those transformations that can be represented by unipotent upper triangular matrices.
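The final claim can be tested directly: an element of SL(2, C), acting as a fractional linear transformation, is parabolic exactly when its trace squares to 4. A hedged sketch in plain Python (the trace classification is standard; the helper names are not from any particular library):

```python
def classify(m, tol=1e-9):
    """Classify a matrix ((a, b), (c, d)) with ad - bc = 1 by its trace."""
    (a, b), (c, d) = m
    assert abs(complex(a * d - b * c) - 1) < tol, "determinant must be 1"
    tr = complex(a + d)
    if abs(tr.imag) < tol and tr.real ** 2 < 4 - tol:
        return "elliptic"      # conjugate to a rotation
    if abs(tr * tr - 4) < tol:
        return "parabolic"     # conjugate to the rigid translation z -> z + 1
    return "loxodromic"        # moves every point of hyperbolic 3-space

# The unipotent upper triangular matrix mentioned above is parabolic:
assert classify(((1, 1), (0, 1))) == "parabolic"
# A diagonal matrix with real trace > 2 translates along a geodesic:
assert classify(((2, 0), (0, 0.5))) == "loxodromic"
# A quarter turn is elliptic:
assert classify(((0, -1), (1, 0))) == "elliptic"
```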
Mathematics
Non-Euclidean geometry
null
241294
https://en.wikipedia.org/wiki/Frilled%20shark
Frilled shark
The frilled shark (Chlamydoselachus anguineus), also known as the lizard shark, is one of the two extant species of shark in the family Chlamydoselachidae (the other is the southern African frilled shark, Chlamydoselachus africana). The frilled shark is considered a living fossil, because of its primitive, anguilliform (eel-like) physical traits, such as a dark-brown color, amphistyly (the articulation of the jaws to the cranium), and a –long body, which has dorsal, pelvic, and anal fins located towards the tail. The common name, frilled shark, derives from the fringed appearance of the six pairs of gill slits at the shark's throat. The two species of frilled shark are distributed throughout regions of the Atlantic and the Pacific oceans, usually in the waters of the outer continental shelf and of the upper continental slope, where the sharks usually live near the ocean floor, near biologically productive areas of the ecosystem. To live on a diet of cephalopods, smaller sharks, and bony fish, the frilled shark practices diel vertical migration to feed at night at the surface of the ocean. When hunting food, the frilled shark curls its tail against a rock and moves like an eel, bending and lunging to capture and swallow whole prey with its long and flexible jaws, which are equipped with 300 recurved, needle-like teeth. Reproductively, the two species of frilled shark, C. anguineus and C. africana, are aplacental viviparous animals, born from eggs, without a placental connection to the mother shark. Contained within egg capsules, the shark embryos develop in the body of the mother shark; at birth, the infant sharks emerge from their egg capsules in the uterus, where they feed on yolk. Although it has no distinct breeding season, the gestation period of the frilled shark can be up to 3.5 years long, to produce a litter of 2–15 shark pups. 
Usually caught as bycatch in commercial fishing, the frilled shark has some economic value as meat and as fishmeal; and has been caught from depths of , although its occurrence is uncommon below ; whereas in Suruga Bay, Japan, the frilled shark commonly occurs at depths of . Taxonomy and phylogeny The zoologist Ludwig Döderlein first identified, described, and classified the frilled shark as a discrete species of shark. After three years (1879–1881) of marine research in Japan, Döderlein took two specimen sharks to Vienna, but lost the taxonomic manuscript of the research. Three years later, in the Bulletin of the Essex Institute (vol. XVI, 1884), the zoologist Samuel Garman published the first taxonomy of the frilled shark, based upon his observations, measurements, and descriptions of a –long female shark from Sagami Bay, Japan. In the article "An Extraordinary Shark", Garman classified the new species of shark within its own genus and family, and named it Chlamydoselachus anguineus (eel-like shark with frills). The Graeco–Latin nomenclature of the frilled shark derives from the Greek chlamy (frill) and selachus (shark), and the Latin anguineus (like an eel); besides its common name, the frilled shark also is known as the "lizard shark" and as the "scaffold shark". The frilled shark is considered a "living fossil", because its family lineage dates to the Carboniferous period. Initially, marine scientists considered the frilled shark a living, evolutionary representative of extinct groups of elasmobranchs (rays, sharks, skates, sawfish), because the shark's body featured primitive anatomic traits, such as long jaws with trident-shaped, multi-cusp teeth; amphistyly, the direct articulation of the jaws to the cranium, at a point behind the eyes; and a quasi-cartilaginous notochord (a proto-spinal-column) composed of indistinct vertebrae. 
From that anatomy, Garman proposed that the frilled shark was related to the cladodont sharks of the Cladoselache genus that existed during the Devonian period (419–359 mya) in the Palaeozoic era (541–251 mya). In contrast to Garman's thesis, the ichthyologist Theodore Gill and the paleontologist Edward Drinker Cope suggested that the frilled shark's evolutionary tree indicated relation to the Hybodontiformes (hybodonts), which were the dominant species of shark during the Mesozoic era (252–66 mya); and Cope categorized the Chlamydoselachus anguineus species to the fossil genus Xenacanthus that existed from the late Devonian period to the end of the Triassic period of the Mesozoic era. The anatomic traits of body, muscle, and skeleton phylogenetically place the frilled shark in the neoselachian clade (modern sharks and rays), which relates it to the cow sharks of the order Hexanchiformes. In addition, a genetic analysis conducted by researchers in 2016 also suggests that the species is part of the order Hexanchiformes. Nonetheless, the systematic biologist Shigeru Shirai proposed the Chlamydoselachiformes taxonomic order exclusively for the C. anguineus and C. africana species of frilled sharks. As a marine animal, the frilled shark is a living fossil because of its relatively unchanged anatomy and physique since first appearing in the primeval seas of the Late Jurassic (150 mya) and the Late Cretaceous (c. 95 mya) epochs. In evolutionary terms, the frilled shark is an animal species of recent occurrence in the natural history of the Earth; the earliest discoveries of the fossilized teeth of the Chlamydoselachus anguineus species of shark date to the early Pleistocene epoch (2.58–0.0117 mya). 
In 2009, marine biologists identified, described, and classified the Chlamydoselachus africana (southern African frilled shark) of the Atlantic waters of southern Angola and of southern Namibia as a species of frilled shark different from the Chlamydoselachus anguineus identified in 1884. Habitat and distribution The habitats of the frilled shark include the waters of the outer continental shelf and the upper-to-middle continental slope, favoring upwellings and other biologically productive areas. Usually, the frilled shark lives close to the ocean floor, yet its diet of cephalopods, smaller sharks, and bony fish indicates that it practices diel vertical migration, swimming up to feed at night at the surface of the ocean. In their Atlantic- and Pacific-ocean habitats, frilled sharks practice spatial segregation determined by the individual size, the sex, and the reproductive condition of each shark in the shiver. In Suruga Bay, on the Pacific coast of Honshu, Japan, the frilled shark is most common at the depth of , except in the August-to-November period, when the temperature at the water-layer exceeds , and the sharks swim into deeper, cooler water. In the eastern Atlantic Ocean, the frilled shark occurs off northern Norway, northern Scotland, and western Ireland, ranging from France to Morocco, the archipelago of Madeira, and the coast of Mauritania, in northwest Africa. In the central Atlantic Ocean, the frilled shark has been caught along the region of the Mid-Atlantic Ridge, ranging from north of the Azores islands to the Rio Grande Rise, off southern Brazil, and the Vavilov Ridge, off West Africa. Frilled sharks tend to be very solitary organisms; interacting with multiple individuals of their kind is rare. However, in the late 2000s a large capture was made over an underwater seamount of the Mid-Atlantic Ridge, hauling in over 30 frilled sharks. 
The mass capture of male and female specimens of widely varying sizes suggested that these seamounts serve as mating grounds for the species. In the western Atlantic, the frilled shark occurs in the waters of New England and Georgia, in the US, and in the waters of Suriname, on the northeastern coast of South America. In the western Pacific Ocean, the frilled shark ranges from southeastern Honshu, Japan, to Taiwan, off the coast of China, and south to the coast of New South Wales, Australia, and the islands of Tasmania and New Zealand. In the central and eastern Pacific Ocean, the frilled shark occurs in the regional waters of Hawaii and the coast of California, in the US, and the northern coast of Chile, in western South America. Although it has been caught at the depth of , the frilled shark usually does not occur deeper than . Description The eel-like bodies of C. anguineus and C. africana differ anatomically; C. anguineus has a longer head and shorter gill slits, a spinal column with more vertebrae (160–171 vs. 147), and a lower-intestine spiral valve with more turns (35–49 vs. 26–28) than does C. africana. The skin color of either species ranges from uniformly dark-brown to uniformly grey. In addition, C. anguineus has smaller pectoral fins and a narrower mouth than C. africana. The recorded maximum body-length of a male frilled shark is , and the recorded maximum body-length of a female frilled shark is . The head of the frilled shark is broad and flat, with a short, rounded snout. The nostrils are vertical slits, separated by a flap of skin that forms the incurrent opening and the excurrent opening. The moderately large eyes are horizontal ellipsoids, which have no nictitating membrane (a protective third eyelid). Ligaments articulate the long jaws to the cranium, and the corners of the mouth have neither furrows nor folds. 
The jaws contain 300 trident-shaped teeth, each needle-tooth having a cusp and two cusplets; the rows of teeth are widely spaced, with 19–28 tooth rows in the upper jaw, and 21–29 tooth rows in the lower jaw. Frilled sharks are able to open their jaws wide enough to devour prey considerably larger than themselves, a physical trait also present in gulper eels and viperfish. At the throat, there are six pairs of long gill slits; the first pair of gill slits form a collar, while the extended tips of the gill filaments create a fleshy frill, hence the name frilled shark. The pectoral fins are short and rounded; the single, small dorsal fin has a rounded margin, and is positioned at the far end of the body, approximately opposite the anal fin. The pelvic and the anal fins are large, broad, and rounded, and are positioned towards the tail-end of the frilled shark's body. The very long caudal fin is a triangular tail that has neither a lower lobe nor a ventral notch in the upper lobe, and has a margin equipped with sharp, chisel-shaped dermal denticles, which the shark can enlarge. The underside of the shark's eel-like body features a pair of long, thick folds of skin, separated by a groove, which run the length of the belly; the function of the ventral skin-folds is unknown. In the female frilled shark, the mid-section of the body is longer, with the pelvic fins located closer to the anal fin. Biology and ecology A cartilaginous skeleton and a large liver (filled with low-density lipids) are the mechanical means with which the frilled shark controls and maintains its buoyancy in the deep waters of the ocean. The shark has an open, lateral-line organ system featuring mechanoreceptor hair cells in grooves exposed to the ocean environment; such a basal clade configuration enhances the frilled shark's perception and detection of changes in the movement, the vibration, and the pressure of the surrounding water. 
Like all animals, the frilled shark is afflicted by parasites, such as the tapeworm Monorygma, the trematode Otodistomum veliporum, and the nematode Mooleptus rabuka; and by predators, such as other sharks, as indicated by missing tail-tips lost to a hungry attacker. In New Zealand, the Takatika Grit, in the Chatham Islands, yielded frilled-shark, bird, and conifer-cone fossils that dated to the time of the Cretaceous–Paleogene boundary (66.043 ± 0.011 mya), which suggested that the sharks lived inland, in shallow bodies of water far from the ocean. The shallow-water frilled shark had larger, stronger teeth, suitable for eating mollusks; periods of scarcity and abundance of food are reflected in the teeth's morphology of sharper points (cusps) oriented into the mouth. From the Late Paleocene epoch (66–56 mya) until the contemporary era, other species of sharks out-matched the Chlamydoselachus sharks in competition for feeding grounds and living space, which restricted their geographic distribution to the deep-water ocean. Regarding the frilled shark's survival of the mass-extinction event which occurred at the Cretaceous–Paleogene time-boundary, one hypothesis proposes that the sharks survived in bodies of shallow water, both inland and on the continental shelf; afterwards, the frilled shark migrated to deep-water habitats. Diet The frilled shark eats a diet of cephalopods, nudibranchs, smaller sharks, and bony fish; 60 percent of the diet is composed of squid varieties, such as Chiroteuthis, Histioteuthis, Onychoteuthis, Sthenoteuthis, and Todarodes; and other sharks, as indicated by the stomach contents of a –long frilled shark which had swallowed a Japanese catshark (Apristurus japonicus). The frilled shark's strong tendency to consume primarily the squids in its habitat is supported by the frequent observation of beak remnants left behind during digestion. 
Because frilled sharks live on the ocean floor, they may also feed on carrion floating down from the surface. In hunting and eating prey that are tired, exhausted, or dying (after spawning), the frilled shark's physiology suggests that it may curve its anguilline body, and brace its rear fins against the water, for leverage to effect a rapid-strike bite that captures the prey. The wide gape of the distended, long jaws allows devouring whole prey that are more than half the size of the frilled shark itself. The jaws' 300 recurved teeth (19–28 upper rows and 21–29 lower rows) readily snag and capture the soft body and tentacles of a cephalopod, especially when the rows of trident-shaped teeth are rotated outwards as the jaws are opened and protruded. Moreover, unlike the strong bite of sharks with an underslung jaw attached below the cranium, the frilled shark has a relatively weak bite, because of the limited leverage and force possible with long jaws that are directly articulated to the cranium, at a point behind the eyes. The behavior of captive specimen sharks suggests that the frilled shark also hunts with its mouth open, by using the dark-and-light contrast of white teeth and darkness to lure prey into its gaping maw; and also hunts with negative pressure, to suck prey into its maw. Examination of captured frilled sharks has revealed little to no food in their stomachs, which suggests that the frilled shark either has a fast rate of digestion or goes hungry in the long intervals between feedings. Reproduction The extant species of frilled shark, C. anguineus and C. africana, do not have a defined breeding season, because their oceanic habitats register no seasonal influence from the ocean's surface; the male shark reaches sexual maturity when he is long, and the female shark reaches sexual maturity when she is long. 
The mature female shark has two ovaries and a uterus, which is in the right side of her body; ovulation occurs fortnightly; and pregnancy ceases vitellogenesis (yolk formation) and the production of new ova. Both ovulated eggs and early-stage shark embryos are enclosed in ellipsoid egg-cases made of a thin, golden-brown membrane. Reproductively, the frilled shark is an ovoviviparous animal born from an encapsulated egg retained within the mother shark's uterus. During gestation, the shark embryos develop in membranous egg-cases contained within the body of the mother shark; when the infant sharks emerge from their egg capsules in the uterus, they feed on yolk until birth. The frilled-shark embryo is long, has a pointed head, slightly developed jaws, nascent external gills, and possesses all fins. In elasmobranchs generally, the growth of the jaw seems to begin early in the embryonic stage; however, this has been observed not to be the case for frilled sharks, in which the elongation of the jaws seems to begin later in embryonic development. This has led some studies to suggest that the terminal position of the mouth, due to anterior elongation of the jaw, is a derived trait instead of an ancestral one. When the embryo is long, the mother shark expels the egg capsule, at which developmental stage the frilled shark's external gills are developed. Throughout embryonic development, the size of the yolk sac remains constant, until the shark embryo is long, whereupon the sac shrinks until disappearing when the embryo has grown to in length. In the course of pregnancy, the embryo's average rate-of-growth is per month until birth, when the shark pups are long; therefore, the frilled shark's gestation period can be as long as 3.5 years. At birth, a frilled shark's litter comprises 2–15 pups, with an average litter of 6.0 pups. 
Human interactions In pursuit of food, the frilled shark usually is a bycatch of commercial fishing, accidentally caught in the nets used for trawl-, gillnet-, and longline-fishing. In Japan, at Suruga Bay, the frilled shark is usually caught in the gillnets used to catch sea bream and gnomefish, and in the trawl nets used to catch shrimp in the mid-waters of the ocean. Despite being a nuisance fish that damages fishing nets, the frilled shark has some economic and commercial value as fishmeal and as meat. In 2004, marine biologists first observed the frilled shark (Chlamydoselachus anguineus) at the depth of , in its deep-water habitat at the Blake Plateau, off the southeastern coast of the U.S. In 2007, a Japanese fisherman caught a –long female frilled shark at the surface of the ocean and delivered it to the Awashima Marine Park, at Shizuoka city, where the shark died after hours of captivity. In 2014, a trawler fishing-boat caught a –long frilled shark in –deep water at Lakes Entrance, Victoria, Australia; later, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) confirmed that the shark was a Chlamydoselachus anguineus, an eel-like shark with a frill. In 2016, consequent to the depletion of food sources caused by commercial overfishing of the feeding areas of the shark's deep-water habitat, and because of the shark's slow rate of reproduction, the International Union for Conservation of Nature (IUCN) classified the frilled shark as a species near-threatened with extinction, and later reclassified it as a species of Least Concern. In 2018, the New Zealand Threat Classification System identified the frilled shark as an animal "At Risk — Naturally Uncommon", not easily found living in the wild.
Biology and health sciences
Sharks
Animals
241330
https://en.wikipedia.org/wiki/Corn%20snake
Corn snake
The corn snake (Pantherophis guttatus), sometimes called the red rat snake, is a species of North American rat snake in the family Colubridae. The species subdues its small prey by constriction. It is found throughout the southeastern and central United States. Though superficially resembling the venomous copperhead (Agkistrodon contortrix) and often killed as a result of this mistaken identity, the corn snake lacks functional venom and is harmless. The corn snake is beneficial to humans because it helps to control populations of wild rodent pests that damage crops and spread disease. Nomenclature The corn snake is named for the species' regular presence near grain stores, where it preys on mice and rats that eat harvested corn (maize). The Oxford English Dictionary cites this usage as far back as 1675, whilst other sources maintain that the corn snake is so named because the distinctive, nearly checkered pattern of the snake's belly scales resembles the kernels of variegated corn. The genus name Panthērophis literally means "panther snake", in reference to the snake's panther-like skin pattern, from pánthēr "panther" and óphis "snake". The species name is from the Latin meaning "spotted, speckled", again in reference to the snake's skin pattern. Description As an adult, the corn snake may have a total length (including tail) of . In the wild, it usually lives around ten to fifteen years, but in captivity it can live to an age of 23 years or more. The record for the oldest corn snake in captivity was 32 years and 3 months. The natural corn snake is usually orange- or brown-bodied, with large red blotches outlined in black down its back. The belly has distinctive rows of alternating black and white marks. This black-and-white checker pattern is similar to Indian corn (maize), which is where the name "corn snake" may have come from.
The corn snake can be distinguished from a copperhead by the corn snake's brighter colors, slender build, slim head, round pupils, and lack of heat-sensing pits. Taxonomy Until 2002, the corn snake was considered to have two subspecies: the nominate subspecies (P. g. guttatus) described here and the Great Plains rat snake (P. g. emoryi). The latter has since been split off as its own species (P. emoryi), but is still occasionally treated as a subspecies of the corn snake by hobbyists. It has been suggested that P. guttatus be split into three species: the corn snake (P. guttatus), the Great Plains rat snake (P. emoryi, corresponding with the subspecies P. g. emoryi), and Slowinski's corn snake (P. slowinskii, occurring in western Louisiana and adjacent Texas). P. guttatus was previously placed in the genus Elaphe, but Elaphe was found to be paraphyletic by Utiger et al., leading to placement of this species in the genus Pantherophis. The placement of P. guttatus and several related species in Pantherophis rather than in Elaphe has been confirmed by further phylogenetic studies. Many reference materials still use the synonym Elaphe guttata. Molecular data have shown that the corn snake is actually more closely related to kingsnakes (genus Lampropeltis) than it is to the Old World rat snakes (genus Elaphe) with which it was formerly classified. The corn snake has even been bred in captivity with the California kingsnake (Lampropeltis californiae) to produce fertile hybrids known as "jungle corn snakes". Range Natural habitat In the wild, the corn snake prefers habitats such as overgrown fields, forest openings, trees, palmetto flatwoods, and abandoned or seldom-used buildings and farms, from sea level to as high as . Typically, the corn snake remains on the ground until the age of four months, but it can ascend trees, cliffs, and other elevated surfaces. It can be found in the southeastern United States, ranging from New Jersey to the Florida Keys.
In colder regions, the corn snake brumates during winter. However, in the more temperate climate along the coast, it shelters in rock crevices and logs during cold weather. It also can find shelter in small, closed spaces, such as under a house, and come out on warm days to soak up the heat of the sun. During cold weather, the corn snake is less active, so it hunts less. Introduced range Often called the "American corn snake", P. guttatus is a proscribed pest in much of Australia. There are active extermination campaigns and advice for the public in Victoria, New South Wales, and Queensland. Reproduction Corn snakes (along with other colubrids) have been found to reach sexual maturity based on size, as opposed to age. Corn snakes are relatively easy to breed. Although not necessary, they are usually put through a cooling period (also known as brumation) lasting 60–90 days to get them ready for breeding. Corn snakes brumate around in a place where they cannot be disturbed and with little sunlight. Corn snakes usually breed shortly after the winter cooling. The male courts the female primarily with tactile and chemical cues, then everts one of his hemipenes, inserts it into the female, and ejaculates his sperm. If the female is ovulating, the eggs will be fertilized, and she will begin sequestering nutrients into the eggs, then secreting a shell. Egg-laying occurs slightly more than a month after mating, with 12–24 eggs deposited in a warm, moist, hidden location. Once laid, the adult snake abandons the eggs and does not return to them. The eggs are oblong with leathery, flexible shells. About 10 weeks after laying, the young snakes use a specialized scale called an egg tooth to slice slits in the egg shell, from which they emerge at about long. Reproduction in captivity must be managed correctly to reduce the clutch's mortality rate. This includes accurate sexing, establishing proper pre-breeding conditioning, and timely pairing of adults.
Corn snakes are temperate-zone colubrids and share a reproductive pattern in which females increase their feeding during summer and fall. This only applies to corn snakes that are sexually mature, which typically means the snake is around 75 cm (30 inches) in length or weighs 250 g. Diet Like all snakes, corn snakes are carnivorous, and in the wild they eat every few days. While most corn snakes eat small rodents, such as the white-footed mouse, they may also eat other reptiles or amphibians, or climb trees to find unguarded bird eggs. Seasons play a large role in the thermoregulation patterns of corn snakes; thermoregulation is the main mechanism supporting digestion in snakes. During fall, corn snakes maintain a body temperature approximately 3 degrees Celsius higher than the surrounding environment after consuming a meal, while corn snakes in the winter are not seen to thermoregulate after digestion. Corn snakes demonstrate nocturnal patterns and use the warm ground at night to thermoregulate; in captivity, heat mats placed underneath the enclosure replicate this natural heat source. American "rat snakes", such as P. guttatus, had venomous ancestors, which lost their venom after they evolved constriction as a means of prey capture. Intelligence and behavior Like many species of the Colubridae, corn snakes exhibit defensive tail-vibration behavior. Behavioral and chemosensory studies with corn snakes suggest that odor cues are of primary importance for prey detection, whereas visual cues are of secondary importance. A study conducted by Dr. David Holzman of the University of Rochester in 1999 found that snakes' capacity for spatial learning rivals that of birds and rodents. Holzman challenged the typical testing method that biologists had been using to examine snakes' navigational abilities, claiming the structure of the arena itself was biologically biased in favor of rodents.
He hypothesized that if the typical arena used to test the animals were modified to cater to snakes' instinctive goals, providing them with problem sets they would likely encounter in their natural environment, it would give a more accurate view of their intelligence. The study involved testing 24 captive-bred corn snakes, placing them in an open tub with walls too high for them to climb out. Eight holes were cut out underneath, with one hole leading to a shelter. An intense light was positioned to shine directly on the arena, exploiting the snakes' natural aversion to bright open spaces. This provided a biologically meaningful objective for the snakes: to seek out dark shelter. The study found that when given the incentive of finding shelter, the snakes exhibited an acute ability to learn and navigate their surroundings. It also found that snakes rely on their sense of vision much more than many herpetologists had previously assumed, and that younger snakes located the holes more quickly than older snakes, as the younger snakes were more resourceful in applying their senses while the older snakes relied more heavily on their sense of sight. In captivity Corn snakes are one of the most popular types of snakes to keep in captivity or as pets, second only to the ball python. Outside of their native range, they are a popular pet snake in Brazil, where they risk becoming an invasive species. Their size, calm temperament, and ease of care contribute to this popularity. Captive corn snakes tolerate being handled by their owners, even for extended periods. Variations After many generations of selective breeding, captive-bred corn snakes are found in a wide variety of colors and patterns. These result from recombining the dominant and recessive genes that code for proteins involved in chromatophore development, maintenance, or function.
New variations, or morphs, become available every year as breeders gain a better understanding of the genetics involved. Color morphs Normal / Carolina / Wildtype – Orange with black lines around red-colored saddle markings going down the back, and with black-and-white checkered bellies. Regional diversity is found in wild-caught corn snakes, the most popular variants being the Miami and Okeetee Phases. These are the most commonly seen corn snakes. Miami Phase (originates in the Florida Wildtype) – Usually smaller corn snakes, with some specimens having a highly contrasting light silver to gray ground color with red or orange saddle markings surrounded in black. Selective breeding has lightened the ground color and darkened the saddle marks. The "Miami" name is now considered an appearance trait. Okeetee Phase – Characterized by deep red dorsal saddle marks, surrounded by very black borders, on a bright orange ground color. As with the Miami Phase, selective breeding has changed the term "Okeetee" to describe an appearance rather than a locality. Some on the market originate solely from selectively breeding corn snakes from the Okeetee Hunt Club. Candy-cane (selectively bred amelanistic) – Amelanistic corn snakes, bred toward the ideal of red or orange saddle marks on a white background. Some were produced using light Creamsicle (an amel hybrid from Great Plains rat snake × corn snake crosses) bred with Miami Phase corn snakes. Some Candy-canes will develop orange coloration around the neck region as they mature, and many labeled as Candy-canes later develop significant amounts of yellow or orange in the ground color. The contrast they have as hatchlings often fades with maturity. Reverse Okeetee (selectively bred amelanistic) – An amelanistic Okeetee Phase corn snake, which has the normal black rings around the saddle marks replaced with wide white rings. Ideal specimens are high-contrast snakes with a light orange to yellow background and dark orange/red saddles.
Note: An Albino Okeetee is not a locale-specific Okeetee; it is a selectively bred amelanistic. Fluorescent Orange (selectively bred amelanistic) – A designer amelanistic corn snake that, as an adult, develops white borders around bright red saddle marks on an orange background. Sunglow (selectively bred amelanistic) – Another designer amelanistic corn snake, lacking the usual white speckling that often appears in most albinos and selectively bred for an exceptionally bright ground color. The orange background surrounds dark orange saddle marks. Blood Red (selectively bred "diffused") – Carry a recessive trait (known as diffused) that eliminates the ventral checkered patterns. These originated from somewhat unicolored strains of corn snake from Jacksonville and Gainesville, Florida. Through selective breeding, an almost solid ground color has been produced. Hatchlings have a visible pattern that can fade as they mature into a solid orange-red to ash-red colored snake. The earlier Blood Red corn snakes tended to have large clutches of smaller-than-average eggs that produced hard-to-feed offspring, though this is no longer the case. Crimson (Hypomelanistic + Miami) – Very light, high-contrast corn snakes, with a light background and dark red/orange saddle marks. Anerythristic (anerythristic type A, sometimes called "Black Albino") – The complement to amelanism. The inherited recessive mutation of lacking erythrin (red, yellow, and orange) pigments produces a corn snake that is mostly black, gray, and brown. When mature, many anerythristic type A corn snakes develop yellow on their neck regions, which is a result of the carotenoids in their diet. Charcoal (sometimes known as anerythristic type B) – Can lack the yellow color pigment usually found in all corn snakes. They show a more muted contrast compared to Anerythristics. Caramel – A corn snake morph engineered by breeder Rich Zuchowski. The background is varying shades of yellow to yellow-brown.
Dorsal saddle marks vary from caramel yellow to brown and chocolate brown. Lavender – Have a light pink background with darker purple-gray markings. They also have ruby- to burgundy-colored eyes. Cinder – Originated with Upper Keys corn snakes and, as such, are often built slimmer than most other morphs. They may resemble anerythristics, but they have wavy borders around their saddles. Kastanie – Hatch out looking nearly anerythristic, but gain some color as they mature, eventually taking on a chestnut coloration. This gene was first discovered in Germany. Hypomelanistic (or Hypo for short) – Carry a recessive trait that reduces the dark pigments, causing the reds, whites, and oranges to become more vivid. Their eyes remain dark. They range in appearance from amelanistic corn snakes to normal corn snakes with greatly reduced melanin. Ultra – A hypomelanistic-like gene that is an allele of the amelanistic gene. Ultra corn snakes have light gray lines in place of black. The Ultra gene is derived from the gray rat snake (Pantherophis spiloides). All Ultra and Ultramel corn snakes have some amount of gray rat snake in them. Ultramel – An intermediate appearance between Ultra and amel, the result of being heterozygous for Ultra and amel at the albino locus. Dilute – Another melanin-reducing gene, in which the corn snake looks as if it is getting ready to shed. Sunkissed – A hypo-like gene which was first found in Kathy Love's corn snake colony. Lava – An extreme hypo-like gene which was discovered by Joe Pierce and named by Jeff Mohr. What would normally be black pigment in these corn snakes is, instead, a grayish-purple. Pattern morphs Motley – Has a clear belly and an "inverted" spotting pattern. May also appear as stripes or dashes. Striped – This morph also has a clear belly and a striping pattern. Unlike the Motley corn snake, the striped corn snake's colors will not connect, but may sometimes break up and take on a "cubed" appearance.
Cubes and spots on a striped corn snake are the same as the saddle color on a similar-looking normal corn snake, unlike Motley corn snakes. Striped is both allelic and recessive to Motley, so breeding a striped corn snake with a (homozygous) Motley corn snake results in all-Motley offspring; breeding those (heterozygous) Motley offspring together then yields ¾ Motley corn snakes and ¼ striped corn snakes. Diffused – Diffuses the patterning on the sides and eliminates the belly pattern. It is one component of the Blood Red morph. Sunkissed – While considered a hypo-like gene, sunkissed corn snakes also have other effects, such as rounded saddles and unusual head patterns. Aztec, Zigzag and Banded – Selectively bred multigenetic morphs that are not dependent on a single gene. Compound morphs There are tens of thousands of possible compound morphs. Some of the most popular ones are listed here. Snow (amelanistic + Anerythristic) – As hatchlings, this color variation is composed of white and pink blotches. These corn snakes are predominantly white and tend to have yellow neck and throat regions when mature (due to carotenoid retention in their diet). Light blotches and background colors have subtle shades of beige, ivory, pink, green or yellow. Blizzard (amelanistic + Charcoal) – Totally white with red eyes, with very little to no visible pattern. This morph is formed by combining the amelanistic (amel) gene and the charcoal gene. Ghost (Hypomelanistic + Anerythristic type A) – Exhibit varying shades of grays and browns on a lighter background. These often create pastel colors in lavenders, pinks, oranges and tans. Phantom – A combination of Charcoal and Hypomelanistic. Pewter (Charcoal + Diffused) – Silvery-lavender, with very little pattern as adults. Butter (amelanistic + Caramel) – A two-tone yellow corn snake. Amber (Hypomelanistic + Caramel) – Have amber-colored markings on a light brown background.
Plasma (Diffused + Lavender) – Hatch out in varying shades of grayish-purple. Opal (amelanistic + Lavender) – Look like Blizzard corn snakes once mature, with pink to purple highlights. Granite (Diffused + Anerythristic) – Tend to be varying shades of gray as adults, with males often having pink highlights. Fire (amelanistic + Diffused) – An albino version of the Diffused morph. These corn snakes are typically very bright red snakes, with very little pattern as adults. Scale mutations Scaleless corn snakes are homozygous for a recessive mutation of the gene responsible for scale development. While not completely scaleless dorsally, some have fewer scales than others. However, all of them possess ventral (belly) scales. They can also be produced in any of the aforementioned color morphs. The first Scaleless corn snakes originated from the cross of another North American rat snake species with a corn snake, and they are therefore, technically, hybrids. Scaleless mutants of many other snake species have also been documented in the wild. Hybrids Hybrids between corn snakes and other snakes are very common in captivity, but rarely occur in the wild. Hybrids within the genera Pantherophis, Lampropeltis, and Pituophis have so far proven to be completely fertile. Many different corn snake hybrids are bred in captivity. A few common examples include: Jungle corn snakes are hybrids between a corn snake and a California kingsnake (Lampropeltis californiae). These show extreme pattern variations, taking markings from both parents. Although they are hybrids of different genera, they are not sterile. Tri-color Jungle corn snakes are hybrids between a Querétaro kingsnake and a corn snake. The color is similar to that of an amelanistic corn snake. Creamsicle corn snakes are hybrids between an albino corn snake and a Great Plains rat snake (P. emoryi). The first-generation hybrids are known as "Rootbeers". Breeding these back to each other can produce Creamsicles.
Turbo corn snakes are hybrids between a corn snake and any Pituophis species. Corn snakes hybridized with milk snakes are called a variety of names, depending on the subspecies of milk snake used. For example, a Honduran milk snake × corn snake is called a Cornduran, a Sinaloan milk snake × corn snake is called a Sinacorn, and a Pueblan milk snake × corn snake is called a Pueblacorn. Brook Korn corn snakes are hybrids between a Brook's kingsnake and a corn snake. Like the jungle corn snake, these hybrids also show extreme pattern variations. When hybrids of corn snakes are found in the wild, they have usually hybridized with other Pantherophis species whose ranges overlap with the corn snake's. Diseases In this species, snake fungal disease (SFD) is caused by Ophidiomyces ophiodiicola.
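The 3:1 inheritance arithmetic described under the pattern morphs (Striped being allelic and recessive to Motley) can be checked by enumerating gametes in a few lines of code. This is a hypothetical sketch: the allele symbols "M" and "s" are my own shorthand, not established genetic nomenclature.

```python
from itertools import product

def phenotype(genotype: tuple) -> str:
    # Striped is recessive to Motley: any copy of "M" shows the Motley pattern.
    return "Motley" if "M" in genotype else "Striped"

# F1 cross: homozygous Motley ("MM") x homozygous striped ("ss").
# Every offspring inherits one allele from each parent -> all "Ms", all Motley.
f1 = [tuple(sorted(g)) for g in product("MM", "ss")]

# F2 cross: two heterozygous Motley ("Ms") snakes.
# The four equally likely allele combinations give the 3:1 ratio in the text.
f2 = [tuple(sorted(g)) for g in product("Ms", "Ms")]
counts = {"Motley": 0, "Striped": 0}
for g in f2:
    counts[phenotype(g)] += 1

print(counts)  # {'Motley': 3, 'Striped': 1}
```

The ¾ Motley and ¼ striped figures quoted in the text fall out directly from the four equally likely F2 combinations (MM, Ms, sM, ss).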
Biology and health sciences
Snakes
Animals
241342
https://en.wikipedia.org/wiki/24-hour%20clock
24-hour clock
The modern 24-hour clock is the convention of timekeeping in which the day runs from midnight to midnight and is divided into 24 hours. Time is indicated by the hours (and minutes) passed since midnight, from 00:00 to 23:59, with 24:00 available to indicate the end of the day. This system, as opposed to the 12-hour clock, is the most commonly used time notation in the world today, and is used by the international standard ISO 8601. A number of countries, particularly English-speaking ones, use the 12-hour clock or a mixture of the 24- and 12-hour time systems. In countries where the 12-hour clock is dominant, some professions prefer to use the 24-hour clock. For example, in the practice of medicine, the 24-hour clock is generally used in documentation of care, as it prevents any ambiguity as to when events occurred in a patient's medical history. Description A time of day is written in 24-hour notation in the form hh:mm (for example, 01:23) or hh:mm:ss (for example, 01:23:45), where hh (00 to 23) is the number of full hours that have passed since midnight, mm (00 to 59) is the number of full minutes that have passed since the last full hour, and ss (00 to 59) is the number of seconds since the last full minute. To indicate the exact end of the day, hh may take the value 24, with mm and ss taking the value 00. In the case of a leap second, the value of ss may extend to 60. A leading zero is added for numbers under 10, but it is optional for the hours. The leading zero is very commonly used in computer applications, and is always used when a specification requires it (for example, ISO 8601). Where subsecond resolution is required, the seconds can be a decimal fraction; that is, the fractional part follows a decimal dot or comma, as in 01:23:45.678. The most commonly used separator symbol between hours, minutes, and seconds is the colon, which is also the symbol used in ISO 8601.
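The constraints just described (hours 00 to 23, with 24 permitted only for the exact end of the day; seconds up to 60 for a leap second; an optional fractional part after a dot or comma; an optional leading zero on the hour) can be sketched as a small validator. This is an illustrative sketch, not part of any standard library; the function name is my own.

```python
import re

# hh:mm or hh:mm:ss, with an optional decimal fraction after the seconds.
# The leading zero on the hour is optional, as described above.
_TIME_RE = re.compile(r"^(\d{1,2}):(\d{2})(?::(\d{2})(?:[.,]\d+)?)?$")

def is_valid_24h(text: str) -> bool:
    """Validate a time string against the 24-hour-notation rules above."""
    m = _TIME_RE.match(text)
    if not m:
        return False
    hh, mm = int(m.group(1)), int(m.group(2))
    ss = int(m.group(3)) if m.group(3) is not None else 0
    if hh == 24:
        # 24:00 (or 24:00:00) is allowed only to mark the exact end of a day.
        return text in ("24:00", "24:00:00")
    # ss may reach 60 to accommodate a leap second.
    return hh <= 23 and mm <= 59 and ss <= 60
```

Under these rules, "01:23:45.678", "24:00", and "23:59:60" are accepted, while "24:01" and "25:00" are rejected. (A stricter validator would allow the leap-second value 60 only at 23:59:60 UTC; that refinement is omitted here.)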
In the past, some European countries used the dot on the line as a separator, but most national standards on time notation have since been changed to the international standard colon. In some contexts (including some computer protocols and military time), no separator is used and times are written as, for example, "2359". Midnight 00:00 and 24:00 In the 24-hour time notation, the day begins at midnight, 00:00 or 0:00, and the last minute of the day begins at 23:59. Where convenient, the notation 24:00 may also be used to refer to midnight at the end of a given date; that is, 24:00 of one day is the same time as 00:00 of the following day. The notation 24:00 mainly serves to refer to the exact end of a day in a time interval. A typical usage is giving opening hours ending at midnight (e.g. "00:00–24:00", "07:00–24:00"). Similarly, some bus and train timetables show 00:00 as a departure time and 24:00 as an arrival time. Legal contracts often run from the start date at 00:00 until the end date at 24:00. While the 24-hour notation unambiguously distinguishes between midnight at the start (00:00) and end (24:00) of any given date, there is no commonly accepted distinction among users of the 12-hour notation. Style guides and military communication regulations in some English-speaking countries discourage the use of 24:00 even in the 24-hour notation, and recommend reporting times near midnight as 23:59 or 00:01 instead. Sometimes the use of 00:00 is also avoided. At variance with this, the correspondence manual for the United States Navy and United States Marine Corps, as of 2010, specified 0001 to 2400; the manual was updated in June 2015 to use 0000 to 2359. Times after 24:00 Time-of-day notations beyond 24:00 (such as 24:01 or 25:00 instead of 00:01 or 01:00) are not commonly used and are not covered by the relevant standards.
However, they have been used occasionally in some special contexts in the United Kingdom, France, Spain, Canada, Japan, South Korea, Hong Kong, and China where business hours extend beyond midnight, such as broadcast television production and scheduling. The GTFS public transport schedule listings file format has the concept of service days and expects times beyond 24:00 for trips that run after midnight. Computer support In most countries, computers by default show the time in 24-hour notation. For example, Microsoft Windows and macOS activate the 12-hour notation by default only if a computer is in a handful of specific language and region settings. The 24-hour system is commonly used in text-based interfaces. POSIX programs such as ls default to displaying timestamps in 24-hour format. Military time In American English, the term military time is a synonym for the 24-hour clock. In the US, the time of day is customarily given almost exclusively using the 12-hour clock notation, which counts the hours of the day as 12, 1, ..., 11 with suffixes a.m. and p.m. distinguishing the two diurnal repetitions of this sequence. The 24-hour clock is commonly used there only in some specialist areas (military, aviation, navigation, tourism, meteorology, astronomy, computing, logistics, emergency services, hospitals), where the ambiguities of the 12-hour notation are deemed too inconvenient, cumbersome, or dangerous. Military usage, as agreed between the United States and allied English-speaking military forces, differs in some respects from other twenty-four-hour time systems: No hours/minutes separator is used when writing the time, and a letter designating the time zone is appended (for example "0340Z"). Leading zeros are always written out and are required to be spoken, so 5:43 a.m. is spoken "zero five forty-three" (casually) or "zero five four three" (military radio), as opposed to "five forty-three" or "five four three". 
Military time zones are lettered and given word designations from the NATO phonetic alphabet. For example, in US Eastern Standard Time (UTC−5), which is designated time zone R, 2:00 a.m. is written "0200R" and spoken "zero two hundred Romeo". Local time is designated as zone J or "Juliett". "1200J" ("twelve hundred Juliett") is noon local time. Greenwich Mean Time (GMT) or Coordinated Universal Time (UTC) is designated time zone Z, and thus called "Zulu time". (When used as a modern time zone, in practice, GMT and UTC coincide. For other purposes there may be a difference of about a second.) Hours are always "hundred", never "thousand"; 1000 is "ten hundred" not "one thousand"; 2000 is "twenty hundred" not "two thousand". History The first mechanical public clocks introduced in Italy were 24-hour clocks that counted the 24 hours of the day from one-half hour after sunset to the evening of the following day. The 24th hour was the last hour of daytime. From the 14th to the 17th century, two systems of time measurement competed in Europe: Italian (Bohemian, Old-Bohemian) hours (full-dial): a 24-hour system with the day starting after sunset; on the static dial, the 24th hour was on the right side. In Italy, it was prevalently modified into a 4×6-hour system, but some 24-hour dials lasted until the 19th century. The system spread especially to the Alpine countries, the Czech lands, and Poland. In Bohemia, this system was finally banned in 1621, after the defeat at White Mountain. The Prague Astronomical Clock struck according to the Old Bohemian Clock until its destruction in 1945. A variant counting from dawn is also documented in rare cases, e.g. on a 16th-century cabinet clock in the Vienna Art History Museum. German (Gallic) hours (half-dial): a 2×12-hour system starting at midnight and restarting at noon, typically shown on a 12-hour dial with 12 at the top.
The modern 24-hour system is a late-19th-century adaptation of the German midnight-starting system, which then prevailed throughout the world with the exception of some Anglophone countries. Striking clocks on the 24-hour system had to produce 300 strokes each day, which required a lot of rope and wore out the mechanism quickly, so some localities switched to ringing sequences of 1 to 12 twice (156 strokes), or even 1 to 6 repeated four times (84 strokes). Sandford Fleming, the engineer-in-chief of the Canadian Intercolonial Railway, was an early proponent of using the 24-hour clock as part of a programme to reform timekeeping, which also included establishing time zones and a standard prime meridian. At the International Meridian Conference in 1884, the conference adopted a resolution endorsing a universal day counted from zero up to twenty-four hours. The Canadian Pacific Railway was among the first organisations to adopt the 24-hour clock, at midsummer 1886. A report by a government committee in the United Kingdom noted Italy as the first country among those mentioned to adopt 24-hour time nationally, in 1893. Other European countries followed: France adopted it in 1912 (the French army in 1909), followed by Denmark (1916) and Greece (1917). By 1920, Spain, Portugal, Belgium, and Switzerland had switched, followed by Turkey (1925) and Germany (1927). By the early 1920s, many countries in Latin America had also adopted the 24-hour clock. Some of the railways in India had switched before the outbreak of World War I. During World War I, the British Royal Navy adopted the 24-hour clock in 1915, and the Allied armed forces followed soon after, with the British Army switching officially in 1918. The Canadian armed forces first started to use the 24-hour clock in late 1917. In 1920, the United States Navy was the first United States organisation to adopt the system; the United States Army, however, did not officially adopt the 24-hour clock until 1 July 1942.
The use of the 24-hour clock in the United Kingdom has grown steadily since the beginning of the 20th century, although attempts to make the system official have failed more than once. In 1934, the British Broadcasting Corporation (BBC) switched to the 24-hour clock for broadcast announcements and programme listings. The experiment was halted after five months following a lack of enthusiasm from the public, and the BBC continued using the 12-hour clock. In the same year, Pan American World Airways Corporation and Western Airlines in the United States both adopted the 24-hour clock. In modern times, the BBC uses a mixture of both the 12-hour and the 24-hour clock. British Rail, London Transport, and the London Underground switched to the 24-hour clock for timetables in 1964. A mixture of the 12- and 24-hour clocks similarly prevails in other English-speaking Commonwealth countries: in Canada, French speakers have adopted the 24-hour clock much more broadly than English speakers, and Australia and New Zealand also use both systems.
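As a worked illustration of the relationship between the two notations discussed throughout this article, the 12-hour sequence 12, 1, ..., 11 with a.m./p.m. maps onto 24-hour notation as follows. This is an illustrative sketch, not taken from any standard; the function name is my own.

```python
def to_24h(hour_12: int, minute: int, period: str) -> str:
    """Convert a 12-hour clock reading (hours 12, 1..11, a.m./p.m.)
    to 24-hour hh:mm notation."""
    if not (1 <= hour_12 <= 12 and 0 <= minute <= 59):
        raise ValueError("invalid 12-hour time")
    p = period.lower().replace(".", "").strip()
    if p not in ("am", "pm"):
        raise ValueError("period must be a.m. or p.m.")
    hour = hour_12 % 12      # maps 12 a.m. (midnight) to hour 0
    if p == "pm":
        hour += 12           # maps 12 p.m. (noon) to 12, 1 p.m. to 13, ...
    return f"{hour:02d}:{minute:02d}"
```

For example, `to_24h(12, 0, "a.m.")` yields "00:00" (midnight) and `to_24h(12, 0, "p.m.")` yields "12:00" (noon), which is precisely the kind of ambiguity around 12 a.m./12 p.m. that the 24-hour notation avoids.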
Technology
Clocks
null
241383
https://en.wikipedia.org/wiki/Colubridae
Colubridae
Colubridae (commonly known as colubrids, from the Latin for 'snake') is a family of snakes. With 249 genera, it is the largest snake family. The earliest fossil species of the family date back to the Late Eocene epoch, with earlier origins suspected. Colubrid snakes are found on every continent except Antarctica. Description Colubrids are a very diverse group of snakes. They exhibit many different body styles, body sizes, colors, and patterns. They also live in many different types of habitat, including aquatic, terrestrial, semi-arboreal, arboreal, desert, mountainous-forest, semi-fossorial, and brackish-water environments. A primarily shy and harmless group of snakes, the vast majority of colubrids either are not venomous or produce venom that is not medically significant to mammals. However, the bites of a few groups (such as Boiga sp.) can escalate quickly into emergency situations. Furthermore, within the Colubridae, the South African boomslang and twig snakes, as well as the Asian keelback snakes (Rhabdophis sp.), have long been notorious for inflicting the worst bites on humans, with the most confirmed fatalities. Some colubrids are described as opisthoglyphous (often simply called "rear-fanged"), meaning they possess shortened, grooved "fangs" located at the back of the upper jaw. It is thought that opisthoglyphy evolved many times throughout the natural history of squamates and is an evolutionary precursor to the larger, frontal fangs of vipers and elapids. These grooved fangs tend to be sharpest on the anterior and posterior edges. While feeding, colubrids move their jaws backward to create a cutting motion between the posterior edge and the prey's tissue. In order to inject venom, colubrids must chew on their prey.
Colubrids can also be proteroglyphous (fangs at the front of the upper jaw, followed by small solid teeth). Most Colubridae are oviparous (laying eggs that later hatch), with clutch size varying by the size and species of snake. However, certain species from the subfamilies Natricinae and Colubrinae are viviparous (giving birth to live young); the number of offspring per birth likewise varies with the size and species of snake. Characteristics of Colubridae Characteristics of Colubridae include limbless bodies, a left lung that is reduced or absent with or without a tracheal lung, well-developed oviducts, premaxillaries that lack teeth, maxillaries oriented longitudinally with teeth that are solid or grooved, a mandible without a coronoid bone, a dentary that has teeth, only a left carotid artery, intracostal arteries arising from the dorsal aorta every few trunk segments, no cranial infrared receptors occurring in pits or surface indentations, and optic foramina that typically traverse the frontal–parietal–parasphenoid sutures. Classification In the past, the Colubridae were not a natural group, as many were more closely related to other groups, such as elapids, than to each other. This family was historically used as a "wastebasket taxon" for snakes that did not fit elsewhere. Until recently, colubrids were essentially colubroids that were not elapids, viperids, or Atractaspis. However, recent research in molecular phylogenetics has stabilized the classification of historically "colubrid" snakes, and the family as currently defined is a monophyletic clade, although additional research will be necessary to sort out all the relationships within this group. As of May 2018, eight subfamilies are recognized. 
Current subfamilies
Sibynophiinae – three genera
Natricinae – 36 genera (sometimes given as family Natricidae)
Pseudoxenodontinae – two genera
Dipsadinae – over 100 genera (sometimes given as family Dipsadidae)
Grayiinae – one genus, Grayia
Calamariinae – seven genera
Ahaetuliinae – five genera
Colubrinae – 93 genera
Subfamily currently undetermined
Former subfamilies
These taxa have at one time or another been classified as part of the Colubridae, but are now either classified as parts of other families, or are no longer accepted because all the species within them have been moved to other (sub)families.
Subfamily Aparallactinae (now a subfamily of Lamprophiidae, sometimes combined with Atractaspidinae)
Subfamily Boiginae (now part of Colubrinae)
Subfamily Boodontinae (some of which are now treated as subfamily Grayiinae of the new Colubridae, others moved to family Lamprophiidae as part of subfamilies Lamprophiinae, Pseudaspidinae and Pseudoxyrhophiidae, which are now sometimes treated as families)
Subfamily Dispholidinae (now part of Colubrinae)
Subfamily Homalopsinae (now family Homalopsidae)
Subfamily Lamprophiinae (now a subfamily of Lamprophiidae)
Subfamily Lycodontinae (now part of Colubrinae)
Subfamily Lycophidinae (now part of Lamprophiidae)
Subfamily Pareatinae (now family Pareidae, sometimes incorrectly spelled Pareatidae)
Subfamily Philothamninae (now part of Colubrinae)
Subfamily Psammophiinae (now a subfamily of Lamprophiidae)
Subfamily Pseudoxyrhophiinae (now a subfamily of Lamprophiidae)
Subfamily Xenoderminae (now family Xenodermidae, sometimes incorrectly spelled Xenodermatidae)
Subfamily Xenodontinae (which many authors put in Dipsadinae/Dipsadidae)
Fossil record
The oldest colubrid fossils are indeterminate vertebrae from Thailand and specimens of the genus Nebraskophis from the U.S. state of Georgia, both from the Late Eocene. 
The presence of derived colubrids in North America so early on, despite their presumed Old World origins, suggests that they originated even earlier. The Pliocene (Blancan) fossil record in the Ringold Formation of Adams County, Washington has yielded fossils from a number of colubrids including Elaphe pliocenica, Elaphe vulpina, Lampropeltis getulus, Pituophis catenifer, a Thamnophis species, and the extinct genus Tauntonophis.
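The subfamily genus counts listed above can be cross-checked against the family total of 249 genera given in the lead. A minimal sketch (treating "over 100" for Dipsadinae as exactly 100, so the sum is a lower bound):

```python
# Genus counts per colubrid subfamily, as listed in the classification above.
subfamily_genera = {
    "Sibynophiinae": 3,
    "Natricinae": 36,
    "Pseudoxenodontinae": 2,
    "Dipsadinae": 100,   # "over 100" in the text; taken as 100 here
    "Grayiinae": 1,
    "Calamariinae": 7,
    "Ahaetuliinae": 5,
    "Colubrinae": 93,
}

total = sum(subfamily_genera.values())
print(total)  # 247 — a lower bound consistent with the ~249 genera cited for the family
```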
Biology and health sciences
Snakes
Animals
241420
https://en.wikipedia.org/wiki/Pyromania
Pyromania
Pyromania is an impulse control disorder in which individuals repeatedly fail to resist impulses to deliberately start fires, to relieve some tension or for instant gratification. The term pyromania comes from the Greek word πῦρ (pyr, 'fire'). Pyromania is distinct from arson, the deliberate setting of fires for personal, monetary or political gain. Pyromaniacs start fires to release anxiety and tension, or for arousal. Other impulse disorders include kleptomania and intermittent explosive disorder. There are specific symptoms that separate pyromaniacs from those who start fires for criminal purposes or due to emotional motivations not specifically related to fire. Someone with this disorder deliberately and purposely sets fires on more than one occasion, and before the act of lighting the fire the person usually experiences tension and an emotional buildup. When around fires, a person with pyromania gains intense interest or fascination and may also experience pleasure, gratification or relief. Another long-term contributor often linked with pyromania is the buildup of stress. When studying the lifestyle of someone with pyromania, a buildup of stress and emotion is often evident, and this is seen in teens' attitudes towards friends and family. At times it is difficult to distinguish between pyromania and experimentation in childhood because both involve pleasure from the fire. Classification ICD The World Health Organization's International Classification of Diseases (11th Revision) ICD-11, regarded as the global standard, was released in June 2018 and came into full effect from January 2022. It notes that pyromania has no relation to intellectual impairment, substance abuse, or other mental and behavioral disorders. 
ICD-11 was produced by professionals from 55 of the 90 countries involved and is one of the most widely used references worldwide among clinicians, the other being the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR from 2022, DSM-5 from 2013, or their predecessors). DSM The American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, First Edition, released in 1952, categorized pyromania as a subset of obsessive–compulsive disorder. In the Second Edition, the disorder was dropped. In the Third Edition, it returned under the category of impulse-control disorders. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR), released in 2022, states that the essential feature of pyromania is "the presence of multiple episodes of deliberate and purposeful fire setting." Pyromania moved from the DSM-IV chapter "Impulse-Control Disorders Not Otherwise Specified" to the chapter "Disruptive, impulse-control, and conduct disorders" in DSM-5. Causes Most studied cases of pyromania occur in children and teenagers. There is a range of causes, but an understanding of the different motives and actions of fire setters can provide a platform for prevention. Common causes of pyromania can be broken down into two main groups: individual and environmental. Individual factors include temperament, parental psychopathology, and possible neurochemical predispositions. Many studies have shown that patients with pyromania were in households without a father figure present. Pyromania can be common in those with substance use disorders, problem gambling, mood disorders, disruptive behaviour, anti-social disorders, and/or another impulse-control disorder. Environmental Environmental factors that may lead to pyromania include an event that the patient has experienced in the environment they live in. 
Environmental factors include neglect from parents and physical or emotional abuse in earlier life. Other causes include early experiences of watching adults or teenagers using fire inappropriately and lighting fires as a stress reliever. Treatment and prognosis The appropriate treatment for pyromania varies with the age of the patient and the seriousness of the condition. For children and adolescents, treatment usually consists of cognitive behavioral therapy sessions in which the patient's situation is assessed to find out what may have caused the impulsive behavior. Once the situation is diagnosed, repeated therapy sessions usually support continued recovery. Interventions must also address the underlying cause of the impulsive behavior. Some other treatments include parenting training, over-correction/satiation/negative practice with corrective consequences, behavior contracting/token reinforcement, special problem-solving skills training, relaxation training, covert sensitization, fire safety and prevention education, individual and family therapy, and medication. The prognosis for recovery in adolescents and children with pyromania depends on the environmental or individual factors in play, but is generally positive. Pyromania is generally harder to treat in adults, often due to lack of cooperation by the patient. Treatment usually consists of more medication to prevent stress or emotional outbursts, in addition to long-term psychotherapy. In adults the recovery rate is generally poor, and if an adult does recover, it usually takes a longer period of time. History In the 1800s, pyromania was associated with the concepts of moral insanity and moral treatment, but had not yet been categorized as an impulse control disorder. Pyromania is one of the four recognized types of arson, alongside burning for profit, to cover up an act of crime, and for revenge. Pyromania is the second most common type of arson. 
Common synonyms for pyromaniacs in colloquial English include firebug (US) and fire raiser (UK), but these also refer to arsonists. Pyromania is a rare disorder with an incidence of less than one percent in most studies; pyromaniacs also account for a very small proportion of psychiatric hospital admissions. Pyromania can occur in children as young as age three, though such cases are rare. Only a small percentage of children and teenagers arrested for arson are child pyromaniacs. A preponderance of the individuals are male; one source states that ninety percent of those diagnosed with pyromania are male. Based on a survey of 9,282 Americans using the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, impulse-control problems such as gambling, pyromania and compulsive shopping collectively affect 9% of the population. A 1979 study by the Law Enforcement Assistance Administration found that only 14% of fires were started by pyromaniacs and others with mental illness. A 1951 study by Lewis and Yarnell, one of the largest epidemiological studies conducted, found that 39% of those who had intentionally set fires had been diagnosed with pyromania.
Biology and health sciences
Mental disorders
Health
241440
https://en.wikipedia.org/wiki/Leaving%20group
Leaving group
In chemistry, a leaving group is defined by the IUPAC as an atom or group of atoms that detaches from the main or residual part of a substrate during a reaction or elementary step of a reaction. However, in common usage, the term is often limited to a fragment that departs with a pair of electrons in heterolytic bond cleavage. In this usage, a leaving group is a less formal but more commonly used synonym of the term nucleofuge. In this context, leaving groups are generally anions or neutral species, departing from neutral or cationic substrates, respectively, though in rare cases, cations leaving from a dicationic substrate are also known. A species' ability to serve as a leaving group depends on its ability to stabilize the additional electron density that results from bond heterolysis. Common anionic leaving groups are halides such as Cl− and Br−, and sulfonate esters such as tosylate (TsO−), while water (H2O), alcohols (ROH), and amines (R3N) are common neutral leaving groups. In the broader IUPAC definition, the term also includes groups that depart without an electron pair in a heterolytic cleavage (groups specifically known as electrofuges), like H+, which commonly departs in electrophilic aromatic substitution reactions. Similarly, species of high thermodynamic stability like nitrogen (N2) or carbon dioxide (CO2) commonly act as leaving groups in homolytic bond cleavage reactions of radical species. A relatively uncommon term that serves as the antonym of leaving group is entering group (i.e., a species that reacts with and forms a bond with a substrate or a substrate-derived intermediate). In this article, the discussion below mainly pertains to leaving groups that act as nucleofuges. Leaving group ability The physical manifestation of leaving group ability is the rate at which a reaction takes place. Good leaving groups give fast reactions. 
By transition state theory, this implies that reactions involving good leaving groups have low activation barriers leading to relatively stable transition states. It is helpful to consider the concept of leaving group ability in the case of the first step of an SN1/E1 reaction with an anionic leaving group (ionization), while keeping in mind that this concept can be generalized to all reactions that involve leaving groups. Because the leaving group bears a larger negative charge in the transition state (and products) than in the starting material, a good leaving group must be able to stabilize this negative charge, i.e. form stable anions. A good measure of anion stability is the pKa of an anion's conjugate acid (pKaH), and leaving group ability indeed generally follows this trend, with a lower pKaH correlating well with better leaving group ability. The correlation between pKaH and leaving group ability, however, is not perfect. Leaving group ability represents the difference in energy between starting materials and a transition state (ΔG‡) and differences in leaving group ability are reflected in changes in this quantity (ΔΔG‡). The pKaH, however, represents the difference in energy between starting materials and products (ΔG°) with differences in acidity reflected in changes in this quantity (ΔΔG°). The ability to correlate these energy differences is justified by the Hammond postulate and the Bell–Evans–Polanyi principle. Also, the starting materials in these cases are different. In the case of the acid dissociation constant, the "leaving group" is bound to a proton in the starting material, while in the case of leaving group ability, the leaving group is bound to (usually) carbon. It is with these important caveats in mind that one must consider pKaH to be reflective of leaving group ability. 
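The mapping from rate ratios to ΔΔG‡ described above follows directly from transition state theory. A minimal sketch (the rate ratio used is illustrative, not taken from a specific study):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K (room temperature)

def ddg_kj_per_mol(k_rel: float) -> float:
    """Difference in activation free energy, in kJ/mol, implied by a
    rate ratio k_rel between two leaving groups at temperature T:
    ddG = RT * ln(k_rel)."""
    return R * T * math.log(k_rel) / 1000.0

# A leaving group that reacts 100x faster than another corresponds to a
# transition state stabilized by roughly 11.4 kJ/mol at room temperature.
print(round(ddg_kj_per_mol(100), 1))  # 11.4
```

This is why even modest energy differences between transition states translate into large differences in observed rate: the dependence is exponential.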
Nevertheless, one can generally examine acid dissociation constants to qualitatively predict or rationalize rate or reactivity trends relating to variation of the leaving group. Consistent with this picture, strong bases such as hydroxide and alkoxides tend to make poor leaving groups, due to their inability to stabilize a negative charge. What constitutes a reasonable leaving group is dependent on context. For SN2 reactions, typical synthetically useful leaving groups include halides and sulfonate esters such as tosylate. Substrates containing phosphate and carboxylate leaving groups are more likely to react by competitive addition-elimination, while sulfonium and ammonium salts generally form ylides or undergo E2 elimination when possible. Phenoxides (ArO−) constitute the lower limit for what is feasible as SN2 leaving groups: very strong nucleophiles such as thiolates have been used to demethylate anisole derivatives through SN2 displacement at the methyl group. Hydroxide, alkoxides, amides, hydride, and alkyl anions do not serve as leaving groups in SN2 reactions. On the other hand, when anionic or dianionic tetrahedral intermediates collapse, the high electron density of the neighboring heteroatom facilitates the expulsion of a leaving group. Thus, in the case of ester and amide hydrolysis under basic conditions, alkoxides and amides are commonly proposed as leaving groups. For the same reason, E1cb reactions involving hydroxide as a leaving group are not uncommon (e.g., in the aldol condensation). It is exceedingly rare for groups such as H− (hydrides), R− (alkyl anions, R = alkyl or H), or Ar− (aryl anions, Ar = aryl) to depart with a pair of electrons because of the high energy of these species. The Chichibabin reaction provides an example of hydride as a leaving group, while the Wolff-Kishner reaction and Haller-Bauer reaction feature unstabilized carbanion leaving groups. 
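The qualitative pKaH screen described above can be sketched as a simple ranking. The pKa values below are rough, commonly quoted literature figures supplied for illustration, not data from the text:

```python
# Approximate pKa values of the conjugate acids (pKaH) of candidate
# leaving groups. Lower pKaH -> more stable anion -> generally a better
# leaving group. Values are rough literature figures for illustration.
pkah = {
    "iodide (I-)":      -10,
    "bromide (Br-)":     -9,
    "chloride (Cl-)":    -7,
    "tosylate (TsO-)":   -2.8,
    "acetate (AcO-)":     4.8,
    "phenoxide (PhO-)":  10.0,
    "hydroxide (HO-)":   15.7,
    "amide (H2N-)":      38,
}

# Rank from best to worst expected leaving group ability:
ranked = sorted(pkah, key=pkah.get)
print(ranked[0], "...", ranked[-1])
```

The ranking reproduces the trend in the text: halides and sulfonates at the top, hydroxide and amide at the bottom, with the caveats about ΔΔG‡ versus ΔG° noted above.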
Contextual differences in leaving group ability The trends described above are qualitative; the ability of a group to leave is contextual. For example, in SNAr reactions, the rate is generally increased when the leaving group is fluoride relative to the other halogens. This effect is due to the fact that the highest-energy transition state for this two-step addition-elimination process occurs in the first step, where fluoride's greater electron-withdrawing capability relative to the other halides stabilizes the developing negative charge on the aromatic ring. The departure of the leaving group takes place quickly from this high-energy Meisenheimer complex, and since the departure is not involved in the rate-limiting step, it does not affect the overall rate of the reaction. This effect is general to conjugate base eliminations. Even when the departure of the leaving group is involved in the rate-limiting step of a reaction, there can still exist contextual differences that change the order of leaving group ability. In Friedel-Crafts alkylations, the normal halogen leaving group order is reversed, so that the rate of the reaction follows RF > RCl > RBr > RI. This effect is due to fluorine's greater ability to complex the Lewis acid catalyst, and the actual group that leaves is an "ate" complex between the Lewis acid and the departing leaving group. This situation is broadly defined as leaving group activation. Contextual differences in leaving group ability can also exist in the purest form, that is, when the actual group that leaves is not affected by the reaction conditions (by protonation or Lewis acid complexation) and the departure of the leaving group occurs in the rate-determining step. In the situation where other variables are held constant (nature of the alkyl electrophile, solvent, etc.), a change in nucleophile can lead to a change in the order of reactivity for leaving groups. 
In one reported case, tosylate is the best leaving group when ethoxide is the nucleophile, but iodide and even bromide become better leaving groups when the nucleophile is a thiolate. Activation It is common in E1 and SN1 reactions for a poor leaving group to be transformed into a good one by protonation or complexation with a Lewis acid. Thus, it is by protonation before departure that a molecule can formally lose such poor leaving groups as hydroxide. The same principle is at work in the Friedel-Crafts reaction. Here, a strong Lewis acid is required to generate either a carbocation from an alkyl halide in the Friedel-Crafts alkylation reaction or an acylium ion from an acyl halide. In the vast majority of cases, reactions that involve leaving group activation generate a cation in a separate step, before either nucleophilic attack or elimination. For example, SN1 and E1 reactions may involve an activation step, whereas SN2 and E2 reactions generally do not. In conjugate base eliminations The requirement for a good leaving group is relaxed in conjugate base elimination reactions. These reactions include loss of a leaving group in the β position of an enolate as well as the regeneration of a carbonyl group from the tetrahedral intermediate in nucleophilic acyl substitution. Under forcing conditions, even amides can be made to undergo basic hydrolysis, a process that involves the expulsion of an extremely poor leaving group, R2N−. Even more dramatically, decarboxylation of benzoate anions can occur by heating with copper or Cu2O, involving the loss of an aryl anion. This reaction is facilitated by the fact that the leaving group is most likely an arylcopper compound rather than the much more basic alkali metal salt. This dramatic departure from normal leaving group requirements occurs mostly in the realm of C=O double bond formation, where formation of the very strong C=O double bond can drive otherwise unfavorable reactions forward. 
The requirement for a good leaving group is also relaxed in the case of C=C bond formation via E1cB mechanisms, but because of the relative weakness of the C=C double bond, the reaction still exhibits some leaving group sensitivity. Notably, changing the leaving group's identity (and willingness to leave) can change the nature of the mechanism in elimination reactions. With poor leaving groups, the E1cB mechanism is favored; as the leaving group's ability increases, the reaction shifts from rate-determining loss of the leaving group from a carbanionic intermediate B (via a transition state TS BC‡), through rate-determining deprotonation (via a transition state TS AB‡), to a concerted E2 elimination. In the latter situation, the leaving group X has become good enough that the former transition state connecting intermediates B and C has become lower in energy than B, which is no longer a stationary point on the potential energy surface for the reaction. Because only one transition state connects starting material A and product C, the reaction is now concerted (albeit often very asynchronous) due to the increase in leaving group ability of X. "Super" and "hyper" leaving groups The prototypical super leaving group is triflate, and the term has come to mean any leaving group of comparable ability. Compounds where loss of a super leaving group can generate a stable carbocation are usually highly reactive and unstable. Thus, the most commonly encountered organic triflates are methyl triflate and alkenyl or aryl triflates, none of which can form stable carbocations on ionization, rendering them relatively stable. It has been noted that steroidal alkyl nonaflates (another super leaving group) generated from alcohols and perfluorobutanesulfonyl fluoride were not isolable as such but immediately formed the products of either elimination or substitution by fluoride generated by the reagent. 
Mixed acyl-trifluoromethanesulfonyl anhydrides smoothly undergo Friedel-Crafts acylation without a catalyst, unlike the corresponding acyl halides, which require a strong Lewis acid. Methyl triflate, however, does not participate in Friedel-Crafts alkylation reactions with electron-neutral aromatic rings. Beyond super leaving groups in reactivity lie the "hyper" leaving groups. Prominent among these are λ3-iodanes, which include diaryl iodonium salts, and other halonium ions. In one study, a quantitative comparison of these and other leaving groups was conducted. Relative to chloride (krel = 1), reactivities increased in the order bromide (krel = 14), iodide (krel = 91), tosylate (krel = 3.7 × 10⁴), triflate (krel = 1.4 × 10⁸), phenyliodonium tetrafluoroborate (krel = 1.2 × 10¹⁴). Along with the criterion that a hyper leaving group be a stronger leaving group than triflate is the necessity that the leaving group undergo reductive elimination. In the case of halonium ions, this involves reduction from a trivalent halonium to a monovalent halide coupled with the release of an anionic fragment. Part of the exceptional reactivity of compounds of hyper leaving groups has been ascribed to the entropic favorability of having one molecule split into three. Dialkyl halonium ions have also been isolated and characterized for simple alkyl groups. These compounds, despite their extreme reactivity towards nucleophiles, can be obtained pure in the solid state with very weakly nucleophilic counterions such as hexafluoroantimonate and carborane anions. The strongly electrophilic nature of these compounds, engendered by their attachment to extremely labile RX (R = alkyl, X = Cl, Br, I) leaving groups, is illustrated by their propensity to alkylate very weak nucleophiles. Heating neat samples under reduced pressure resulted in methylation of the very poorly nucleophilic carborane anion with concomitant expulsion of the leaving group. Dialkyl halonium hexafluoroantimonate salts alkylate excess alkyl halides to give exchanged products. 
Their strongly electrophilic nature, along with the instability of primary carbocations generated from ionization of their alkyl groups, points to their possible involvement in Friedel-Crafts alkylation chemistry.
Physical sciences
Organic reactions
Chemistry
241565
https://en.wikipedia.org/wiki/Complete%20blood%20count
Complete blood count
A complete blood count (CBC), also known as a full blood count (FBC), is a set of medical laboratory tests that provide information about the cells in a person's blood. The CBC indicates the counts of white blood cells, red blood cells and platelets, the concentration of hemoglobin, and the hematocrit (the volume percentage of red blood cells). The red blood cell indices, which indicate the average size and hemoglobin content of red blood cells, are also reported, and a white blood cell differential, which counts the different types of white blood cells, may be included. The CBC is often carried out as part of a medical assessment and can be used to monitor health or diagnose diseases. The results are interpreted by comparing them to reference ranges, which vary with sex and age. Conditions like anemia and thrombocytopenia are defined by abnormal complete blood count results. The red blood cell indices can provide information about the cause of a person's anemia such as iron deficiency and vitamin B12 deficiency, and the results of the white blood cell differential can help to diagnose viral, bacterial and parasitic infections and blood disorders like leukemia. Not all results falling outside of the reference range require medical intervention. The CBC is usually performed by an automated hematology analyzer, which counts cells and collects information on their size and structure. The concentration of hemoglobin is measured, and the red blood cell indices are calculated from measurements of red blood cells and hemoglobin. Manual tests can be used to independently confirm abnormal results. Approximately 10–25% of samples require a manual blood smear review, in which the blood is stained and viewed under a microscope to verify that the analyzer results are consistent with the appearance of the cells and to look for abnormalities. 
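The text notes that the red blood cell indices are calculated from the measured red blood cells and hemoglobin; the conventional formulas can be sketched as follows (the example values are typical adult figures, chosen for illustration):

```python
def red_cell_indices(hgb_g_dl: float, hct_percent: float, rbc_millions_per_ul: float):
    """Derive the standard red blood cell indices from the measured
    hemoglobin (g/dL), hematocrit (%), and red cell count (10^6/uL)."""
    mcv = hct_percent * 10 / rbc_millions_per_ul   # mean cell volume, fL
    mch = hgb_g_dl * 10 / rbc_millions_per_ul      # mean cell hemoglobin, pg
    mchc = hgb_g_dl * 100 / hct_percent            # mean cell Hb concentration, g/dL
    return mcv, mch, mchc

# Example: Hgb 15 g/dL, Hct 45%, RBC 5.0 x 10^6/uL
mcv, mch, mchc = red_cell_indices(15.0, 45.0, 5.0)
print(mcv, mch, round(mchc, 1))  # 90.0 30.0 33.3
```

A low MCV with a low MCH, for instance, is the numeric signature of the microcytic, hypochromic picture seen in iron deficiency mentioned above.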
The hematocrit can be determined manually by centrifuging the sample and measuring the proportion of red blood cells, and in laboratories without access to automated instruments, blood cells are counted under the microscope using a hemocytometer. In 1852, Karl Vierordt published the first procedure for performing a blood count, which involved spreading a known volume of blood on a microscope slide and counting every cell. The invention of the hemocytometer in 1874 by Louis-Charles Malassez simplified the microscopic analysis of blood cells, and in the late 19th century, Paul Ehrlich and Dmitri Leonidovich Romanowsky developed techniques for staining white and red blood cells that are still used to examine blood smears. Automated methods for measuring hemoglobin were developed in the 1920s, and Maxwell Wintrobe introduced the Wintrobe hematocrit method in 1929, which in turn allowed him to define the red blood cell indices. A landmark in the automation of blood cell counts was the Coulter principle, which was patented by Wallace H. Coulter in 1953. The Coulter principle uses electrical impedance measurements to count blood cells and determine their sizes; it is a technology that remains in use in many automated analyzers. Further research in the 1970s involved the use of optical measurements to count and identify cells, which enabled the automation of the white blood cell differential. Purpose Blood is composed of a fluid portion, called plasma, and a cellular portion that contains red blood cells, white blood cells and platelets. The complete blood count evaluates the three cellular components of blood. Some medical conditions, such as anemia or thrombocytopenia, are defined by marked increases or decreases in blood cell counts. Changes in many organ systems may affect the blood, so CBC results are useful for investigating a wide range of conditions. 
Because of the amount of information it provides, the complete blood count is one of the most commonly performed medical laboratory tests. The CBC is often used to screen for diseases as part of a medical assessment. It is also called for when a healthcare provider suspects a person has a disease that affects blood cells, such as an infection, a bleeding disorder, or some cancers. People who have been diagnosed with disorders that may cause abnormal CBC results or who are receiving treatments that can affect blood cell counts may have a regular CBC performed to monitor their health, and the test is often performed each day on people who are hospitalized. The results may indicate a need for a blood or platelet transfusion. The complete blood count has specific applications in many medical specialties. It is often performed before a person undergoes surgery to detect anemia, ensure that platelet levels are sufficient, and screen for infection, as well as after surgery, so that blood loss can be monitored. In emergency medicine, the CBC is used to investigate numerous symptoms, such as fever, abdominal pain, and shortness of breath, and to assess bleeding and trauma. Blood counts are closely monitored in people undergoing chemotherapy or radiation therapy for cancer, because these treatments suppress the production of blood cells in the bone marrow and can produce severely low levels of white blood cells, platelets and hemoglobin. Regular CBCs are necessary for people taking some psychiatric drugs, such as clozapine and carbamazepine, which in rare cases can cause a life-threatening drop in the number of white blood cells (agranulocytosis). 
Because anemia during pregnancy can result in poorer outcomes for the mother and her baby, the complete blood count is a routine part of prenatal care; and in newborn babies, a CBC may be needed to investigate jaundice or to count the number of immature cells in the white blood cell differential, which can be an indicator of sepsis. The complete blood count is an essential tool of hematology, which is the study of the cause, prognosis, treatment, and prevention of diseases related to blood. The results of the CBC and smear examination reflect the functioning of the hematopoietic system—the organs and tissues involved in the production and development of blood cells, particularly the bone marrow. For example, a low count of all three cell types (pancytopenia) can indicate that blood cell production is being affected by a marrow disorder, and a bone marrow examination can further investigate the cause. Abnormal cells on the blood smear might indicate acute leukemia or lymphoma, while an abnormally high count of neutrophils or lymphocytes, in combination with indicative symptoms and blood smear findings, may raise suspicion of a myeloproliferative disorder or lymphoproliferative disorder. Examination of the CBC results and blood smear can help to distinguish between causes of anemia, such as nutritional deficiencies, bone marrow disorders, acquired hemolytic anemias and inherited conditions like sickle cell anemia and thalassemia. The reference ranges for the complete blood count represent the range of results found in 95% of apparently healthy people. By definition, 5% of results will always fall outside this range, so some abnormal results may reflect natural variation rather than signifying a medical issue. This is particularly likely if such results are only slightly outside the reference range, if they are consistent with previous results, or if there are no other related abnormalities shown by the CBC. 
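The 5%-outside-the-range arithmetic above compounds quickly across a panel of tests. A minimal sketch, assuming statistically independent results (real CBC parameters are correlated, so this overstates the effect somewhat):

```python
# Probability that a panel of n independent tests on a healthy person
# yields at least one result outside a 95% reference range.
def p_any_abnormal(n: int, coverage: float = 0.95) -> float:
    return 1 - coverage ** n

for n in (1, 5, 10, 20):
    print(n, round(p_any_abnormal(n), 2))
```

With 20 independent parameters, roughly two out of three healthy people would show at least one "abnormal" value, which is why isolated borderline results are interpreted with caution.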
When the test is performed on a relatively healthy population, the number of clinically insignificant abnormalities may exceed the number of results that represent disease. For this reason, professional organizations in the United States, United Kingdom and Canada recommend against pre-operative CBC testing for low-risk surgeries in individuals without relevant medical conditions. Repeated blood draws for hematology testing in hospitalized patients can contribute to hospital-acquired anemia and may result in unnecessary transfusions.

Procedure

The sample is collected by drawing blood into a tube containing an anticoagulant—typically EDTA—to stop its natural clotting. The blood is usually taken from a vein, but when this is difficult it may be collected from capillaries by a fingerstick, or by a heelprick in babies. Testing is typically performed on an automated analyzer, but manual techniques such as a blood smear examination or manual hematocrit test can be used to investigate abnormal results. Cell counts and hemoglobin measurements are performed manually in laboratories lacking access to automated instruments.

Automated

On board the analyzer, the sample is agitated to evenly distribute the cells, then diluted and partitioned into at least two channels, one of which is used to count red blood cells and platelets, the other to count white blood cells and determine the hemoglobin concentration. Some instruments measure hemoglobin in a separate channel, and additional channels may be used for differential white blood cell counts, reticulocyte counts and specialized measurements of platelets. The cells are suspended in a fluid stream and their properties are measured as they flow past sensors in a technique known as flow cytometry.
Hydrodynamic focusing may be used to isolate individual cells so that more accurate results can be obtained: the diluted sample is injected into a stream of low-pressure fluid, which causes the cells in the sample to line up in single file through laminar flow. To measure the hemoglobin concentration, a reagent chemical is added to the sample to destroy (lyse) the red cells in a channel separate from that used for red blood cell counts. On analyzers that perform white blood cell counts in the same channel as hemoglobin measurement, this permits white blood cells to be counted more easily. Hematology analyzers measure hemoglobin using spectrophotometry and are based on the linear relationship between the absorbance of light and the amount of hemoglobin present. Chemicals are used to convert different forms of hemoglobin, such as oxyhemoglobin and carboxyhemoglobin, to one stable form, usually cyanmethemoglobin, and to create a permanent colour change. The absorbance of the resulting colour, when measured at a specific wavelength—usually 540 nanometres—corresponds with the concentration of hemoglobin. Sensors count and identify the cells in the sample using two main principles: electrical impedance and light scattering. Impedance-based cell counting operates on the Coulter principle: cells are suspended in a fluid carrying an electric current, and as they pass through a small opening (an aperture), they cause decreases in current because of their poor electrical conductivity. The amplitude of the voltage pulse generated as a cell crosses the aperture correlates with the amount of fluid displaced by the cell, and thus the cell's volume, while the total number of pulses correlates with the number of cells in the sample. The distribution of cell volumes is plotted on a histogram, and by setting volume thresholds based on the typical sizes of each type of cell, the different cell populations can be identified and counted. 
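As a rough illustration of the Coulter principle described above, each voltage pulse's amplitude is proportional to the volume of the cell that produced it, so cell populations can be separated by applying volume thresholds to the pulse-height data. The function and threshold values below are illustrative assumptions, not the parameters of any particular analyzer.

```python
# Illustrative sketch of impedance-based counting: each pulse amplitude is
# treated as a cell volume in femtolitres, and populations are separated
# with simple volume thresholds. The cut-offs here are rough illustrative
# values, not those of a real instrument.
def classify_pulses(volumes_fl, platelet_max=20, rbc_min=36):
    """Partition pulse volumes into platelet and red blood cell counts."""
    platelets = sum(1 for v in volumes_fl if v < platelet_max)
    rbcs = sum(1 for v in volumes_fl if v >= rbc_min)
    return platelets, rbcs

# Three platelet-sized pulses and two red-cell-sized (~90 fL) pulses:
print(classify_pulses([8, 10, 12, 88, 92]))  # (3, 2)
```

In a real analyzer the thresholds are set from the histogram itself, and overlapping populations (large platelets, microcytic red cells) are the reason additional techniques are needed, as the article notes below.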
In light scattering techniques, light from a laser or a tungsten-halogen lamp is directed at the stream of cells to collect information about their size and structure. Cells scatter light at different angles as they pass through the beam, which is detected using photometers. Forward scatter, which refers to the amount of light scattered along the beam's axis, is mainly caused by diffraction of light and correlates with cellular size, while side scatter (light scattered at a 90-degree angle) is caused by reflection and refraction and provides information about cellular complexity. Radiofrequency-based methods can be used in combination with impedance. These techniques work on the same principle of measuring the interruption in current as cells pass through an aperture, but since the high-frequency RF current penetrates into the cells, the amplitude of the resulting pulse relates to factors like the relative size of the nucleus, the nucleus's structure, and the amount of granules in the cytoplasm. Small red cells and cellular debris, which are similar in size to platelets, may interfere with the platelet count, and large platelets may not be counted accurately, so some analyzers use additional techniques to measure platelets, such as fluorescent staining, multi-angle light scatter and monoclonal antibody tagging. Most analyzers directly measure the average size of red blood cells, which is called the mean cell volume (MCV), and calculate the hematocrit by multiplying the red blood cell count by the MCV. Some measure the hematocrit by comparing the total volume of red blood cells to the volume of blood sampled, and derive the MCV from the hematocrit and red blood cell count. The hemoglobin concentration, the red blood cell count and the hematocrit are used to calculate the average amount of hemoglobin within each red blood cell, the mean corpuscular hemoglobin (MCH); and its concentration, the mean corpuscular hemoglobin concentration (MCHC). 
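The derived indices described above follow from simple arithmetic. The sketch below uses conventional units (RBC count in 10¹²/L, MCV in fL, hemoglobin in g/dL); the function name and structure are our own, not an analyzer's implementation.

```python
# Sketch of the derived red cell indices: hematocrit from RBC count and
# MCV, then MCH and MCHC from the hemoglobin concentration.
def red_cell_indices(rbc, mcv, hgb):
    """rbc in 10^12/L, mcv in fL, hgb in g/dL."""
    hct = rbc * mcv / 10    # hematocrit as a percentage
    mch = hgb * 10 / rbc    # mean corpuscular hemoglobin, picograms
    mchc = hgb * 100 / hct  # mean corpuscular hemoglobin concentration, g/dL
    return hct, mch, mchc

hct, mch, mchc = red_cell_indices(rbc=5.0, mcv=90.0, hgb=15.0)
print(round(hct, 1), round(mch, 1), round(mchc, 1))  # 45.0 30.0 33.3
```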
Another calculation, the red blood cell distribution width (RDW), is derived from the standard deviation of the mean cell volume and reflects variation in cellular size. After being treated with reagents, white blood cells form three distinct peaks when their volumes are plotted on a histogram. These peaks correspond roughly to populations of granulocytes, lymphocytes, and other mononuclear cells, allowing a three-part differential to be performed based on cell volume alone. More advanced analyzers use additional techniques to provide a five- to seven-part differential, such as light scattering or radiofrequency analysis, or using dyes to stain specific chemicals inside cells—for example, nucleic acids, which are found in higher concentrations in immature cells, or myeloperoxidase, an enzyme found in cells of the myeloid lineage. Basophils may be counted in a separate channel where a reagent destroys other white cells and leaves basophils intact. The data collected from these measurements is analyzed and plotted on a scattergram, where it forms clusters that correlate with each white blood cell type. Another approach to automating the differential count is the use of digital microscopy software, which uses artificial intelligence to classify white blood cells from photomicrographs of the blood smear. The cell images are displayed to a human operator, who can manually re-classify the cells if necessary. Most analyzers take less than a minute to run all the tests in the complete blood count. Because analyzers sample and count many individual cells, the results are very precise. However, some abnormal cells may not be identified correctly, requiring manual review of the instrument's results and identification by other means of abnormal cells the instrument could not categorize.

Point-of-care testing

Point-of-care testing refers to tests conducted outside of the laboratory setting, such as at a person's bedside or in a clinic.
This method of testing is faster and uses less blood than conventional methods, and does not require specially trained personnel, so it is useful in emergency situations and in areas with limited access to resources. Commonly used devices for point-of-care hematology testing include the HemoCue, a portable analyzer that uses spectrophotometry to measure the hemoglobin concentration of the sample, and the i-STAT, which derives a hemoglobin reading by estimating the concentration of red blood cells from the conductivity of the blood. Hemoglobin and hematocrit can be measured on point-of-care devices designed for blood gas testing, but these measurements sometimes correlate poorly with those obtained through standard methods. There are simplified versions of hematology analyzers designed for use in clinics that can provide a complete blood count and differential.

Manual

The tests can be performed manually when automated equipment is not available or when the analyzer results indicate that further investigation is needed. Automated results are flagged for manual blood smear review in 10–25% of cases, which may be due to abnormal cell populations that the analyzer cannot properly count, internal flags generated by the analyzer that suggest the results could be inaccurate, or numerical results that fall outside set thresholds. To investigate these issues, blood is spread on a microscope slide, stained with a Romanowsky stain, and examined under a microscope. The appearance of the red and white blood cells and platelets is assessed, and qualitative abnormalities are reported if present. Changes in the appearance of red blood cells can have considerable diagnostic significance—for example, the presence of sickle cells is indicative of sickle cell disease, and a high number of fragmented red blood cells (schistocytes) requires urgent investigation as it can suggest a microangiopathic hemolytic anemia.
In some inflammatory conditions and in paraprotein disorders like multiple myeloma, high levels of protein in the blood may cause red blood cells to appear stacked together on the smear, which is termed rouleaux. Some parasitic diseases, such as malaria and babesiosis, can be detected by finding the causative organisms on the blood smear, and the platelet count can be estimated from the blood smear, which is useful if the automated platelet count is inaccurate. To perform a manual white blood cell differential, the microscopist counts 100 cells on the blood smear and classifies them based on their appearance; sometimes 200 cells are counted. This gives the percentage of each type of white blood cell, and by multiplying these percentages by the total number of white blood cells, the absolute number of each type of white cell can be obtained. Manual counting is subject to sampling error because so few cells are counted compared with automated analysis, but it can identify abnormal cells that analyzers cannot, such as the blast cells seen in acute leukemia. Clinically significant features like toxic granulation and vacuolation can also be ascertained from microscopic examination of white blood cells. The hematocrit can be measured manually by filling a capillary tube with blood, centrifuging it, and measuring the percentage of the blood that consists of red blood cells. This is useful in some conditions that can cause automated hematocrit results to be incorrect, such as polycythemia (a highly elevated red blood cell count) or severe leukocytosis (a highly elevated white blood cell count, which interferes with red blood cell measurements by causing white blood cells to be counted as red cells). Red and white blood cells and platelets can be counted using a hemocytometer, a microscope slide containing a chamber that holds a specified volume of diluted blood. The hemocytometer's chamber is etched with a calibrated grid to aid in cell counting.
The cells seen in the grid are counted and divided by the volume of blood examined, which is determined from the number of squares counted on the grid, to obtain the concentration of cells in the sample. Manual cell counts are labour-intensive and inaccurate compared to automated methods, so they are rarely used except in laboratories that do not have access to automated analyzers. To count white blood cells, the sample is diluted using a fluid containing a compound that lyses red blood cells, such as ammonium oxalate, acetic acid, or hydrochloric acid. Sometimes a stain is added to the diluent that highlights the nuclei of white blood cells, making them easier to identify. Manual platelet counts are performed in a similar manner, although some methods leave the red blood cells intact. Using a phase-contrast microscope, rather than a light microscope, can make platelets easier to identify. The manual red blood cell count is rarely performed, as it is inaccurate and other methods such as hemoglobinometry and the manual hematocrit are available for assessing red blood cells; but if it is necessary to do so, red blood cells can be counted in blood that has been diluted with saline. Hemoglobin can be measured manually using a spectrophotometer or colorimeter. To measure hemoglobin manually, the sample is diluted using reagents that destroy red blood cells to release the hemoglobin. Other chemicals are used to convert different types of hemoglobin to one form, allowing it to be easily measured. The solution is then placed in a measuring cuvette and the absorbance is measured at a specific wavelength, which depends on the type of reagent used. A reference standard containing a known amount of hemoglobin is used to determine the relationship between the absorbance and the hemoglobin concentration, allowing the hemoglobin level of the sample to be measured. In rural and economically disadvantaged areas, available testing is limited by access to equipment and personnel. 
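The arithmetic behind the manual methods described above (converting differential percentages to absolute counts, and deriving a concentration from a hemocytometer count) can be sketched as follows. The function names and the example figures are illustrative; the hemocytometer example assumes large counting squares of 0.1 μL each, a common chamber geometry.

```python
# Sketches of the arithmetic behind two of the manual methods above.

def absolute_differential(percentages, total_wbc):
    """Convert differential percentages to absolute counts.
    total_wbc in 10^9/L; results are in the same units."""
    return {cell: pct / 100 * total_wbc for cell, pct in percentages.items()}

def hemocytometer_count(cells_counted, squares, square_volume_ul, dilution):
    """Cell concentration (cells/uL): cells counted, divided by the
    volume of diluted blood examined, corrected for the dilution."""
    return cells_counted / (squares * square_volume_ul) * dilution

# 60% neutrophils of a total WBC of 8.0 x 10^9/L:
print(absolute_differential({"neutrophils": 60}, 8.0))  # {'neutrophils': 4.8}
# 200 cells seen in 4 squares of 0.1 uL each, at a 1:20 dilution:
print(hemocytometer_count(200, 4, 0.1, 20))  # 10000.0 cells/uL
```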
At primary care facilities in these regions, testing may be limited to examination of red cell morphology and manual measurement of hemoglobin, while more complex techniques like manual cell counts and differentials, and sometimes automated cell counts, are performed at district laboratories. Regional and provincial hospitals and academic centres typically have access to automated analyzers. Where laboratory facilities are not available, an estimate of hemoglobin concentration can be obtained by placing a drop of blood on a standardized type of absorbent paper and comparing it to a colour scale.

Quality control

Automated analyzers have to be regularly calibrated. Most manufacturers provide preserved blood with defined parameters and the analyzers are adjusted if the results are outside defined thresholds. To ensure that results continue to be accurate, quality control samples, which are typically provided by the instrument manufacturer, are tested at least once per day. The samples are formulated to provide specific results, and laboratories compare their results against the known values to ensure the instrument is functioning properly. For laboratories without access to commercial quality control material, an Indian regulatory organization recommends running patient samples in duplicate and comparing the results. A moving average measurement, in which the average results for patient samples are measured at set intervals, can be used as an additional quality control technique. Assuming that the characteristics of the patient population remain roughly the same over time, the average should remain constant; large shifts in the average value can indicate instrument problems. The MCHC values are particularly useful in this regard. In addition to analyzing internal quality control samples with known results, laboratories may receive external quality assessment samples from regulatory organizations.
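The moving-average check described above can be sketched in a few lines: the mean of a batch of patient results is compared against a baseline, and a large shift flags possible instrument drift. The tolerance and the figures below are illustrative assumptions, not a standard protocol.

```python
# Minimal moving-average quality control sketch: flag the batch if its
# mean drifts from the baseline by more than a percentage tolerance.
# The 3% tolerance here is an illustrative value.
def moving_average_flag(results, baseline, tolerance_pct=3.0):
    mean = sum(results) / len(results)
    drift_pct = abs(mean - baseline) / baseline * 100
    return mean, drift_pct > tolerance_pct

# A batch of patient MCHC results (g/dL) against a 33.0 baseline:
mean, flagged = moving_average_flag([33.1, 33.4, 32.9, 33.2], baseline=33.0)
print(round(mean, 2), flagged)  # 33.15 False
```

Production implementations (such as Bull's algorithm, widely used for red cell indices) smooth and trim the data rather than taking a plain mean, but the principle is the same.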
While the purpose of internal quality control is to ensure that analyzer results are reproducible within a given laboratory, external quality assessment verifies that results from different laboratories are consistent with each other and with the target values. The expected results for external quality assessment samples are not disclosed to the laboratory. External quality assessment programs have been widely adopted in North America and western Europe, and laboratories are often required to participate in these programs to maintain accreditation. Logistical issues may make it difficult for laboratories in under-resourced areas to implement external quality assessment schemes.

Included tests

The CBC measures the amounts of platelets and red and white blood cells, along with the hemoglobin and hematocrit values. Red blood cell indices—MCV, MCH and MCHC—which describe the size of red blood cells and their hemoglobin content, are reported along with the red blood cell distribution width (RDW), which measures the amount of variation in the sizes of red blood cells. A white blood cell differential, which enumerates the different types of white blood cells, may be performed, and a count of immature red blood cells (reticulocytes) is sometimes included.

Red blood cells, hemoglobin, and hematocrit

[Image caption: An example of CBC results showing a low hemoglobin, MCV, MCH and MCHC. The person was anemic; the cause could be iron deficiency or a hemoglobinopathy.]

Red blood cells deliver oxygen from the lungs to the tissues and on their return carry carbon dioxide back to the lungs where it is exhaled. These functions are mediated by the cells' hemoglobin. The analyzer counts red blood cells, reporting the result in units of 10⁶ cells per microlitre of blood (× 10⁶/μL) or 10¹² cells per litre (× 10¹²/L), and measures their average size, which is called the mean cell volume and expressed in femtolitres or cubic micrometres.
By multiplying the mean cell volume by the red blood cell count, the hematocrit (HCT) or packed cell volume (PCV), a measurement of the percentage of blood that is made up of red blood cells, can be derived; and when the hematocrit is performed directly, the mean cell volume may be calculated from the hematocrit and red blood cell count. Hemoglobin, measured after the red blood cells are lysed, is usually reported in units of grams per litre (g/L) or grams per decilitre (g/dL). Assuming that the red blood cells are normal, there is a constant relationship between hemoglobin and hematocrit: the hematocrit percentage is approximately three times the hemoglobin value in g/dL, plus or minus three. This relationship, called the rule of three, can be used to confirm that CBC results are correct. Two other measurements are calculated from the red blood cell count, the hemoglobin concentration, and the hematocrit: the mean corpuscular hemoglobin and the mean corpuscular hemoglobin concentration. These parameters describe the hemoglobin content of each red blood cell. The MCH and MCHC can be confusing; in essence the MCH is a measure of the average amount of hemoglobin per red blood cell. The MCHC gives the average proportion of the cell that is hemoglobin. The MCH does not take into account the size of the red blood cells whereas the MCHC does. Collectively, the MCV, MCH, and MCHC are referred to as the red blood cell indices. Changes in these indices are visible on the blood smear: red blood cells that are abnormally large or small can be identified by comparison to the sizes of white blood cells, and cells with a low hemoglobin concentration appear pale. Another parameter is calculated from the initial measurements of red blood cells: the red blood cell distribution width or RDW, which reflects the degree of variation in the cells' size. An abnormally low hemoglobin, hematocrit, or red blood cell count indicates anemia.
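The rule of three described above reduces to a one-line sanity check; the function name is our own.

```python
# The "rule of three": for morphologically normal red cells, the
# hematocrit (%) should be roughly three times the hemoglobin (g/dL),
# within about +/- 3 percentage points.
def rule_of_three_ok(hgb_g_dl, hct_pct):
    return abs(hct_pct - 3 * hgb_g_dl) <= 3

print(rule_of_three_ok(15.0, 45.0))  # True  (consistent result)
print(rule_of_three_ok(15.0, 52.0))  # False (warrants investigation)
```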
Anemia is not a diagnosis on its own, but it points to an underlying condition affecting the person's red blood cells. General causes of anemia include blood loss, production of defective red blood cells (ineffective erythropoiesis), decreased production of red blood cells (insufficient erythropoiesis), and increased destruction of red blood cells (hemolytic anemia). Anemia reduces the blood's ability to carry oxygen, causing symptoms like tiredness and shortness of breath. If the hemoglobin level falls below thresholds based on the person's clinical condition, a blood transfusion may be necessary. An increased number of red blood cells, leading to an increase in the hemoglobin and hematocrit, is called polycythemia. Dehydration or use of diuretics can cause a "relative" polycythemia by decreasing the amount of plasma compared to red cells. A true increase in the number of red blood cells, called absolute polycythemia, can occur when the body produces more red blood cells to compensate for chronically low oxygen levels in conditions like lung or heart disease, or when a person has abnormally high levels of erythropoietin, a hormone that stimulates production of red blood cells. In polycythemia vera, the bone marrow produces red cells and other blood cells at an excessively high rate. Evaluation of red blood cell indices is helpful in determining the cause of anemia. If the MCV is low, the anemia is termed microcytic, while anemia with a high MCV is called macrocytic anemia. Anemia with a low MCHC is called hypochromic anemia. If anemia is present but the red blood cell indices are normal, the anemia is considered normochromic and normocytic. The term hyperchromia, referring to a high MCHC, is generally not used. Elevation of the MCHC above the upper reference value is rare, mainly occurring in conditions such as spherocytosis, sickle cell disease and hemoglobin C disease.
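The morphologic classification described above can be sketched as a simple decision on the MCV and MCHC. The numeric cut-offs below (MCV 80–100 fL, MCHC lower limit 32 g/dL) are typical adult values used for illustration; actual limits vary between laboratories.

```python
# Sketch of the MCV/MCHC-based classification of anemia. Cut-offs are
# illustrative typical adult values, not universal reference limits.
def classify_anemia(mcv, mchc, mcv_low=80, mcv_high=100, mchc_low=32):
    if mcv < mcv_low:
        size = "microcytic"
    elif mcv > mcv_high:
        size = "macrocytic"
    else:
        size = "normocytic"
    chromia = "hypochromic" if mchc < mchc_low else "normochromic"
    return size, chromia

print(classify_anemia(mcv=70, mchc=30))  # ('microcytic', 'hypochromic')
print(classify_anemia(mcv=90, mchc=33))  # ('normocytic', 'normochromic')
```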
An elevated MCHC can also be a false result from conditions like red blood cell agglutination (which causes a false decrease in the red blood cell count, elevating the MCHC) or highly elevated amounts of lipids in the blood (which causes a false increase in the hemoglobin result). Microcytic anemia is typically associated with iron deficiency, thalassemia, and anemia of chronic disease, while macrocytic anemia is associated with alcoholism, folate and B12 deficiency, use of some drugs, and some bone marrow diseases. Acute blood loss, hemolytic anemia, bone marrow disorders, and various chronic diseases can result in anemia with a normocytic blood picture. The MCV serves an additional purpose in laboratory quality control. It is relatively stable over time compared to other CBC parameters, so a large change in MCV may indicate that the sample was drawn from the wrong patient. A low RDW has no clinical significance, but an elevated RDW represents increased variation in red blood cell size, a condition known as anisocytosis. Anisocytosis is common in nutritional anemias such as iron deficiency anemia and anemia due to vitamin B12 or folate deficiency, while people with thalassemia may have a normal RDW. Based on the CBC results, further steps can be taken to investigate anemia, such as a ferritin test to confirm the presence of iron deficiency, or hemoglobin electrophoresis to diagnose a hemoglobinopathy such as thalassemia or sickle cell disease.

White blood cells

[Image caption: The white blood cell and platelet counts are markedly increased, and anemia is present. The differential count shows basophilia and the presence of band neutrophils, immature granulocytes and blast cells.]

White blood cells defend against infections and are involved in the inflammatory response. A high white blood cell count, which is called leukocytosis, often occurs in infections, inflammation, and states of physiologic stress.
It can also be caused by diseases that involve abnormal production of blood cells, such as myeloproliferative and lymphoproliferative disorders. A decreased white blood cell count, termed leukopenia, can lead to an increased risk of acquiring infections, and occurs in treatments like chemotherapy and radiation therapy and many conditions that inhibit the production of blood cells. Sepsis is associated with both leukocytosis and leukopenia. The total white blood cell count is usually reported in cells per microlitre of blood (/μL) or 10⁹ cells per litre (× 10⁹/L). In the white blood cell differential, the different types of white blood cells are identified and counted. The results are reported as a percentage and as an absolute number per unit volume. Five types of white blood cells—neutrophils, lymphocytes, monocytes, eosinophils, and basophils—are typically measured. Some instruments report the number of immature granulocytes, which is a classification consisting of precursors of neutrophils; specifically, promyelocytes, myelocytes and metamyelocytes. Other cell types are reported if they are identified in the manual differential. Differential results are useful in diagnosing and monitoring many medical conditions. For example, an elevated neutrophil count (neutrophilia) is associated with bacterial infection, inflammation, and myeloproliferative disorders, while a decreased count (neutropenia) may occur in individuals who are undergoing chemotherapy or taking certain drugs, or who have diseases affecting the bone marrow. Neutropenia can also be caused by some congenital disorders and may occur transiently after viral or bacterial infections in children. People with severe neutropenia and clinical signs of infection are treated with antibiotics to prevent potentially life-threatening disease.
An increased number of band neutrophils—young neutrophils that lack segmented nuclei—or immature granulocytes is termed left shift and occurs in sepsis and some blood disorders, but is normal in pregnancy. An elevated lymphocyte count (lymphocytosis) is associated with viral infection and lymphoproliferative disorders like chronic lymphocytic leukemia; elevated monocyte counts (monocytosis) are associated with chronic inflammatory states; and the eosinophil count is often increased (eosinophilia) in parasitic infections and allergic conditions. An increased number of basophils, termed basophilia, can occur in myeloproliferative disorders like chronic myeloid leukemia and polycythemia vera. The presence of some types of abnormal cells, such as blast cells or lymphocytes with neoplastic features, is suggestive of a hematologic malignancy.

Platelets

Platelets play an essential role in clotting. When the wall of a blood vessel is damaged, platelets adhere to the exposed surface at the site of injury and plug the gap. Simultaneous activation of the coagulation cascade results in the formation of fibrin, which reinforces the platelet plug to create a stable clot. A low platelet count, known as thrombocytopenia, may cause bleeding if severe. It can occur in individuals who are undergoing treatments that suppress the bone marrow, such as chemotherapy or radiation therapy, or taking certain drugs, such as heparin, that can induce the immune system to destroy platelets. Thrombocytopenia is a feature of many blood disorders, like acute leukemia and aplastic anemia, as well as some autoimmune diseases. If the platelet count is extremely low, a platelet transfusion may be performed. Thrombocytosis, meaning a high platelet count, may occur in states of inflammation or trauma, as well as in iron deficiency, and the platelet count may reach exceptionally high levels in people with essential thrombocythemia, a rare blood disease.
The platelet count can be reported in units of cells per microlitre of blood (/μL), 10³ cells per microlitre (× 10³/μL), or 10⁹ cells per litre (× 10⁹/L). The mean platelet volume (MPV) measures the average size of platelets in femtolitres. It can aid in determining the cause of thrombocytopenia; an elevated MPV may occur when young platelets are released into the bloodstream to compensate for increased destruction of platelets, while decreased production of platelets due to dysfunction of the bone marrow can result in a low MPV. The MPV is also useful for differentiating between congenital diseases that cause thrombocytopenia. The immature platelet fraction (IPF) or reticulated platelet count is reported by some analyzers and provides information about the rate of platelet production by measuring the number of immature platelets in the blood.

Other tests

Reticulocyte count

Reticulocytes are immature red blood cells, which, unlike the mature cells, contain RNA. A reticulocyte count is sometimes performed as part of a complete blood count, usually to investigate the cause of a person's anemia or evaluate their response to treatment. Anemia with a high reticulocyte count can indicate that the bone marrow is producing red blood cells at a higher rate to compensate for blood loss or hemolysis, while anemia with a low reticulocyte count may suggest that the person has a condition that reduces the body's ability to produce red blood cells. When people with nutritional anemia are given nutrient supplementation, an increase in the reticulocyte count indicates that their body is responding to the treatment by producing more red blood cells. Hematology analyzers perform reticulocyte counts by staining red blood cells with a dye that binds to RNA and measuring the number of reticulocytes through light scattering or fluorescence analysis.
The test can be performed manually by staining the blood with new methylene blue and counting the percentage of red blood cells containing RNA under the microscope. The reticulocyte count is expressed as an absolute number or as a percentage of red blood cells. Some instruments measure the average amount of hemoglobin in each reticulocyte, a parameter that has been studied as an indicator of iron deficiency in people who have conditions that interfere with standard tests. The immature reticulocyte fraction (IRF) is another measurement produced by some analyzers which quantifies the maturity of reticulocytes: cells that are less mature contain more RNA and thus produce a stronger fluorescent signal. This information can be useful in diagnosing anemias and evaluating red blood cell production following anemia treatment or bone marrow transplantation.

Nucleated red blood cells

During their formation in bone marrow, and in the liver and spleen in fetuses, red blood cells contain a cell nucleus, which is usually absent in the mature cells that circulate in the bloodstream. Nucleated red blood cells are normal in newborn babies, but when detected in children and adults, they indicate an increased demand for red blood cells, which can be caused by bleeding, some cancers and anemia. Most analyzers can detect these cells as part of the differential cell count. High numbers of nucleated red cells can cause a falsely high white cell count, which will require adjusting.

Other parameters

Advanced hematology analyzers generate novel measurements of blood cells which have shown diagnostic significance in research studies but have not yet found widespread clinical use. For example, some types of analyzers produce coordinate readings indicating the size and position of each white blood cell cluster. These parameters (termed cell population data) have been studied as potential markers for blood disorders, bacterial infections and malaria.
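Two of the calculations mentioned above, converting a reticulocyte percentage to an absolute count, and correcting a white cell count for nucleated red cells, can be sketched as follows. The NRBC correction shown is the commonly taught formula (NRBCs reported per 100 white cells); exact handling varies between laboratories and instruments.

```python
# Sketches of two conversions related to the tests described above.

def absolute_reticulocytes(retic_pct, rbc_count):
    """Absolute reticulocyte count from the percentage and the RBC
    count (RBC in 10^12/L gives reticulocytes in the same units)."""
    return retic_pct / 100 * rbc_count

def nrbc_corrected_wbc(measured_wbc, nrbc_per_100_wbc):
    """Correct a WBC count for nucleated red cells counted as white
    cells; a standard correction, but lab practice varies."""
    return measured_wbc * 100 / (100 + nrbc_per_100_wbc)

# 2% reticulocytes with an RBC count of 5.0 x 10^12/L:
print(absolute_reticulocytes(2.0, 5.0))        # 0.1 (x 10^12/L)
# A measured WBC of 10.0 x 10^9/L with 25 NRBCs per 100 WBCs:
print(round(nrbc_corrected_wbc(10.0, 25), 1))  # 8.0 (x 10^9/L)
```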
Analyzers that use myeloperoxidase staining to produce differential counts can measure white blood cells' expression of the enzyme, which is altered in various disorders. Some instruments can report the percentage of red blood cells that are hypochromic in addition to reporting the average MCHC value, or provide a count of fragmented red cells (schistocytes), which occur in some types of hemolytic anemia. Because these parameters are often specific to particular brands of analyzers, it is difficult for laboratories to interpret and compare results.

Reference ranges

The complete blood count is interpreted by comparing the output to reference ranges, which represent the results found in 95% of apparently healthy people. Based on a statistical normal distribution, the tested samples' ranges vary with sex and age. On average, adult females have lower hemoglobin, hematocrit, and red blood cell count values than males; the difference lessens, but is still present, after menopause. CBC results for children and newborn babies differ from those of adults. Newborns' hemoglobin, hematocrit, and red blood cell count are extremely high to compensate for low oxygen levels in the womb and the high proportion of fetal hemoglobin, which is less effective at delivering oxygen to tissues than mature forms of hemoglobin, inside their red blood cells. The MCV is also increased, and the white blood cell count is elevated with a preponderance of neutrophils. The red blood cell count and related values begin to decline shortly after birth, reaching their lowest point at about two months of age and increasing thereafter. The red blood cells of older infants and children are smaller, with a lower MCH, than those of adults. In the pediatric white blood cell differential, lymphocytes often outnumber neutrophils, while in adults neutrophils predominate.
Other differences between populations may affect the reference ranges: for example, people living at higher altitudes have higher hemoglobin, hematocrit, and RBC results, and people of African heritage have lower white blood cell counts on average. The type of analyzer used to run the CBC affects the reference ranges as well. Reference ranges are therefore established by individual laboratories based on their own patient populations and equipment.

Limitations

Some medical conditions or problems with the blood sample may produce inaccurate results. If the sample is visibly clotted, which can be caused by poor phlebotomy technique, it is unsuitable for testing, because the platelet count will be falsely decreased and other results may be abnormal. Samples stored at room temperature for several hours may give falsely high readings for MCV (mean corpuscular volume), because red blood cells swell as they absorb water from the plasma; and platelet and white blood cell differential results may be inaccurate in aged specimens, as the cells degrade over time. Samples drawn from individuals with very high levels of bilirubin or lipids in their plasma (referred to as an icteric sample or a lipemic sample, respectively) may show falsely high readings for hemoglobin, because these substances change the colour and opacity of the sample, which interferes with hemoglobin measurement. This effect can be mitigated by replacing the plasma with saline. Some individuals produce an antibody that causes their platelets to form clumps when their blood is drawn into tubes containing EDTA, the anticoagulant typically used to collect CBC samples. Platelet clumps may be counted as single platelets by automated analyzers, leading to a falsely decreased platelet count. This can be avoided by using an alternative anticoagulant such as sodium citrate or heparin. Another antibody-mediated condition that can affect complete blood count results is red blood cell agglutination.
This phenomenon causes red blood cells to clump together because of antibodies bound to the cell surface. Red blood cell aggregates are counted as single cells by the analyzer, leading to a markedly decreased red blood cell count and hematocrit, and markedly elevated MCV and MCHC (mean corpuscular hemoglobin concentration). Often, these antibodies are only active at room temperature (in which case they are called cold agglutinins), and the agglutination can be reversed by warming the sample to body temperature (37 °C). Samples from people with warm autoimmune hemolytic anemia may exhibit red cell agglutination that does not resolve on warming. While blast and lymphoma cells can be identified in the manual differential, microscopic examination cannot reliably determine the cells' hematopoietic lineage. This information is often necessary for diagnosing blood cancers. After abnormal cells are identified, additional techniques such as immunophenotyping by flow cytometry can be used to identify markers that provide additional information about the cells.

History

Before automated cell counters were introduced, complete blood count tests were performed manually: white and red blood cells and platelets were counted using microscopes. The first person to publish microscopic observations of blood cells was Antonie van Leeuwenhoek, who reported on the appearance of red cells in a 1674 letter to the Proceedings of the Royal Society of London. Jan Swammerdam had described red blood cells some years earlier, but did not publish his findings at the time. Throughout the 18th and 19th centuries, improvements in microscope technology such as achromatic lenses allowed white blood cells and platelets to be counted in unstained samples. The physiologist Karl Vierordt is credited with performing the first blood count. His technique, published in 1852, involved aspirating a carefully measured volume of blood into a capillary tube and spreading it onto a microscope slide coated with egg white.
After the blood dried, he counted every cell on the slide; this process could take more than three hours to complete. The hemocytometer, introduced in 1874 by Louis-Charles Malassez, simplified the microscopic counting of blood cells. Malassez's hemocytometer consisted of a microscope slide containing a flattened capillary tube. Diluted blood was introduced to the capillary chamber by means of a rubber tube attached to one end, and an eyepiece with a scaled grid was attached to the microscope, permitting the microscopist to count the number of cells per volume of blood. In 1877, William Gowers invented a hemocytometer with a built-in counting grid, eliminating the need to produce specially calibrated eyepieces for each microscope. In the 1870s, Paul Ehrlich developed a staining technique using a combination of an acidic and basic dye that could distinguish different types of white blood cells and allow red blood cell morphology to be examined. Dmitri Leonidovich Romanowsky improved on this technique in the 1890s, using a mixture of eosin and aged methylene blue to produce a wide range of hues not present when either of the stains was used alone. This became the basis for Romanowsky staining, the technique still used to stain blood smears for manual review. The first techniques for measuring hemoglobin were devised in the late 19th century, and involved visual comparisons of the colour of diluted blood against a known standard. Attempts to automate this process using spectrophotometry and colorimetry were limited by the fact that hemoglobin is present in the blood in many different forms, meaning that it could not be measured at a single wavelength. In 1920, a method to convert the different forms of hemoglobin to one stable form (cyanmethemoglobin or hemiglobincyanide) was introduced, allowing hemoglobin levels to be measured automatically. 
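The automated measurement just described rests on the Beer–Lambert law: the absorbance of the cyanmethemoglobin solution at 540 nm is proportional to its concentration. A sketch of the calculation, using the commonly cited reference figures (molar absorptivity 11.0 L/(mmol·cm), monomer molecular weight 16.114 g/mmol, a 1:251 dilution); treat these constants as illustrative rather than authoritative:

```python
def hemoglobin_g_dl(absorbance_540, dilution=251, path_cm=1.0):
    """Estimate total hemoglobin from the absorbance of a
    cyanmethemoglobin (HiCN) dilution at 540 nm via Beer-Lambert:
    concentration = absorbance / (epsilon * path length).

    The constants below are the commonly cited reference figures
    for the manual HiCN procedure (illustrative values).
    """
    epsilon = 11.0                        # L / (mmol * cm) at 540 nm
    mmol_per_l = absorbance_540 / (epsilon * path_cm)
    g_per_l = mmol_per_l * 16.114 * dilution  # monomer MW ~16.114 g/mmol
    return g_per_l / 10                   # convert g/L to g/dL

# An absorbance of 0.400 corresponds to roughly 14.7 g/dL:
print(round(hemoglobin_g_dl(0.400), 1))
```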
The cyanmethemoglobin method remains the reference method for hemoglobin measurement and is still used in many automated hematology analyzers. Maxwell Wintrobe is credited with the invention of the hematocrit test. In 1929, he undertook a PhD project at Tulane University to determine normal ranges for red blood cell parameters, and invented a method known as the Wintrobe hematocrit. Hematocrit measurements had previously been described in the literature, but Wintrobe's method differed in that it used a large tube that could be mass-produced to precise specifications, with a built-in scale. The fraction of red blood cells in the tube was measured after centrifugation to determine the hematocrit. The invention of a reproducible method for determining hematocrit values allowed Wintrobe to define the red blood cell indices. Research into automated cell counting began in the early 20th century. A method developed in 1928 used the amount of light transmitted through a diluted blood sample, as measured by photometry, to estimate the red blood cell count, but this proved inaccurate for samples with abnormal red blood cells. Other unsuccessful attempts, in the 1930s and 1940s, involved photoelectric detectors attached to microscopes, which would count cells as they were scanned. In the late 1940s, Wallace H. Coulter, motivated by a need for better red blood cell counting methods following the bombing of Hiroshima and Nagasaki, attempted to improve on photoelectric cell counting techniques. His research was aided by his brother, Joseph R. Coulter, in a basement laboratory in Chicago. Their results using photoelectric methods were disappointing, and in 1948, after reading a paper relating the conductivity of blood to its red blood cell concentration, Wallace devised the Coulter principle—the theory that a cell suspended in a conductive medium generates a drop in current proportional to its size as it passes through an aperture.
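In software terms, the Coulter principle amounts to detecting pulses in the measured current and treating each pulse's height as proportional to the cell's volume. A toy sketch (the signal and threshold are invented for illustration, not taken from any real instrument):

```python
def count_cells(signal, threshold):
    """Count pulses above a threshold in a sampled impedance signal.
    Each excursion above the threshold is counted as one cell; the
    peak height of each pulse is taken as proportional to cell volume."""
    counts, peaks = 0, []
    in_pulse, peak = False, 0.0
    for sample in signal:
        if sample > threshold:
            if not in_pulse:
                in_pulse, peak = True, sample
                counts += 1
            else:
                peak = max(peak, sample)
        elif in_pulse:
            peaks.append(peak)
            in_pulse = False
    if in_pulse:          # signal ended mid-pulse
        peaks.append(peak)
    return counts, peaks

# Baseline noise around 0.1 with three cell pulses of differing size:
signal = [0.1, 0.1, 0.9, 1.2, 0.1, 0.5, 0.1, 0.1, 2.0, 2.4, 1.8, 0.1]
print(count_cells(signal, 0.3))  # (3, [1.2, 0.5, 2.4])
```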
That October, Wallace built a counter to demonstrate the principle. Owing to financial constraints, the aperture was made by burning a hole through a piece of cellophane from a cigarette package. Wallace filed a patent for the technique in 1949, and in 1951 applied to the Office of Naval Research to fund the development of the Coulter counter. Wallace's patent application was granted in 1953, and after improvements to the aperture and the introduction of a mercury manometer to provide precise control over sample size, the brothers founded Coulter Electronics Inc. in 1958 to market their instruments. The Coulter counter was initially designed for counting red blood cells, but with later modifications it proved effective for counting white blood cells. Coulter counters were widely adopted by medical laboratories. The first analyzer able to produce multiple cell counts simultaneously was an instrument released by Technicon in 1965. It achieved this by partitioning blood samples into two channels: one for counting red and white blood cells and one for measuring hemoglobin. However, the instrument was unreliable and difficult to maintain. In 1968, the Coulter Model S analyzer was released and gained widespread use. Similarly to the Technicon instrument, it used two different reaction chambers, one of which was used for the red cell count, and one of which was used for the white blood cell count and hemoglobin determination. The Model S also determined the mean cell volume using impedance measurements, which allowed the red blood cell indices and hematocrit to be derived. Automated platelet counts were introduced in 1970 with Technicon's Hemalog-8 instrument and were adopted by Coulter's S Plus series analyzers in 1980. After basic cell counting had been automated, the white blood cell differential remained a challenge. Throughout the 1970s, researchers explored two methods for automating the differential count: digital image processing and flow cytometry.
Using technology developed in the 1950s and 60s to automate the reading of Pap smears, several models of image processing analyzers were produced. These instruments would scan a stained blood smear to find cell nuclei, then take a higher resolution snapshot of the cell to analyze it through densitometry. They were expensive, slow, and did little to reduce workload in the laboratory because they still required blood smears to be prepared and stained, so flow cytometry-based systems became more popular, and by 1990, no digital image analyzers were commercially available in the United States or western Europe. These techniques enjoyed a resurgence in the 2000s with the introduction of more advanced image analysis platforms using artificial neural networks. Early flow cytometry devices shot beams of light at cells in specific wavelengths and measured the resulting absorbance, fluorescence or light scatter, collecting information about the cells' features and allowing cellular contents such as DNA to be quantified. One such instrument—the Rapid Cell Spectrophotometer, developed by Louis Kamentsky in 1965 to automate cervical cytology—could generate blood cell scattergrams using cytochemical staining techniques. Leonard Ornstein, who had helped to develop the staining system on the Rapid Cell Spectrophotometer, and his colleagues later created the first commercial flow cytometric white blood cell differential analyzer, the Hemalog D. Introduced in 1974, this analyzer used light scattering, absorbance and cell staining to identify the five normal white blood cell types in addition to "large unidentified cells", a classification that usually consisted of atypical lymphocytes or blast cells. The Hemalog D could count 10,000 cells in one run, a marked improvement over the manual differential. In 1981, Technicon combined the Hemalog D with the Hemalog-8 analyzer to produce the Technicon H6000, the first combined complete blood count and differential analyzer. 
This analyzer was unpopular with hematology laboratories because it was labour-intensive to operate, but in the late 1980s to early 1990s similar systems were widely produced by other manufacturers such as Sysmex, Abbott, Roche and Beckman Coulter.
Biology and health sciences
Diagnostics
Health
https://en.wikipedia.org/wiki/Bandicoot
Bandicoot
Bandicoots are a group of more than 20 species of small to medium-sized, terrestrial, largely nocturnal marsupial omnivores in the order Peramelemorphia. They are endemic to the Australia–New Guinea region, including the Bismarck Archipelago to the east and Seram and Halmahera to the west.

Etymology

The bandicoot is a member of the order Peramelemorphia, and the word "bandicoot" is often used informally to refer to any peramelemorph, such as the bilby. The term originally referred to the unrelated Indian bandicoot rat, from the Telugu word pandikokku (పందికొక్కు), in which pandi means pig and kokku means rat.

Characteristics

Bandicoots have V-shaped faces ending in prominent, proboscis-like noses. These noses make them, along with bilbies, similar in appearance to elephant shrews and extinct leptictids, though they are only distantly related to both mammal groups. With their well-attuned snouts and sharp claws, bandicoots are fossorial diggers. They have small but fine teeth that allow them to easily chew their food. Like most marsupials, male bandicoots have bifurcated penises. The embryos of bandicoots have a chorioallantoic placenta that connects them to the uterine wall, in addition to the choriovitelline placenta that is common to all marsupials. However, the chorioallantoic placenta is small compared to those of the Placentalia, and lacks chorionic villi. Bandicoots vary considerably in length and weight between species. A bandicoot has a long, pointed snout, large ears, a short body, and a long tail. Its body is covered with fur that can be brown, black, golden, white, or grey in colour. Bandicoots have strong hind legs well adapted for jumping. Bandicoots also have low body temperatures and low basal metabolic rates, which aid their survival in hot and dry climates. They also have a low rate of evaporative water loss and effective panting mechanisms, which further aid their survival in hotter temperatures.
Classification

Classification within the Peramelemorphia was previously thought to be straightforward, with two families in the order—the short-legged and mostly herbivorous bandicoots, and the longer-legged, nearly carnivorous bilbies. In recent years, however, the situation has clearly become more complex. First, the bandicoots of the New Guinean and far-northern Australian rainforests were deemed distinct from all other bandicoots and were grouped together in the separate family Peroryctidae. More recently, the bandicoot families were reunited in the Peramelidae, with the New Guinean species split into four genera in two subfamilies, Peroryctinae and Echymiperinae, while the "true bandicoots" occupy the subfamily Peramelinae. The only exception is the now-extinct pig-footed bandicoot, which has been given its own family, Chaeropodidae.

Order Peramelemorphia
  Superfamily Perameloidea
    Unclassified family
      Genus †Galadi: 4 species
      Genus †Bulungu: 3 species
      Genus †Madju: 2 species
    Family Thylacomyidae
      Genus Macrotis: 2 species
      Genus †Ischnodon: 1 species
      Genus †Liyamayi: 1 extinct species
    Family †Chaeropodidae: pig-footed bandicoot
      Genus †Chaeropus: 1 species
    Family Peramelidae
      Subfamily Peramelinae
        Genus Isoodon: short-nosed bandicoots, 3 species
        Genus Perameles: long-nosed bandicoots, 3 extant species
      Subfamily Peroryctinae
        Genus Peroryctes: New Guinean long-nosed bandicoots, 2 species
      Subfamily Echymiperinae
        Genus Echymipera: New Guinean spiny bandicoots, 5 species
        Genus Microperoryctes: New Guinean mouse bandicoots, 5 species
        Genus Rhynchomeles: Seram bandicoot, 1 species
  Superfamily †Yaraloidea
    Family †Yaralidae
      Genus †Yarala: 2 species

Vernacular names

The name bandicoot is an Anglicised version of a word from the Telugu language of South India which translates as 'pig-rat'. What are now called bandicoots are not found in India, and bandicoot was originally applied to completely unrelated mammals—several species of large rats (rodents).
Today, these species, belonging to the genera Bandicota and Nesokia, are referred to as bandicoot rats. Robert Blust reconstructs the form *mansar or *mansər 'bandicoot' for Proto-Central–Eastern Malayo-Polynesian (i.e., the reconstructed most recent common ancestor of the Central–Eastern Malayo-Polynesian languages) on the basis of related words in Oceanic languages such as Motu and Fijian, but the validity of this reconstruction is doubted by Schapper (2011). It is known as aine in the Abinomn language of Papua, Indonesia. Bandicoots are known by different names among the indigenous peoples of the Australia–New Guinea region. For example, the Kaurna people refer to the southern brown bandicoot as the bung or the marti.
Biology and health sciences
Marsupials
Animals
https://en.wikipedia.org/wiki/Kingsnake
Kingsnake
Kingsnakes are New World colubrid snakes of the genus Lampropeltis, which includes 26 species. Among these, about 45 subspecies are recognized. They are nonvenomous and ophiophagous in diet.

Description

Kingsnakes vary widely in size and coloration. They can be as small as 24" (61 cm) or as long as 60" (152 cm). Some kingsnakes are colored in muted browns to black, while others are brightly marked in white, reds, yellows, grays, and lavenders that form rings, longitudinal stripes, speckles, and saddle-shaped bands. Most kingsnakes have quite vibrant patterns. Some species, such as the scarlet kingsnake, Mexican milk snake, and red milk snake, have coloration and patterning that can cause them to be confused with the highly venomous coral snakes. One of the mnemonic rhymes to help people distinguish between coral snakes and their nonvenomous lookalikes in the United States is "red on black, a friend of Jack; red on yellow, kill a fellow". Other variations include "red on yellow kill a fellow, red on black venom lack", and, referencing the order of traffic lights, "yellow, red, stop!" All these mnemonics apply only to the three species of coral snakes native to the southern United States: Micrurus fulvius (the eastern or common coral snake), Micrurus tener (the Texas coral snake), and Micruroides euryxanthus (the Arizona coral snake). Coral snakes found in other parts of the world can have distinctly different patterns, such as having red bands touching black bands, having only pink and blue bands, or having no bands at all.

Etymology

Lampropeltis combines the Greek words λαμπρός (lampros, "shiny") and πέλτη (peltē, "shield") with a Latin suffix. The name is given to them in reference to their smooth, enamel-like dorsal scales. The "king" in the common name (as with the king cobra) refers to its preying on other snakes.

Taxonomy

Taxonomic reclassification of kingsnakes, as with many herpetiles and other animals, is an ongoing process.
Unexpected hybridization between kingsnake species and subspecies with adjacent home territories is not uncommon, creating new color morphs and variations and posing further classification challenges for taxonomists. Different researchers often "agree to disagree": one may cite evidence that a group of wild snakes constitutes an independent species, while another sets out to show that the group is merely a regional subspecies. In the case of L. catalinensis, for example, only a single specimen is known, so its classification is not necessarily definitive; this individual could be the lone uniquely colored snake from a more uniformly colored litter, or the one documented example of a presently unknown, localized subspecies. The classification of the genus thus continues to evolve.

Range

Kingsnakes are native to North America, where they are found all over the United States and into Mexico. This genus has adapted to a wide variety of habitats, including tropical forests, shrublands, and deserts. As a whole, kingsnakes are found coast to coast across North America, with some as far north as Montana, North Dakota, New Jersey, Illinois and Ohio; south of those areas, kingsnakes can be found in nearly every corner of the lower 48 United States. Kingsnakes are also found virtually coast to coast across Mexico, all the way down to the Mexico–Guatemala border. Farther south, milk snakes, such as the Honduran milk snake, become the predominant kingsnakes in Central America.

Predators

Kingsnakes are often preyed upon by large vertebrates, such as birds of prey. Tarantulas also sometimes prey on them; however, a considerable threat also comes from other kingsnakes. All species of kingsnakes are known snake- and reptile-eaters, and are likely to prey on their local competitors when the opportunity arises.

Behavior and diet

Kingsnakes are primarily terrestrial, but they are also known to be capable climbers and swimmers.
Kingsnakes use constriction to kill their prey and tend to be opportunistic in their diet. They are known to seek out and eat other snakes (ophiophagy), including venomous snakes such as rattlesnakes, cottonmouths, and copperheads. Known non-venomous prey species of the kingsnake include gopher snakes, corn snakes, hognose snakes, bullsnakes, garter snakes, rosy boas, water snakes, and brown snakes. Kingsnakes also eat many species of lizards, rodents, birds, and eggs. The common kingsnake is known to be immune to the venom of other snakes and does eat rattlesnakes, but it is not necessarily immune to the venom of snakes from different localities. Kingsnakes such as the California kingsnake can exert twice as much constriction force relative to body size as rat snakes and pythons. Scientists believe that such strong coils may be an adaptation to eating snakes and other reptile prey, which can endure lower blood-oxygen levels before asphyxiating.

List of kingsnake species and subspecies

Kingsnake species and subspecies include (listed here alphabetically by specific and subspecific name):

Guatemalan milk snake, Lampropeltis abnorma (Bocourt, 1886)
Gray-banded kingsnake, Lampropeltis alterna (A. E. Brown, 1901)
Mexican milk snake, Lampropeltis annulata Kennicott, 1860
California kingsnake, Lampropeltis californiae (Blainville, 1835)
  Mexican black kingsnake, L. c. nigrita Zweifel & Norris, 1955
Prairie kingsnake, Lampropeltis calligaster (Harlan, 1827)
Santa Catalina Island kingsnake, Lampropeltis catalinensis Van Denburgh & Slevin, 1921
Scarlet kingsnake or scarlet milk snake, Lampropeltis elapsoides (Holbrook, 1838)
Short-tailed snake, Lampropeltis extenuata (R.E. Brown, 1890)
Central Plains milk snake, Lampropeltis gentilis (Baird & Girard, 1853)
Common kingsnake, Lampropeltis getula (Linnaeus, 1766)
  Brooks's kingsnake, L. g. brooksi Barbour, 1919
  Florida kingsnake, L. g. floridana (Blanchard, 1919)
  Eastern kingsnake, L. g. getula (Linnaeus, 1766)
  Apalachicola Lowlands kingsnake, L. g. meansi Krysko & Judd, 2006
Greer's kingsnake, Lampropeltis greeri (Webb, 1961)
Speckled kingsnake, Lampropeltis holbrooki Stejneger, 1902
Madrean mountain kingsnake, Lampropeltis knoblochi Taylor, 1940
Nuevo León kingsnake, Lampropeltis leonis (Günther, 1893)
Mexican kingsnake, Lampropeltis mexicana (Garman, 1884)
Ecuadorian milk snake, Lampropeltis micropholis Cope, 1860
Black kingsnake, Lampropeltis nigra (Yarrow, 1882)
South Florida mole kingsnake, Lampropeltis occipitolineata Price, 1987
Atlantic Central American milk snake, Lampropeltis polyzona Cope, 1860
Arizona mountain kingsnake, Lampropeltis pyromelana (Cope, 1866)
  Utah mountain kingsnake, L. p. infralabialis W. Tanner, 1953
  Arizona mountain kingsnake, L. p. pyromelana (Cope, 1866)
Mole kingsnake, Lampropeltis rhombomaculata (Holbrook, 1840)
Ruthven's kingsnake, Lampropeltis ruthveni (Blanchard, 1920)
Desert kingsnake, Lampropeltis splendida (Baird & Girard, 1853)
Milk snake, Lampropeltis triangulum (Lacépède, 1789)
Lampropeltis webbi Bryson, Dixon & Lazcano, 2005
California mountain kingsnake, Lampropeltis zonata (Lockington, 1876 ex Blainville, 1835)
  San Pedro kingsnake, L. z. agalma (Van Denburgh & Slevin, 1923)
  Todos Santos Island kingsnake, L. z. herrerae (Van Denburgh & Slevin, 1923)
  Sierra Nevada mountain kingsnake, L. z. multicincta (Yarrow, 1882)
  Coast Ranges mountain kingsnake, L. z. multifasciata (Bocourt, 1886)
  San Bernardino mountain kingsnake, L. z. parvirubra Zweifel, 1952
  San Diego mountain kingsnake, L. z. pulchra Zweifel, 1952
  Saint Helena mountain kingsnake, L. z. zonata (Lockington, 1876 ex Blainville, 1835)

Additionally, Pyron and Burbrink have argued that the short-tailed snake (Stilosoma extenuatum) (Brown, 1890) should be included in Lampropeltis.
Biology and health sciences
Snakes
Animals
https://en.wikipedia.org/wiki/Clinical%20trial
Clinical trial
Clinical trials are prospective biomedical or behavioral research studies on human participants designed to answer specific questions about biomedical or behavioral interventions, including new treatments (such as novel vaccines, drugs, dietary choices, dietary supplements, and medical devices) and known interventions that warrant further study and comparison. Clinical trials generate data on dosage, safety and efficacy. They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial—their approval does not mean the therapy is 'safe' or effective, only that the trial may be conducted. Depending on product type and development stage, investigators initially enroll volunteers or patients into small pilot studies, and subsequently conduct progressively larger scale comparative studies. Clinical trials can vary in size and cost, and they can involve a single research center or multiple centers, in one country or in multiple countries. Clinical study design aims to ensure the scientific validity and reproducibility of the results. Costs for clinical trials can range into the billions of dollars per approved drug, and the complete trial process to approval may require 7–15 years. The sponsor may be a governmental organization or a pharmaceutical, biotechnology or medical-device company. Certain functions necessary to the trial, such as monitoring and lab work, may be managed by an outsourced partner, such as a contract research organization or a central laboratory. Only 10 percent of all drugs started in human clinical trials become approved drugs.

Overview

Trials of drugs

Some clinical trials involve healthy subjects with no pre-existing medical conditions. Other clinical trials pertain to people with specific health conditions who are willing to try an experimental treatment.
Pilot experiments are conducted to gain insights for design of the clinical trial to follow. There are two goals to testing medical treatments: to learn whether they work well enough, called "efficacy" or "effectiveness", and to learn whether they are safe enough, called "safety". Neither is an absolute criterion; both safety and efficacy are evaluated relative to how the treatment is intended to be used, what other treatments are available, and the severity of the disease or condition. The benefits must outweigh the risks. For example, many drugs to treat cancer have severe side effects that would not be acceptable for an over-the-counter pain medication, yet the cancer drugs have been approved since they are used under a physician's care and are used for a life-threatening condition. In the US, the elderly constitute 14% of the population but consume over one-third of drugs. People over 55 (or a similar cutoff age) are often excluded from trials because their greater health issues and drug use complicate data interpretation, and because they have different physiological capacity than younger people. Children and people with unrelated medical conditions are also frequently excluded. Pregnant women are often excluded due to potential risks to the fetus. The sponsor designs the trial in coordination with a panel of expert clinical investigators, including what alternative or existing treatments to compare to the new drug and what type(s) of patients might benefit. If the sponsor cannot obtain enough test subjects at one location, investigators at other locations are recruited to join the study. During the trial, investigators recruit subjects with the predetermined characteristics, administer the treatment(s) and collect data on the subjects' health for a defined time period.
Data include measurements such as vital signs, concentration of the study drug in the blood or tissues, changes to symptoms, and whether improvement or worsening of the condition targeted by the study drug occurs. The researchers send the data to the trial sponsor, who then analyzes the pooled data using statistical tests. Examples of clinical trial goals include assessing the safety and relative effectiveness of a medication or device:

On a specific kind of patient
At varying dosages
For a new indication
Evaluation for improved efficacy in treating a condition as compared to the standard therapy for that condition
Evaluation of the study drug or device relative to two or more already approved/common interventions for that condition

While most clinical trials test one alternative to the novel intervention, some expand to three or four and may include a placebo. Except for small, single-location trials, the design and objectives are specified in a document called a clinical trial protocol. The protocol is the trial's "operating manual" and ensures all researchers perform the trial in the same way on similar subjects and that the data is comparable across all subjects. As a trial is designed to test hypotheses and rigorously monitor and assess outcomes, it can be seen as an application of the scientific method, specifically the experimental step. The most common clinical trials evaluate new pharmaceutical products, medical devices, biologics, diagnostic assays, psychological therapies, or other interventions. Clinical trials may be required before a national regulatory authority approves marketing of the innovation.

Trials of devices

Similarly to drugs, manufacturers of medical devices in the United States are required to conduct clinical trials for premarket approval. Device trials may compare a new device to an established therapy, or may compare similar devices to each other.
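The overview above notes that the sponsor analyzes the pooled data using statistical tests. As a hedged illustration of one such test, a two-arm comparison of a continuous endpoint might begin with Welch's t-statistic (the data and endpoint here are invented; real trial analyses follow a prespecified statistical plan):

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for comparing two independent samples,
    e.g. an endpoint measured in the treatment vs the control arm."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Invented blood-pressure reductions (mmHg) in two trial arms:
treatment = [12, 9, 14, 11, 10, 13]
control = [4, 6, 5, 3, 7, 5]
print(round(welch_t(treatment, control), 2))
```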
An example of the former in the field of vascular surgery is the Open versus Endovascular Repair (OVER) trial for the treatment of abdominal aortic aneurysm, which compared the older open aortic repair technique to the newer endovascular aneurysm repair device. Examples of the latter are clinical trials on mechanical devices used in the management of adult female urinary incontinence.

Trials of procedures

Similarly to drugs, medical or surgical procedures may be subjected to clinical trials, such as comparing different surgical approaches in treatment of fibroids for subfertility. However, when clinical trials are unethical or logistically impossible in the surgical setting, case-control studies may be used instead.

Patient and public involvement

Besides being participants in a clinical trial, members of the public can actively collaborate with researchers in designing and conducting clinical research. This is known as patient and public involvement (PPI). Public involvement is a working partnership between patients, caregivers, people with lived experience, and researchers to shape and influence what is researched and how. PPI can improve the quality of research and make it more relevant and accessible. People with current or past experience of illness can provide a different perspective than professionals and complement their knowledge. Through their personal knowledge they can identify research topics that are relevant and important to those living with an illness or using a service. They can also help to make the research more grounded in the needs of the specific communities they are part of. Public contributors can also ensure that the research is presented in plain language that is clear to the wider society and the specific groups it is most relevant for.
History Development Although early medical experimentation was often performed, the use of a control group to provide an accurate comparison for the demonstration of the intervention's efficacy was generally lacking. For instance, Lady Mary Wortley Montagu, who campaigned for the introduction of inoculation (then called variolation) to prevent smallpox, arranged for seven prisoners who had been sentenced to death to undergo variolation in exchange for their lives. Although they survived and did not contract smallpox, there was no control group to assess whether this result was due to the inoculation or some other factor. Similar experiments performed by Edward Jenner on his smallpox vaccine were equally conceptually flawed. The first proper clinical trial was conducted by the Scottish physician James Lind. The disease scurvy, now known to be caused by a Vitamin C deficiency, would often have terrible effects on the welfare of the crew of long-distance ocean voyages. In 1740, the catastrophic result of Anson's circumnavigation attracted much attention in Europe; out of 1900 men, 1400 had died, most of them allegedly from having contracted scurvy. John Woodall, an English military surgeon of the British East India Company, had recommended the consumption of citrus fruit from the 17th century, but its use did not become widespread. Lind conducted the first systematic clinical trial in 1747. He included a dietary supplement of an acidic quality in the experiment after two months at sea, when the ship was already afflicted with scurvy. He divided twelve scorbutic sailors into six groups of two. They all received the same diet but, in addition, group one was given a quart of cider daily, group two twenty-five drops of elixir of vitriol (sulfuric acid), group three six spoonfuls of vinegar, group four half a pint of seawater, group five received two oranges and one lemon, and the last group a spicy paste plus a drink of barley water.
The treatment of group five stopped after six days when they ran out of fruit, but by then one sailor was fit for duty while the other had almost recovered. Apart from that, only group one also showed some effect of its treatment. Each year, May 20 is celebrated as Clinical Trials Day in honor of Lind's research. After 1750 the discipline began to take its modern shape. The English doctor John Haygarth demonstrated the importance of a control group for the correct identification of the placebo effect in his celebrated study of the ineffective remedy called Perkin's tractors. Further work in that direction was carried out by the eminent physician Sir William Gull, 1st Baronet in the 1860s. Frederick Akbar Mahomed (d. 1884), who worked at Guy's Hospital in London, made substantial contributions to the process of clinical trials, where "he separated chronic nephritis with secondary hypertension from what we now term essential hypertension. He also founded the Collective Investigation Record for the British Medical Association; this organization collected data from physicians practicing outside the hospital setting and was the precursor of modern collaborative clinical trials." Modern trials Ideas of Sir Ronald A. Fisher still play a role in clinical trials. While working for the Rothamsted experimental station in the field of agriculture, Fisher developed his Principles of experimental design in the 1920s as an accurate methodology for the proper design of experiments. 
Among his major ideas was the importance of randomization—the random assignment of individuals to different groups for the experiment; replication—to reduce uncertainty, measurements should be repeated and experiments replicated to identify sources of variation; blocking—the arrangement of experimental units into groups of units that are similar to each other, thus reducing irrelevant sources of variation; and the use of factorial experiments—efficient at evaluating the effects and possible interactions of several independent factors. Of these, blocking and factorial design are seldom applied in clinical trials, because the experimental units are human subjects and there is typically only one independent intervention: the treatment. The British Medical Research Council officially recognized the importance of clinical trials from the 1930s. The council established the Therapeutic Trials Committee to advise and assist in the arrangement of properly controlled clinical trials on new products that seem likely on experimental grounds to have value in the treatment of disease. The first randomised curative trial was carried out at the MRC Tuberculosis Research Unit by Sir Geoffrey Marshall (1887–1982). The trial, carried out between 1946 and 1947, aimed to test the efficacy of the chemical streptomycin for curing pulmonary tuberculosis. The trial was both double-blind and placebo-controlled. The methodology of clinical trials was further developed by Sir Austin Bradford Hill, who had been involved in the streptomycin trials. From the 1920s, Hill applied statistics to medicine, attending the lectures of renowned mathematician Karl Pearson, among others. He became famous for a landmark study carried out in collaboration with Richard Doll on the correlation between smoking and lung cancer.
They carried out a case-control study in 1950, which compared lung cancer patients with matched controls, and also began a sustained long-term prospective study into the broader issue of smoking and health, which involved studying the smoking habits and health of more than 30,000 doctors over a period of several years. His certificate for election to the Royal Society called him "...the leader in the development in medicine of the precise experimental methods now used nationally and internationally in the evaluation of new therapeutic and prophylactic agents." The acronyms used in the titling of clinical trials are often contrived, and have been the subject of derision. Types Clinical trials are classified by the research objective created by the investigators. In an observational study, the investigators observe the subjects and measure their outcomes. The researchers do not actively manage the study. In an interventional study, the investigators give the research subjects an experimental drug, surgical procedure, use of a medical device, diagnostic or other intervention to compare the treated subjects with those receiving no treatment or the standard treatment. Then the researchers assess how the subjects' health changes. Trials are classified by their purpose. After approval for human research is granted to the trial sponsor, the U.S. Food and Drug Administration (FDA) organizes and monitors the results of trials according to type:
Prevention trials look for ways to prevent disease in people who have never had the disease or to prevent a disease from returning. These approaches may include drugs, vitamins or other micronutrients, vaccines, or lifestyle changes.
Screening trials test for ways to identify certain diseases or health conditions.
Diagnostic trials are conducted to find better tests or procedures for diagnosing a particular disease or condition.
Treatment trials test experimental drugs, new combinations of drugs, or new approaches to surgery or radiation therapy.
Quality of life trials (supportive care trials) evaluate how to improve comfort and quality of care for people with a chronic illness.
Genetic trials are conducted to assess how accurately genetic testing predicts whether a person is more or less likely to develop a disease.
Epidemiological trials have the goal of identifying the general causes, patterns or control of diseases in large numbers of people.
Compassionate use trials or expanded access trials provide partially tested, unapproved therapeutics to a small number of patients who have no other realistic options. Usually, this involves a disease for which no effective therapy has been approved, or a patient who has already failed all standard treatments and whose health is too compromised to qualify for participation in randomized clinical trials. Usually, case-by-case approval must be granted by both the FDA and the pharmaceutical company for such exceptions.
Fixed trials consider existing data only during the trial's design, do not modify the trial after it begins, and do not assess the results until the study is completed. Adaptive clinical trials use existing data to design the trial, and then use interim results to modify the trial as it proceeds. Modifications include dosage, sample size, drug undergoing trial, patient selection criteria and "cocktail" mix. Adaptive trials often employ a Bayesian experimental design to assess the trial's progress. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. The aim is to more quickly identify drugs that have a therapeutic effect and to zero in on patient populations for whom the drug is appropriate.
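The Bayesian interim analysis mentioned above can be sketched with a Beta-Binomial model. This is a minimal illustration, not taken from any specific trial design; the arm data, prior, and threshold are assumed for the example:

```python
import random

def posterior_prob_better(successes, failures, threshold=0.5,
                          a0=1.0, b0=1.0, n_samples=100_000, seed=0):
    """Estimate P(response rate > threshold) under a Beta-Binomial model.

    Prior Beta(a0, b0); posterior Beta(a0 + successes, b0 + failures).
    Sampling from the posterior approximates the probability a Bayesian
    interim analysis might use to continue or drop an arm.
    """
    rng = random.Random(seed)
    a, b = a0 + successes, b0 + failures
    hits = sum(rng.betavariate(a, b) > threshold for _ in range(n_samples))
    return hits / n_samples

# Hypothetical interim look: 14 responders out of 20 patients on one arm.
p = posterior_prob_better(14, 6)
print(f"P(rate > 0.5) ~ {p:.3f}")
```

With 14 responders in 20 patients, the posterior Beta(15, 7) puts most of its mass above 0.5, so the arm would likely be continued under such a rule.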
Clinical trials are conducted typically in four phases, with each phase using different numbers of subjects and having a different purpose, focused on identifying a specific effect. Phases Clinical trials involving new drugs are commonly classified into five phases. Each phase of the drug approval process is treated as a separate clinical trial. The drug development process will normally proceed through phases I–IV over many years, frequently involving a decade or longer. If the drug successfully passes through phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. Phase IV trials are performed after the newly approved drug, diagnostic or device is marketed, providing assessment about risks, benefits, or best uses.
https://en.wikipedia.org/wiki/Carbon%20disulfide
Carbon disulfide
Carbon disulfide (also spelled carbon disulphide) is an inorganic compound with the chemical formula CS2 and structure S=C=S. It is also considered the anhydride of thiocarbonic acid. It is a colorless, flammable, neurotoxic liquid that is used as a building block in organic synthesis. Pure carbon disulfide has a pleasant, ether- or chloroform-like odor, but commercial samples are usually yellowish and are typically contaminated with foul-smelling impurities. History In 1796, the German chemist Wilhelm August Lampadius (1772–1842) first prepared carbon disulfide by heating pyrite with moist charcoal. He called it "liquid sulfur" (flüssig Schwefel). The composition of carbon disulfide was finally determined in 1813 by the team of the Swedish chemist Jöns Jacob Berzelius (1779–1848) and the Swiss-British chemist Alexander Marcet (1770–1822). Their analysis was consistent with an empirical formula of CS2. Occurrence, manufacture, properties Small amounts of carbon disulfide are released by volcanic eruptions and marshes. CS2 was once manufactured by combining carbon (or coke) and sulfur at 800–1000 °C. C + 2S → CS2 A lower-temperature reaction, requiring only 600 °C, utilizes natural gas as the carbon source in the presence of silica gel or alumina catalysts: 2 CH4 + S8 → 2 CS2 + 4 H2S The reaction is analogous to the combustion of methane. Global production/consumption of carbon disulfide is approximately one million tonnes, with China consuming 49%, followed by India at 13%, mostly for the production of rayon fiber. United States production in 2007 was 56,000 tonnes. Solvent Carbon disulfide can dissolve a variety of nonpolar chemicals including phosphorus, sulfur, selenium, bromine, iodine, fats, resins, rubber, and asphalt. Extraterrestrial In March 2024, traces of CS2 were likely detected in the atmosphere of the temperate mini-Neptune planet TOI-270 d by the James Webb Space Telescope.
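The stoichiometry of the methane route above (2 CH4 + S8 → 2 CS2 + 4 H2S) fixes the theoretical mass yield of CS2 at one mole per mole of methane, which a short calculation can confirm; the helper functions are an illustrative sketch:

```python
# Molar masses in g/mol (standard atomic weights).
M = {"C": 12.011, "H": 1.008, "S": 32.06}

def molar_mass(formula):
    """Molar mass from a {element: count} dict, e.g. {"C": 1, "H": 4}."""
    return sum(M[el] * n for el, n in formula.items())

CH4 = molar_mass({"C": 1, "H": 4})   # ~16.04 g/mol
CS2 = molar_mass({"C": 1, "S": 2})   # ~76.13 g/mol

def cs2_yield_kg(methane_kg):
    """Theoretical CS2 mass (kg) from a given mass of methane, sulfur in excess.

    2 CH4 + S8 -> 2 CS2 + 4 H2S: one mole of CS2 per mole of CH4.
    """
    return methane_kg * CS2 / CH4

print(f"{cs2_yield_kg(1.0):.2f} kg CS2 per kg CH4")  # about 4.75 kg
```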
Reactions Combustion of CS2 affords sulfur dioxide according to this ideal stoichiometry: CS2 + 3O2 → CO2 + 2SO2 With nucleophiles CS2 reacts readily with nucleophiles. For example, amines afford dithiocarbamates: 2R2NH + CS2 → [R2NH2+][R2NCS2−] Xanthates form similarly from alkoxides: RONa + CS2 → [Na+][ROCS2−] This reaction is the basis of the manufacture of regenerated cellulose, the main ingredient of viscose, rayon, and cellophane. Both xanthates and the related thioxanthates (derived from treatment of CS2 with sodium thiolates) are used as flotation agents in mineral processing. Upon treatment with sodium sulfide, carbon disulfide affords trithiocarbonate: Na2S + CS2 → [Na+]2[CS32−] Carbon disulfide does not hydrolyze readily, although the process is catalyzed by the enzyme carbon disulfide hydrolase. Compared to the isoelectronic carbon dioxide, CS2 is a weaker electrophile. However, while reactions of nucleophiles with CO2 are highly reversible and products are isolated only with very strong nucleophiles, the reactions with CS2 are thermodynamically more favored, allowing the formation of products with less reactive nucleophiles. Reduction Reduction of carbon disulfide with sodium affords sodium 1,3-dithiole-2-thione-4,5-dithiolate together with sodium trithiocarbonate: 4Na + 4CS2 → Na2C3S5 + Na2CS3 Chlorination Chlorination of CS2 provides a route to carbon tetrachloride: CS2 + 3 Cl2 → CCl4 + S2Cl2 This conversion proceeds via the intermediacy of thiophosgene, CSCl2. Coordination chemistry CS2 is a ligand for many metal complexes, forming pi complexes. One example is CpCo(η2-CS2)(PMe3). Polymerization CS2 polymerizes upon photolysis or under high pressure to give an insoluble material called car-sul or "Bridgman's black", named after the discoverer of the polymer, Percy Williams Bridgman. Trithiocarbonate (-S-C(S)-S-) linkages comprise, in part, the backbone of the polymer, which is a semiconductor.
Uses The principal industrial uses of carbon disulfide, consuming 75% of the annual production, are the manufacture of viscose rayon and cellophane film. It is also a valued intermediate in the chemical synthesis of carbon tetrachloride. It is widely used in the synthesis of organosulfur compounds such as xanthates, which are used in froth flotation, a method for extracting metals from their ores. Carbon disulfide is also a precursor to dithiocarbamates, which are used as drugs (e.g. metam sodium) and in rubber chemistry. Niche uses It can be used in fumigation of airtight storage warehouses, airtight flat storage, bins, grain elevators, railroad box cars, ship holds, barges, and cereal mills. Carbon disulfide is also used as an insecticide for the fumigation of grains and nursery stock, in fresh fruit conservation, and as a soil disinfectant against insects and nematodes. It can also be used for the Barking dog reaction. Health effects Carbon disulfide has been linked to both acute and chronic forms of poisoning, with a diverse range of symptoms. Concentrations of 500–3000 mg/m3 cause acute and subacute poisoning. These include a set of mostly neurological and psychiatric symptoms, called encephalopathia sulfocarbonica. Symptoms include acute psychosis (manic delirium, hallucinations), paranoic ideas, loss of appetite, gastrointestinal and sexual disorders, polyneuritis, myopathy, and mood changes (including irritability and anger). Effects observed at lower concentrations include neurological problems (encephalopathy, psychomotor and psychological disturbances, polyneuritis, abnormalities in nerve conduction), hearing problems, vision problems (burning eyes, abnormal light reactions, increased ophthalmic pressure), heart problems (increased deaths for heart disease, angina pectoris, high blood pressure), reproductive problems (increased miscarriages, immobile or deformed sperm), and decreased immune response.
Occupational exposure to carbon disulfide is also associated with cardiovascular disease, particularly stroke. In 2000, the WHO believed that health harms were unlikely at levels below 100 μg/m3, and set this as a guideline level. Carbon disulfide can be smelled at levels above 200 μg/m3, and the WHO recommended a sensory guideline of below 20 μg/m3. Exposure to carbon disulfide is well-established to be harmful to health in concentrations at or above 30 mg/m3. Changes in the function of the central nervous system have been observed at concentrations of 20–25 mg/m3. There are also reports of harms to health at 10 mg/m3, for exposures of 10–15 years, but the lack of good data on past exposure levels makes the association of these harms with concentrations of 10 mg/m3 uncertain. The measured concentration of 10 mg/m3 may be equivalent to a concentration in the general environment of 1 mg/m3. Environmental sources The primary source of carbon disulfide in the environment is rayon factories. Most global carbon disulfide emissions come from rayon production, as of 2008. Other sources include the production of cellophane, carbon tetrachloride, carbon black, and sulfur recovery. Carbon disulfide production also emits hydrogen sulfide. About 250 g of carbon disulfide is emitted per kilogram of rayon produced. About 30 g of carbon disulfide is emitted per kilogram of carbon black produced. About 0.341 g of carbon disulfide is emitted per kilogram of sulfur recovered. Japan has reduced carbon disulfide emissions per kilogram of rayon produced, but in other rayon-producing countries, including China, emissions are assumed to be uncontrolled (based on global modelling and large-scale free-air concentration measurements). Rayon production is steady or decreasing except in China, where it is increasing. Carbon black production in Japan and Korea uses incinerators to destroy about 99% of the carbon disulfide that would otherwise be emitted.
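The per-kilogram emission factors above translate into plant-level estimates by simple arithmetic; the sketch below uses a hypothetical production figure for illustration:

```python
# Emission factors from the figures above, in g of CS2 per kg of product.
EMISSION_FACTOR = {
    "rayon": 250.0,
    "carbon_black": 30.0,
    "sulfur_recovery": 0.341,
}

def cs2_emissions_tonnes(product, production_tonnes):
    """CS2 emitted (tonnes) for a given production mass.

    g/kg is numerically identical to kg/tonne, so dividing by 1000
    converts the result from kg to tonnes.
    """
    return EMISSION_FACTOR[product] * production_tonnes / 1000.0

# A hypothetical plant producing 10,000 tonnes of rayon per year:
print(cs2_emissions_tonnes("rayon", 10_000))  # 2500.0 tonnes of CS2
```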
When carbon disulfide is used as a solvent, emissions in Japan are about 40% of the amount used; elsewhere, the average is about 80%. Most rayon production uses carbon disulfide. One exception is rayon made using the lyocell process, which uses a different solvent; the lyocell process is not widely used, because it is more expensive than the viscose process. Cuprammonium rayon also does not use carbon disulfide. Historic and current exposure Industrial workers working with carbon disulfide are at high risk. Emissions may also harm the health of people living near rayon plants. Concerns about carbon disulfide exposure have a long history. Around 1900, carbon disulfide came to be widely used in the production of vulcanized rubber. The psychosis produced by high exposures was immediately apparent (it has been reported within 6 months of exposure). Sir Thomas Oliver told a story about a rubber factory that put bars on its windows so that the workers would not jump out to their deaths. Carbon disulfide's use in the US as a heavier-than-air burrow poison for Richardson's ground squirrel also led to reports of psychosis. No systematic medical study of the issue was published, and knowledge was not transferred to the rayon industry. The first large epidemiological study of rayon workers was done in the US in the late 1930s, and found fairly severe effects in 30% of the workers. Data on increased risks of heart attacks and strokes came out in the 1960s. Courtaulds, a major rayon manufacturer, worked hard to prevent publication of this data in the UK. Average concentrations in sampled rayon plants were reduced from about 250 mg/m3 in 1955–1965 to about 20–30 mg/m3 in the 1980s (US figures only?). Rayon production has since largely moved to the developing world, especially China, Indonesia and India. Rates of disability in modern factories are unknown. Current manufacturers using the viscose process do not provide any information on harm to their workers.
https://en.wikipedia.org/wiki/Diborane
Diborane
Diborane(6), commonly known as diborane, is the chemical compound with the formula B2H6. It is a highly toxic, colorless, and pyrophoric gas with a repulsively sweet odor. Given its simple formula, borane is a fundamental boron compound. It has attracted wide attention for its electronic structure. Several of its derivatives are useful reagents. Structure and bonding The structure of diborane has D2h symmetry. Four hydrides are terminal, while two bridge between the boron centers. The lengths of the B–Hbridge bonds and the B–Hterminal bonds are 1.33 and 1.19 Å respectively. This difference in bond lengths reflects the difference in their strengths, the B–Hbridge bonds being relatively weaker. The weakness of the B–Hbridge compared to B–Hterminal bonds is indicated by their vibrational signatures in the infrared spectrum, being ≈2100 and 2500 cm−1 respectively. The model determined by molecular orbital theory describes the bonds between boron and the terminal hydrogen atoms as conventional 2-center 2-electron covalent bonds. The bonding between the boron atoms and the bridging hydrogen atoms is, however, different from that in molecules such as hydrocarbons. Each boron uses two electrons in bonding to the terminal hydrogen atoms and has one valence electron remaining for additional bonding. The bridging hydrogen atoms provide one electron each. The B2H2 ring is held together by four electrons forming two 3-center 2-electron bonds. This type of bond is sometimes called a "banana bond". B2H6 is isoelectronic with C2H62+, which would arise from the diprotonation of the planar molecule ethylene. Diborane is one of many compounds with such unusual bonding. Of the other elements in group IIIA, gallium is known to form a similar compound digallane, Ga2H6. Aluminium forms a polymeric hydride, (AlH3)n; although unstable, Al2H6 has been isolated in solid hydrogen and is isostructural with diborane. 
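The electron bookkeeping behind the 3-center 2-electron description can be checked in a few lines; the tally below simply restates the counts given above (four terminal 2c-2e B–H bonds plus two bridging 3c-2e bonds):

```python
# Valence electron bookkeeping for B2H6, illustrating the bridged structure.
VALENCE = {"B": 3, "H": 1}

def valence_electrons(formula):
    """Total valence electrons from a {element: count} dict."""
    return sum(VALENCE[el] * n for el, n in formula.items())

total = valence_electrons({"B": 2, "H": 6})   # 12 electrons
terminal_bonds = 4 * 2   # four 2-centre 2-electron B-H bonds
bridge_bonds = 2 * 2     # two 3-centre 2-electron B-H-B bonds
assert total == terminal_bonds + bridge_bonds  # 12 == 8 + 4
print(total, terminal_bonds, bridge_bonds)
```

All 12 valence electrons are accounted for by the six bonding interactions, which is the counting argument behind calling diborane "electron-precise" later in the article.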
Production and synthesis Extensive studies of diborane have led to the development of multiple synthesis routes. Most preparations entail reactions of hydride donors with boron halides or alkoxides. The industrial synthesis of diborane involves the reduction of BF3 by sodium hydride (NaH), lithium hydride (LiH) or lithium aluminium hydride (LiAlH4): 8 BF3 + 6 LiH → B2H6 + 6 LiBF4 Lithium hydride used for this purpose must be very finely powdered to avoid the formation of a passivating lithium tetrafluoroborate layer on the reactant. Alternatively, a small amount of diborane product can be added to form lithium borohydride, which will react with the BF3 to produce more diborane, making the reaction autocatalytic. Two laboratory methods start from boron trichloride with lithium aluminium hydride or from boron trifluoride ether solution with sodium borohydride. Both methods result in as much as 30% yield: 4 BCl3 + 3 LiAlH4 → 2 B2H6 + 3 LiAlCl4 4 BF3 + 3 NaBH4 → 2 B2H6 + 3 NaBF4 When heated with NaBH4, tin(II) chloride is reduced to elemental tin, forming diborane in the process. Older methods entail the direct reaction of borohydride salts with a non-oxidizing acid, such as phosphoric acid or dilute sulfuric acid. 2 BH4− + 2 H+ → 2 H2 + B2H6 Similarly, oxidation of borohydride salts has been demonstrated and remains convenient for small-scale preparations. For example, using iodine as an oxidizer: 2 NaBH4 + I2 → 2 NaI + B2H6 + H2 Another small-scale synthesis uses potassium borohydride and phosphoric acid as starting materials. Reactions Diborane is a highly reactive and versatile reagent. Air, water, oxygen As a pyrophoric substance, diborane reacts exothermically with oxygen to form boron trioxide and water: B2H6 + 3 O2 → B2O3 + 3 H2O (ΔHr = −2035 kJ/mol = −73.47 kJ/g) Diborane reacts violently with water to form hydrogen and boric acid: B2H6 + 6 H2O → 2 B(OH)3 + 6 H2 (ΔHr = −466 kJ/mol = −16.82 kJ/g) Diborane reacts similarly with alcohols.
Methanol, for example, gives hydrogen and trimethyl borate: B2H6 + 6 MeOH → 2 B(OMe)3 + 6 H2 Lewis acidity One dominating reaction pattern involves formation of adducts with Lewis bases. Often such initial adducts proceed rapidly to give other products. For example, borane-tetrahydrofuran, which often behaves equivalently to diborane, degrades to borate esters. Its adduct with dimethyl sulfide is an important reagent in organic synthesis. With ammonia, diborane forms the diammoniate of diborane (DADB), with small quantities of ammonia borane as a byproduct. The ratio depends on the conditions. Hydroboration In the hydroboration reaction, diborane also reacts readily with alkenes to form trialkylboranes. This reaction pattern is rather general, and the resulting alkylboranes can be readily derivatized, e.g. to alcohols. Although early work on hydroboration relied on diborane, it has been replaced by borane dimethylsulfide, which is more safely handled. Other Pyrolysis of diborane gives hydrogen and diverse boron hydride clusters. For example, pentaborane was first prepared by pyrolysis of diborane at about 200 °C. Although this pyrolysis route is rarely employed, it ushered in a large research theme of borane cluster chemistry. Treating diborane with sodium amalgam gives NaBH4 and Na[B3H8]. When diborane is treated with lithium hydride in diethyl ether, lithium borohydride is formed: B2H6 + 2 LiH → 2 LiBH4 Diborane reacts with anhydrous hydrogen chloride or hydrogen bromide gas to give a boron halohydride: B2H6 + HX → B2H5X + H2 (X = Cl, Br) Treating diborane with carbon monoxide at 470 K and 20 bar gives H3BCO. Reagent in organic synthesis Diborane and its variants are central organic synthesis reagents for hydroboration. Alkenes add across the B–H bonds to give trialkylboranes, which can be further elaborated. Diborane is used as a reducing agent roughly complementary to the reactivity of lithium aluminium hydride.
The compound readily reduces carboxylic acids to the corresponding alcohols, whereas ketones react only sluggishly. History Diborane was first synthesised in the 19th century by hydrolysis of metal borides, but it was never analysed. From 1912 to 1936, Alfred Stock, the major pioneer in the chemistry of boron hydrides, undertook his research that led to the methods for the synthesis and handling of the highly reactive, volatile, and often toxic boron hydrides. He proposed the first ethane-like structure of diborane. Electron diffraction measurements by S. H. Bauer initially appeared to support his proposed structure. Because of a personal communication with L. Pauling (who supported the ethane-like structure), H. I. Schlesinger and A. B. Burg did not specifically discuss 3-center 2-electron bonding in their then classic review in the early 1940s. The review does, however, discuss the bridged D2h structure in some depth: "It is to be recognized that this formulation easily accounts for many of the chemical properties of diborane..." In 1943, H. Christopher Longuet-Higgins, while still an undergraduate at Oxford, was the first to explain the structure and bonding of the boron hydrides. The article reporting the work, written with his tutor R. P. Bell, also reviews the history of the subject beginning with the work of Dilthey. Shortly afterwards, the theoretical work of Longuet-Higgins was confirmed in an infrared study of diborane by Price. The structure was re-confirmed by electron-diffraction measurement in 1951 by K. Hedberg and V. Schomaker. William Nunn Lipscomb Jr. further confirmed the molecular structure of boranes using X-ray crystallography in the 1950s and developed theories to explain their bonding. Later, he applied the same methods to related problems, including the structure of carboranes, on which he directed the research of future 1981 Nobel Prize winner Roald Hoffmann.
The 1976 Nobel Prize in Chemistry was awarded to Lipscomb "for his studies on the structure of boranes illuminating problems of chemical bonding". Traditionally, diborane has often been described as electron-deficient, because the 12 valence electrons can only form 6 conventional 2-centre 2-electron bonds, which are insufficient to join all 8 atoms. However, the more correct description using 3-centre bonds shows that diborane is really electron-precise, since there are just enough valence electrons to fill the 6 bonding molecular orbitals. Nevertheless, some leading textbooks still use the term "electron-deficient". Other uses Because of the exothermicity of its reaction with oxygen, diborane has been tested as a rocket propellant. Complete combustion is strongly exothermic. However, combustion is not complete in the rocket engine, as some boron monoxide, B2O, is produced. This conversion mirrors the incomplete combustion of hydrocarbons, to produce carbon monoxide (CO). Diborane also proved difficult to handle. Diborane has been investigated as a precursor to metal boride films and for the p-doping of silicon semiconductors. Safety Diborane is a pyrophoric gas. Commercially available adducts are typically used instead, at least for applications in organic chemistry. These adducts include borane-tetrahydrofuran (borane-THF) and borane-dimethylsulfide. The toxic effects of diborane are mitigated because the compound is so unstable in air. The toxicity toward laboratory rats has been investigated.
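The enthalpy figures quoted in the Reactions section (−2035 kJ/mol ≈ −73.47 kJ/g for combustion, −466 kJ/mol ≈ −16.82 kJ/g for hydrolysis) are related by the molar mass of B2H6; the quick sketch below verifies the conversion (small differences reflect rounding of atomic masses):

```python
# Atomic masses in g/mol.
M_B, M_H = 10.811, 1.008
M_B2H6 = 2 * M_B + 6 * M_H   # about 27.67 g/mol

def kj_per_gram(kj_per_mol, molar_mass):
    """Convert a molar reaction enthalpy (kJ/mol) to a specific one (kJ/g)."""
    return kj_per_mol / molar_mass

# Combustion and hydrolysis enthalpies quoted per mole of B2H6:
combustion = kj_per_gram(-2035, M_B2H6)   # about -73.5 kJ/g
hydrolysis = kj_per_gram(-466, M_B2H6)    # about -16.8 kJ/g
print(f"{combustion:.1f} kJ/g, {hydrolysis:.1f} kJ/g")
```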
https://en.wikipedia.org/wiki/SN2%20reaction
SN2 reaction
The bimolecular nucleophilic substitution (SN2) is a type of reaction mechanism that is common in organic chemistry. In the SN2 reaction, a strong nucleophile forms a new bond to an sp3-hybridised carbon atom via a backside attack, while the leaving group detaches from the reaction center in a concerted (i.e. simultaneous) fashion. The name SN2 refers to the Hughes-Ingold symbol of the mechanism: "SN" indicates that the reaction is a nucleophilic substitution, and "2" that it proceeds via a bimolecular mechanism, which means both the reacting species are involved in the rate-determining step. What distinguishes SN2 from the other major type of nucleophilic substitution, the SN1 reaction, is that in SN1 the rate-determining displacement of the leaving group occurs in a step separate from the nucleophilic attack. The SN2 reaction can be considered as an organic-chemistry analogue of the associative substitution from the field of inorganic chemistry. Reaction mechanism The reaction most often occurs at an aliphatic sp3 carbon center with an electronegative, stable leaving group attached to it, which is frequently a halogen (often denoted X). The formation of the C–Nu bond, due to attack by the nucleophile (denoted Nu), occurs together with the breakage of the C–X bond. The reaction occurs through a transition state in which the reaction center is pentacoordinate and approximately sp2-hybridised. The SN2 reaction can be viewed as a HOMO–LUMO interaction between the nucleophile and substrate. The reaction occurs only when the occupied lone pair orbital of the nucleophile donates electrons to the unfilled σ* antibonding orbital between the central carbon and the leaving group. Throughout the course of the reaction, a p orbital forms at the reaction center as the result of the transition from the molecular orbitals of the reactants to those of the products.
To achieve optimal orbital overlap, the nucleophile attacks 180° relative to the leaving group, resulting in the leaving group being pushed off the opposite side and the product formed with inversion of tetrahedral geometry at the central atom. For example, the synthesis of macrocidin A, a fungal metabolite, involves an intramolecular ring closing step via an SN2 reaction with a phenoxide group as the nucleophile and a halide as the leaving group, forming an ether. Reactions such as this, with an alkoxide as the nucleophile, are known as the Williamson ether synthesis. If the substrate that is undergoing the SN2 reaction has a chiral centre, then inversion of configuration (stereochemistry and optical activity) may occur; this is called the Walden inversion. For example, 1-bromo-1-fluoroethane can undergo nucleophilic attack to form 1-fluoroethan-1-ol, with the nucleophile being an HO− group. In this case, if the reactant is levorotatory, then the product would be dextrorotatory, and vice versa. Factors affecting the rate of the reaction The four factors that affect the rate of the reaction, in order of decreasing importance, are: Substrate The substrate plays the most important part in determining the rate of the reaction. For an SN2 reaction to occur quickly, the nucleophile must easily access the sigma antibonding orbital between the central carbon and the leaving group. SN2 occurs more quickly with substrates that are more sterically accessible at the central carbon, i.e. those that do not have as many sterically hindering substituents nearby. Methyl and primary substrates react the fastest, followed by secondary substrates. Tertiary substrates do not react via the SN2 pathway, as the greater steric hindrance between the nucleophile and nearby groups of the substrate means that the SN1 reaction occurs instead. Substrates with adjacent pi C=C systems can favor both SN1 and SN2 reactions.
In SN1, allylic and benzylic carbocations are stabilized by delocalization of the positive charge. In SN2, the conjugation between the reaction centre and the adjacent pi system stabilizes the transition state. Because they destabilize the positive charge of the carbocation intermediate, electron-withdrawing groups favor the SN2 reaction. Electron-donating groups favor leaving-group displacement and are more likely to react via the SN1 pathway. Nucleophile As with the substrate, steric hindrance affects the nucleophile's strength. The methoxide anion, for example, is both a strong base and a strong nucleophile because its small methyl group leaves it essentially unhindered. tert-Butoxide, on the other hand, is a strong base but a poor nucleophile, because its three methyl groups hinder its approach to the carbon. Nucleophile strength is also affected by charge and electronegativity: nucleophilicity increases with increasing negative charge and decreasing electronegativity. For example, OH− is a better nucleophile than water, and I− is a better nucleophile than Br− (in polar protic solvents). In a polar aprotic solvent, nucleophilicity increases up a column of the periodic table, as there is no hydrogen bonding between the solvent and the nucleophile; in this case nucleophilicity mirrors basicity, and I− would therefore be a weaker nucleophile than Br− because it is a weaker base. In general, a strong, anionic nucleophile favours the SN2 mode of nucleophilic substitution. Leaving group Good leaving groups on the substrate lead to faster SN2 reactions. A good leaving group must be able to stabilize the electron density that comes from breaking its bond with the carbon center. Leaving-group ability corresponds well to the pKa of the leaving group's conjugate acid (pKaH): the lower its pKaH value, the faster the leaving group is displaced. 
Neutral leaving groups, such as water, alcohols, and amines, are good examples because they carry a positive charge when bonded to the carbon center prior to nucleophilic attack. Halides other than fluoride (Cl−, Br−, and I−) serve as good anionic leaving groups because their electronegativity stabilizes the additional electron density; the fluoride exception is due to its strong bond to carbon. The leaving-group ability of alcohols can be increased by conversion to sulfonates, such as tosylate, triflate, and mesylate. Poor leaving groups include hydroxide (OH−), alkoxides (RO−), and amides (R2N−). The Finkelstein reaction is an SN2 reaction in which the leaving group can also act as a nucleophile: the substrate has one halogen atom exchanged for another. As the negative charge is more-or-less stabilized on both halides, the reaction occurs at equilibrium. Solvent The solvent affects the rate of reaction because solvents may or may not surround a nucleophile, thus hindering or not hindering its approach to the carbon atom. Polar aprotic solvents, like tetrahydrofuran, are better solvents for this reaction than polar protic solvents, because polar protic solvents hydrogen bond to the nucleophile, hindering it from attacking the carbon bearing the leaving group. A polar aprotic solvent with a low dielectric constant or a hindered dipole end will favour the SN2 mode of nucleophilic substitution; examples include dimethyl sulfoxide, dimethylformamide, and acetone. In parallel, solvation also has a significant impact on the intrinsic strength of the nucleophile: strong interactions between solvent and nucleophile, as found in polar protic solvents, furnish a weaker nucleophile. Polar aprotic solvents, in contrast, interact only weakly with the nucleophile and so are less able to reduce its strength. 
Reaction kinetics The rate of an SN2 reaction is second order, as the rate-determining step depends on the nucleophile concentration, [Nu−], as well as the concentration of the substrate, [RX]. r = k[RX][Nu−] This is a key difference between the SN1 and SN2 mechanisms. In the SN1 reaction the nucleophile attacks after the rate-limiting step is over, whereas in SN2 the nucleophile forces off the leaving group in the rate-limiting step. In other words, the rate of an SN1 reaction depends only on the concentration of the substrate, while the SN2 reaction rate depends on the concentrations of both the substrate and the nucleophile. It has been shown that, except in uncommon (but predictable) cases, primary and secondary substrates go exclusively by the SN2 mechanism while tertiary substrates go via the SN1 reaction. Two factors complicate the determination of the mechanism of nucleophilic substitution reactions at secondary carbons: Many reactions studied are solvolysis reactions, in which a solvent molecule (often an alcohol) is the nucleophile. While still second order mechanistically, such a reaction is kinetically first order, as the concentration of the nucleophile (the solvent) is effectively constant during the reaction. This type of reaction is often called a pseudo-first-order reaction. In reactions where the leaving group is also a good nucleophile (bromide, for instance), the leaving group can perform an SN2 reaction on a substrate molecule. If the substrate is chiral, this inverts the configuration of the substrate before solvolysis, leading to a racemized product, the product that would be expected from an SN1 mechanism. In the case of a bromide leaving group in alcoholic solvent, Cowdrey et al. have shown that bromide can have an SN2 rate constant 100–250 times higher than the rate constant for ethanol. Thus, after only a few percent solvolysis of an enantiospecific substrate, it becomes racemic. 
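The second-order rate law and its pseudo-first-order solvolysis limit can be illustrated numerically. This is a minimal sketch: the rate constant and concentrations are arbitrary illustrative values, not measured data.

```python
import math

def sn2_rate(k, conc_rx, conc_nu):
    """Second-order SN2 rate law: r = k[RX][Nu-]."""
    return k * conc_rx * conc_nu

# Illustrative values (not measured data):
k = 0.05   # L mol^-1 s^-1
rx = 0.10  # mol L^-1 substrate
nu = 0.20  # mol L^-1 nucleophile

r = sn2_rate(k, rx, nu)  # mol L^-1 s^-1

# First order in each reactant: doubling either concentration doubles the rate.
assert math.isclose(sn2_rate(k, 2 * rx, nu), 2 * r)
assert math.isclose(sn2_rate(k, rx, 2 * nu), 2 * r)

# Pseudo-first-order limit (solvolysis): the solvent is the nucleophile and its
# concentration is effectively constant, so r = k_obs[RX] with k_obs = k[solvent].
solvent = 17.0  # mol L^-1, roughly neat ethanol (illustrative)
k_obs = k * solvent
assert math.isclose(sn2_rate(k, rx, solvent), k_obs * rx)
```

The pseudo-first-order form is why solvolysis reactions appear kinetically first order even though the mechanism is bimolecular.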
Textbook examples of secondary substrates reacting by the SN1 mechanism invariably involve bromide (or another good nucleophile) as the leaving group, and they have confused the understanding of alkyl nucleophilic substitution reactions at secondary carbons for 80 years[3]. Work with the 2-adamantyl system (where SN2 is not possible) by Schleyer and co-workers, the use of azide (an excellent nucleophile but very poor leaving group) by Weiner and Sneen, the development of sulfonate leaving groups (non-nucleophilic good leaving groups), and the demonstration of significant experimental problems in the initial claim of an SN1 mechanism in the solvolysis of optically active 2-bromooctane by Hughes et al.[3] have demonstrated conclusively that secondary substrates go exclusively (except in unusual but predictable cases) by the SN2 mechanism. E2 competition A common side reaction that competes with the SN2 reaction is E2 elimination: the incoming anion can act as a base rather than as a nucleophile, abstracting a proton and leading to formation of the alkene. This pathway is favored with sterically hindered nucleophiles. Elimination reactions are usually favoured at elevated temperatures because of increased entropy. The effect can be demonstrated in the gas-phase reaction between a phenolate and a simple alkyl bromide taking place inside a mass spectrometer: with ethyl bromide, the reaction product is predominantly the substitution product; as steric hindrance around the electrophilic center increases, as with isobutyl bromide, substitution is disfavored and elimination is the predominant reaction. Another factor favoring elimination is the strength of the base: with the less basic benzoate ion, isopropyl bromide reacts with 55% substitution. In general, gas-phase and solution-phase reactions of this type follow the same trends, even though in the gas phase solvent effects are eliminated. 
Roundabout mechanism A development attracting attention in 2008 concerns an SN2 roundabout mechanism observed in a gas-phase reaction between chloride ions and methyl iodide with a special technique called crossed molecular beam imaging. When the chloride ions have sufficient velocity, their initial collision with the methyl iodide molecule causes the methyl iodide to spin around once before the actual SN2 displacement mechanism takes place.
https://en.wikipedia.org/wiki/Aberdeen%20Angus
Aberdeen Angus
The Aberdeen Angus, sometimes simply Angus, is a Scottish breed of small beef cattle. It derives from cattle native to the counties of Aberdeen, Banff, Kincardine and Angus in north-eastern Scotland. In 2018 the breed accounted for over 17% of the beef production in the United Kingdom. The Angus is naturally polled and solid black or red; the udder may be white. The cattle have been exported to many countries of the world; there are large populations in Australia, Canada, New Zealand, South America and the United States, where it has developed into two separate and distinct breeds, the American Angus and Red Angus. In some countries it has been bred to be taller than the native Scottish stock. Its conservation status worldwide is "not at risk"; in the United Kingdom the original Native Aberdeen Angus – cattle not influenced by cross-breeding with imported stock – is listed by the Rare Breeds Survival Trust as "at risk". History Aberdeen Angus cattle have been recorded in north-eastern Scotland since at least the sixteenth century. For some time before the 1800s, the hornless cattle in Angus were called "Angus Doddies", while those in the historic province of Buchan (later part of Aberdeenshire) were known as "Buchan Humlies", both "doddie" and "humlie" meaning “polled.” In 1824, William McCombie of Tillyfour, later the Member of Parliament for West Aberdeenshire, began to improve the stock and is regarded today as the father of the breed. The breed was officially recognised in 1835, and was initially registered together with the Galloway in the Polled Herd Book. A society was formed in 1879. The cattle became commonplace throughout the British Isles in the mid-twentieth century. Argentina As stated in the fourth volume of the Herd Book of the UK's Angus, this breed was introduced to Argentina in 1879 when "Don Carlos Guerrero" imported one bull and two cows for his Estancia "Charles" located in Juancho, Partido de General Madariaga, Provincia de Buenos Aires. 
The bull was born on 19 April 1878, was named "Virtuoso 1626", and was raised by Colonel Ferguson. The cows were named "Aunt Lee 4697", raised by J. James, and "Cinderela 4968", raised by R. Walker; both were born in 1878, on 31 January and 23 April respectively. Australia Angus cattle were first introduced to Van Diemen's Land (now Tasmania) in the 1820s, and to the southern mainland in 1840. The breed is now found in all Australian states and territories, with calves registered with Angus Australia in 2010. Canada In 1876 William Brown, a professor of agriculture and then superintendent of the experimental farm at Guelph, Ontario, was granted permission by the government of Ontario to purchase Aberdeen Angus cattle for the Ontario Agricultural College. The herd comprised a yearling bull, Gladiolus; a cow, Eyebright, bred by the Earl of Fife; and a cow, Leochel Lass 4th, bred by R.O. Farquharson. On 12 January 1877, Eyebright gave birth to a calf, sired by Sir Wilfrid; it was the first Aberdeen Angus calf to be born outside of Scotland. The OAC went on to import additional bulls and cows, and eventually began selling Aberdeen Angus cattle in 1881. United States On 17 May 1873, George Grant brought four Angus bulls, without any cows, to Victoria, Kansas. These were seen as unusual, as American cattle at the time consisted mainly of Shorthorns and Longhorns, and the bulls were used only in crossbreeding. However, farmers noticed the good qualities of these bulls, and afterwards many more cattle of both sexes were imported. On 21 November 1883, the American Angus Association was founded in Chicago, Illinois. The first herd book was published in March 1885. At this time both red and black animals were registered without distinction. However, in 1917 the Association barred the registration of red and other coloured animals in an effort to promote a solid black breed. The Red Angus Association of America was founded in 1954 by breeders of Red Angus cattle. 
It was formed because the breeders had had their cattle struck off the herd book for not conforming to the changed breed standard regarding colour. Germany A separate breed, the German Angus, was created in Germany by crossing the Angus with several other breeds, such as the German Black Pied, Gelbvieh, and Fleckvieh. These cattle are usually larger than the Angus and appear in black and red colours. Characteristics Because of their native environment, the cattle are very hardy and can survive the Scottish winters, which are often harsh, with snowfall and storms. Cows weigh about and bulls some . Bulls may be used on dairy cows to produce a beef calf. The cattle are naturally polled and may be either black or red. They reach maturity earlier than some other native British breeds, such as the Hereford or North Devon. The cattle have a large muscle content and are regarded as medium-sized. In Japan the meat is prized for its marbling. Among the recessive genetic defects that can affect the cattle are: arthrogryposis multiplex ("curly calf"); neuropathic hydrocephalus ("water head"); contractural arachnodactyly or "fawn calf syndrome"; dwarfism; osteoporosis; and notomelia. Use The Aberdeen Angus is reared for beef. The meat can be marketed as superior due to its marbled appearance, which has led many markets, including Australia, Canada, New Zealand, South Africa and the United Kingdom, to adopt it into the mainstream. Angus cattle can also be used in cross-breeding to reduce the likelihood of dystocia (difficult calving) or, because of their dominant polled gene, to produce polled calves.
https://en.wikipedia.org/wiki/Nuclear%20chemistry
Nuclear chemistry
Nuclear chemistry is the sub-field of chemistry dealing with radioactivity, nuclear processes, and transformations in the nuclei of atoms, such as nuclear transmutation and nuclear properties. It is the chemistry of radioactive elements such as the actinides, radium and radon, together with the chemistry associated with equipment (such as nuclear reactors) designed to perform nuclear processes. This includes the corrosion of surfaces and the behavior under conditions of both normal and abnormal operation (such as during an accident). An important area is the behavior of objects and materials after being placed into a nuclear waste storage or disposal site. It includes the study of the chemical effects resulting from the absorption of radiation within living animals, plants, and other materials. Radiation chemistry controls much of radiation biology, as radiation has an effect on living things at the molecular scale. To explain it another way, the radiation alters the biochemicals within an organism; the alteration of the biomolecules then changes the chemistry which occurs within the organism, and this change in chemistry can lead to a biological outcome. As a result, nuclear chemistry greatly assists the understanding of medical treatments (such as cancer radiotherapy) and has enabled these treatments to improve. It includes the study of the production and use of radioactive sources for a range of processes. These include radiotherapy in medical applications; the use of radioactive tracers within industry, science and the environment; and the use of radiation to modify materials such as polymers. It also includes the study and use of nuclear processes in non-radioactive areas of human activity. For instance, nuclear magnetic resonance (NMR) spectroscopy is commonly used in synthetic organic chemistry and physical chemistry, and for structural analysis in macromolecular chemistry. 
History After Wilhelm Röntgen discovered X-rays in 1895, many scientists began to work on ionizing radiation. One of these was Henri Becquerel, who investigated the relationship between phosphorescence and the blackening of photographic plates. When Becquerel (working in France) discovered that, with no external source of energy, uranium generated rays which could blacken (or fog) the photographic plate, radioactivity was discovered. Marie Skłodowska-Curie (working in Paris) and her husband Pierre Curie isolated two new radioactive elements from uranium ore. They used radiometric methods to identify which stream the radioactivity was in after each chemical separation; they separated the uranium ore into each of the different chemical elements that were known at the time, and measured the radioactivity of each fraction. They then attempted to separate these radioactive fractions further, to isolate a smaller fraction with a higher specific activity (radioactivity divided by mass). In this way, they isolated polonium and radium. It was noticed in about 1901 that high doses of radiation could cause injury in humans. Henri Becquerel had carried a sample of radium in his pocket and as a result suffered a highly localized dose, which resulted in a radiation burn. This injury led to the biological properties of radiation being investigated, which in time resulted in the development of medical treatments. Ernest Rutherford, working in Canada and England, showed that radioactive decay can be described by a simple first-order differential equation (now called first-order kinetics), implying that a given radioactive substance has a characteristic "half-life" (the time taken for the amount of radioactivity present in a source to diminish by half). 
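Rutherford's first-order kinetics imply that the remaining fraction of a radioactive sample falls as N(t)/N0 = 2^(−t/T½), or equivalently exp(−λt) with λ = ln 2 / T½. A small numerical sketch (the times and half-lives here are illustrative, not tied to any particular nuclide):

```python
import math

def remaining_fraction(t, half_life):
    """Fraction of a radioactive sample remaining after time t
    (first-order decay: N(t)/N0 = 2**(-t / half_life))."""
    return 2 ** (-t / half_life)

def remaining_fraction_exp(t, half_life):
    """Equivalent exponential form with decay constant lambda = ln(2)/half_life."""
    lam = math.log(2) / half_life
    return math.exp(-lam * t)

# After one half-life, half the activity remains; after two, a quarter.
assert math.isclose(remaining_fraction(1.0, 1.0), 0.5)
assert math.isclose(remaining_fraction(2.0, 1.0), 0.25)

# The two forms agree for any time and half-life.
assert math.isclose(remaining_fraction(5.0, 3.0), remaining_fraction_exp(5.0, 3.0))
```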
He also coined the terms alpha, beta and gamma rays, he converted nitrogen into oxygen, and most importantly he supervised the students who conducted the Geiger–Marsden experiment (gold foil experiment) which showed that the 'plum pudding model' of the atom was wrong. In the plum pudding model, proposed by J. J. Thomson in 1904, the atom is composed of electrons surrounded by a 'cloud' of positive charge to balance the electrons' negative charge. To Rutherford, the gold foil experiment implied that the positive charge was confined to a very small nucleus leading first to the Rutherford model, and eventually to the Bohr model of the atom, where the positive nucleus is surrounded by the negative electrons. In 1934, Marie Curie's daughter (Irène Joliot-Curie) and son-in-law (Frédéric Joliot-Curie) were the first to create artificial radioactivity: they bombarded boron with alpha particles to make the neutron-poor isotope nitrogen-13; this isotope emitted positrons. In addition, they bombarded aluminium and magnesium with neutrons to make new radioisotopes. In the early 1920s Otto Hahn created a new line of research. Using the "emanation method", which he had recently developed, and the "emanation ability", he founded what became known as "applied radiochemistry" for the researching of general chemical and physical-chemical questions. In 1936 Cornell University Press published a book in English (and later in Russian) titled Applied Radiochemistry, which contained the lectures given by Hahn when he was a visiting professor at Cornell University in Ithaca, New York, in 1933. This important publication had a major influence on almost all nuclear chemists and physicists in the United States, the United Kingdom, France, and the Soviet Union during the 1930s and 1940s, laying the foundation for modern nuclear chemistry. Hahn and Lise Meitner discovered radioactive isotopes of radium, thorium, protactinium and uranium. 
He also discovered the phenomena of radioactive recoil and nuclear isomerism, and pioneered rubidium–strontium dating. In 1938, Hahn, Lise Meitner and Fritz Strassmann discovered nuclear fission, for which Hahn received the 1944 Nobel Prize for Chemistry. Nuclear fission was the basis for nuclear reactors and nuclear weapons. Hahn is referred to as the father of nuclear chemistry and the godfather of nuclear fission. Main areas Radiochemistry is the chemistry of radioactive materials, in which radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as inactive, as its isotopes are stable). For further details please see the page on radiochemistry. Radiation chemistry Radiation chemistry is the study of the chemical effects of radiation on matter; this is very different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide. Prior to radiation chemistry, it was commonly believed that pure water could not be destroyed. Initial experiments were focused on understanding the effects of radiation on matter. Using an X-ray generator, Hugo Fricke studied the biological effects of radiation as it became a common treatment option and diagnostic method. Fricke proposed, and subsequently proved, that the energy from X-rays was able to convert water into activated water, allowing it to react with dissolved species. 
Chemistry for nuclear power Radiochemistry, radiation chemistry and nuclear chemical engineering play a very important role for uranium and thorium fuel precursors synthesis, starting from ores of these elements, fuel fabrication, coolant chemistry, fuel reprocessing, radioactive waste treatment and storage, monitoring of radioactive elements release during reactor operation and radioactive geological storage, etc. Study of nuclear reactions A combination of radiochemistry and radiation chemistry is used to study nuclear reactions such as fission and fusion. Some early evidence for nuclear fission was the formation of a short-lived radioisotope of barium which was isolated from neutron irradiated uranium (139Ba, with a half-life of 83 minutes and 140Ba, with a half-life of 12.8 days, are major fission products of uranium). At the time, it was thought that this was a new radium isotope, as it was then standard radiochemical practice to use a barium sulfate carrier precipitate to assist in the isolation of radium. More recently, a combination of radiochemical methods and nuclear physics has been used to try to make new 'superheavy' elements; it is thought that islands of relative stability exist where the nuclides have half-lives of years, thus enabling weighable amounts of the new elements to be isolated. For more details of the original discovery of nuclear fission see the work of Otto Hahn. The nuclear fuel cycle This is the chemistry associated with any part of the nuclear fuel cycle, including nuclear reprocessing. The fuel cycle includes all the operations involved in producing fuel, from mining, ore processing and enrichment to fuel production (Front-end of the cycle). It also includes the 'in-pile' behavior (use of the fuel in a reactor) before the back end of the cycle. The back end includes the management of the used nuclear fuel in either a spent fuel pool or dry storage, before it is disposed of into an underground waste store or reprocessed. 
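The two barium fission products mentioned in the study of nuclear reactions above decay at very different rates, which is what makes them radiometrically distinguishable. Using the half-lives given in the text (83 minutes for 139Ba, 12.8 days for 140Ba), a quick first-order decay calculation shows the contrast after one day:

```python
# Fractions of 139Ba and 140Ba remaining after 24 hours, from the half-lives
# quoted in the text (83 min and 12.8 days); first-order decay assumed.

def remaining(t_hours, half_life_hours):
    # N(t)/N0 = 2**(-t / T_half)
    return 2 ** (-t_hours / half_life_hours)

t = 24.0                         # one day, in hours
ba139 = remaining(t, 83 / 60)    # 83-minute half-life
ba140 = remaining(t, 12.8 * 24)  # 12.8-day half-life

# After a day essentially all the 139Ba has decayed, while most 140Ba remains.
assert ba139 < 1e-5
assert ba140 > 0.9
```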
Normal and abnormal conditions The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas: one is concerned with operation under the intended conditions, while the other is concerned with maloperation conditions, in which some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring. Reprocessing Law In the United States, it is normal to use fuel once in a power reactor before placing it in a waste store. The long-term plan is currently to place the used civilian reactor fuel in a deep store. This non-reprocessing policy was started in March 1977 because of concerns about nuclear weapons proliferation. President Jimmy Carter issued a Presidential directive which indefinitely suspended the commercial reprocessing and recycling of plutonium in the United States. This directive was likely an attempt by the United States to lead other countries by example, but many other nations continue to reprocess spent nuclear fuels. The Russian government under President Vladimir Putin repealed a law which had banned the import of used nuclear fuel, which makes it possible for Russians to offer a reprocessing service for clients outside Russia (similar to that offered by BNFL). PUREX chemistry The current method of choice is to use the PUREX liquid–liquid extraction process, which uses a tributyl phosphate/hydrocarbon mixture to extract both uranium and plutonium from nitric acid. This extraction is of the nitrate salts and is classed as being of a solvation mechanism. For example, the extraction of plutonium by an extraction agent (S) in a nitrate medium occurs by the following reaction. 
Pu4+aq + 4NO3−aq + 2Sorganic → [Pu(NO3)4S2]organic A complex bond is formed between the metal cation, the nitrates and the tributyl phosphate; a model compound of a dioxouranium(VI) complex with two nitrate anions and two triethyl phosphate ligands has been characterised by X-ray crystallography. When the nitric acid concentration is high, extraction into the organic phase is favored; when the nitric acid concentration is low, the extraction is reversed (the organic phase is stripped of the metal). It is normal to dissolve the used fuel in nitric acid; after the removal of the insoluble matter, the uranium and plutonium are extracted from the highly active liquor. It is normal to then back-extract the loaded organic phase to create a medium active liquor which contains mostly uranium and plutonium with only small traces of fission products. This medium active aqueous mixture is then extracted again by tributyl phosphate/hydrocarbon to form a new organic phase, which is then stripped of the metals to form an aqueous mixture of only uranium and plutonium. The two stages of extraction are used to improve the purity of the actinide product; the organic phase used for the first extraction will suffer a far greater dose of radiation. The radiation can degrade the tributyl phosphate into dibutyl hydrogen phosphate, which can act as an extraction agent for both the actinides and other metals such as ruthenium. Dibutyl hydrogen phosphate can make the system behave in a more complex manner, as it tends to extract metals by an ion-exchange mechanism (extraction favoured by low acid concentration); to reduce its effect, it is common for the used organic phase to be washed with sodium carbonate solution to remove the acidic degradation products of the tributyl phosphate. 
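The stoichiometry of the extraction reaction above implies, by mass action, that the distribution of plutonium into the organic phase rises steeply with nitrate concentration (fourth power) and extractant concentration (second power), consistent with the forward- and back-extraction behaviour described. This is a hedged, idealized sketch: the equilibrium constant and concentrations are arbitrary illustrative values, not measured PUREX data.

```python
import math

def distribution_ratio(K, nitrate, extractant):
    """Idealized mass-action distribution ratio for
    Pu4+ + 4 NO3- + 2 S(org) -> [Pu(NO3)4S2](org):
    D = [Pu]org / [Pu]aq = K * [NO3-]**4 * [S]**2."""
    return K * nitrate ** 4 * extractant ** 2

K = 1.0  # illustrative equilibrium constant, arbitrary units

d_high = distribution_ratio(K, nitrate=3.0, extractant=1.1)  # concentrated acid
d_low = distribution_ratio(K, nitrate=0.5, extractant=1.1)   # dilute acid (stripping)

# High nitric acid drives Pu into the organic phase; dilute acid reverses it.
assert d_high > 1.0 > d_low
# Fourth-power dependence on nitrate concentration:
assert math.isclose(d_high / d_low, (3.0 / 0.5) ** 4)
```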
New methods being considered for future use The PUREX process can be modified to make a UREX (URanium EXtraction) process, which could be used to save space inside high-level nuclear waste disposal sites, such as the Yucca Mountain nuclear waste repository, by removing the uranium, which makes up the vast majority of the mass and volume of used fuel, and recycling it as reprocessed uranium. The UREX process is a PUREX process which has been modified to prevent the plutonium from being extracted. This can be done by adding a plutonium reductant before the first metal extraction step. In the UREX process, ~99.9% of the uranium and >95% of the technetium are separated from each other and from the other fission products and actinides. The key is the addition of acetohydroxamic acid (AHA) to the extraction and scrub sections of the process. The addition of AHA greatly diminishes the extractability of plutonium and neptunium, providing greater proliferation resistance than the plutonium extraction stage of the PUREX process. By adding a second extraction agent, octyl(phenyl)-N,N-dibutyl carbamoylmethyl phosphine oxide (CMPO), in combination with tributyl phosphate (TBP), the PUREX process can be turned into the TRUEX (TRansUranic EXtraction) process, which was invented in the US by Argonne National Laboratory and is designed to remove the transuranic metals (Am/Cm) from waste. The idea is that by lowering the alpha activity of the waste, the majority of the waste can then be disposed of with greater ease. In common with PUREX, this process operates by a solvation mechanism. As an alternative to TRUEX, an extraction process using a malondiamide has been devised. The DIAMEX (DIAMide EXtraction) process has the advantage of avoiding the formation of organic waste which contains elements other than carbon, hydrogen, nitrogen, and oxygen. Such an organic waste can be burned without the formation of acidic gases which could contribute to acid rain. 
The DIAMEX process is being worked on in Europe by the French CEA. The process is sufficiently mature that an industrial plant could be constructed with the existing knowledge of the process. In common with PUREX, this process operates by a solvation mechanism. Selective Actinide Extraction (SANEX). As part of the management of minor actinides, it has been proposed that the lanthanides and trivalent minor actinides should be removed from the PUREX raffinate by a process such as DIAMEX or TRUEX. In order to allow the actinides such as americium to be either reused in industrial sources or used as fuel, the lanthanides must be removed. The lanthanides have large neutron cross sections and hence they would poison a neutron-driven nuclear reaction. To date, the extraction system for the SANEX process has not been defined, but currently several different research groups are working towards a process. For instance, the French CEA is working on a bis-triazinyl pyridine (BTP) based process. Other systems, such as the dithiophosphinic acids, are being worked on by other groups. The UNEX (UNiversal EXtraction) process, developed in Russia and the Czech Republic, is designed to remove all of the most troublesome radioisotopes (Sr, Cs and the minor actinides) from the raffinates left after the extraction of uranium and plutonium from used nuclear fuel. The chemistry is based upon the interaction of caesium and strontium with polyethylene oxide (polyethylene glycol) and a cobalt carborane anion (known as chlorinated cobalt dicarbollide). The actinides are extracted by CMPO, and the diluent is a polar aromatic such as nitrobenzene. Other diluents, such as meta-nitrobenzotrifluoride and phenyl trifluoromethyl sulfone, have been suggested as well. 
Absorption of fission products on surfaces Another important area of nuclear chemistry is the study of how fission products interact with surfaces; this is thought to control the rate of release and migration of fission products, both from waste containers under normal conditions and from power reactors under accident conditions. Like chromate and molybdate, the 99TcO4− anion can react with steel surfaces to form a corrosion-resistant layer. In this way, these metal-oxo anions act as anodic corrosion inhibitors. The formation of 99TcO2 on steel surfaces is one effect which will retard the release of 99Tc from nuclear waste drums and from nuclear equipment which has been lost before decontamination (e.g. submarine reactors lost at sea). This 99TcO2 layer renders the steel surface passive, inhibiting the anodic corrosion reaction. The radioactive nature of technetium makes this corrosion protection impractical in almost all situations. It has also been shown that 99TcO4− anions react to form a layer on the surface of activated carbon (charcoal) or aluminium. A short review of the biochemical properties of a series of key long-lived radioisotopes can be read online. 99Tc in nuclear waste may exist in chemical forms other than the 99TcO4− anion; these other forms have different chemical properties. Similarly, the release of iodine-131 in a serious power reactor accident could be retarded by absorption on metal surfaces within the nuclear plant. Education Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. 
Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training. Nuclear and Radiochemistry (NRC) is mostly taught at university level, usually first at the master's and PhD degree level. In Europe, a substantial effort is being made to harmonize and prepare NRC education for the industry's and society's future needs. This effort is being coordinated in a project funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Program. Although NucWik is primarily aimed at teachers, anyone interested in nuclear and radiochemistry is welcome and can find a lot of information and material explaining topics related to NRC. Spinout areas. Some methods first developed within nuclear chemistry and physics have become so widely used within chemistry and other physical sciences that they may be best thought of as separate from normal nuclear chemistry. For example, the isotope effect is used so extensively to investigate chemical mechanisms, and cosmogenic and long-lived unstable isotopes are used so extensively in geology, that it is best to consider much of isotopic chemistry as separate from nuclear chemistry. Kinetics (use within mechanistic chemistry). The mechanisms of chemical reactions can be investigated by observing how the kinetics of a reaction changes when an isotopic modification is made to a substrate, known as the kinetic isotope effect. This is now a standard method in organic chemistry. Briefly, replacing normal hydrogen (protium) by deuterium within a molecule causes the molecular vibrational frequency of X-H (for example C-H, N-H and O-H) bonds to decrease, which leads to a decrease in vibrational zero-point energy.
This can lead to a decrease in the reaction rate if the rate-determining step involves breaking a bond between hydrogen and another atom. Thus, if the reaction changes in rate when protium is replaced by deuterium, it is reasonable to assume that the breaking of the bond to hydrogen is part of the step which determines the rate. Uses within geology, biology and forensic science. Cosmogenic isotopes are formed by the interaction of cosmic rays with the nucleus of an atom. These can be used for dating purposes and as natural tracers. In addition, by careful measurement of some ratios of stable isotopes it is possible to obtain new insights into the origin of bullets, the ages of ice samples, the ages of rocks, and the diet of a person, the last of which can be determined from a hair or other tissue sample. (See Isotope geochemistry and Isotopic signature for further details.) Biology. Within living things, isotopic labels (both radioactive and nonradioactive) can be used to probe how the complex web of reactions which makes up the metabolism of an organism converts one substance to another. For instance, a green plant uses light energy to convert water and carbon dioxide into glucose by photosynthesis. If the oxygen in the water is labeled, then the label appears in the oxygen gas formed by the plant and not in the glucose formed in the chloroplasts within the plant cells. For biochemical and physiological experiments and medical methods, a number of specific isotopes have important applications. Stable isotopes have the advantage of not delivering a radiation dose to the system being studied; however, a significant excess of them in the organ or organism might still interfere with its functionality, and the availability of sufficient amounts for whole-animal studies is limited for many isotopes.
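A back-of-the-envelope estimate of the maximum primary kinetic isotope effect follows from the zero-point-energy argument above. This is a sketch, not a rigorous calculation: it assumes a typical C-H stretching wavenumber of roughly 2900 cm−1, the harmonic approximation (frequency scales as the inverse square root of the reduced mass, here simplified to a factor of √2), and complete loss of the stretch in the transition state.

```python
import math

# Assumed typical C-H stretch wavenumber (~2900 cm^-1); illustrative only.
nu_H = 2900.0                       # cm^-1
nu_D = nu_H / math.sqrt(2)          # harmonic approx.: nu ~ 1/sqrt(reduced mass)

h_c = 1.98645e-23                   # J per cm^-1 (Planck constant times c)
kB = 1.380649e-23                   # Boltzmann constant, J/K
T = 298.0

# Zero-point-energy difference, assuming the bond to hydrogen is fully
# broken in the transition state (maximum primary isotope effect).
delta_zpe = 0.5 * h_c * (nu_H - nu_D)
kie = math.exp(delta_zpe / (kB * T))
print(f"estimated maximum kH/kD at {T:.0f} K: {kie:.1f}")
```

The result is in the vicinity of the classic textbook value of about 7 for a primary deuterium isotope effect at room temperature.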
Measurement is also difficult, and usually requires mass spectrometry to determine how much of the isotope is present in particular compounds, and there is no means of localizing measurements within the cell. 2H (deuterium), the stable isotope of hydrogen, is a tracer, the concentration of which can be measured by mass spectrometry or NMR. It is incorporated into all cellular structures. Specific deuterated compounds can also be produced. 15N, a stable isotope of nitrogen, has also been used. It is incorporated mainly into proteins. Radioactive isotopes have the advantages of being detectable in very low quantities, of being easily measured by scintillation counting or other radiochemical methods, and of being localizable to particular regions of a cell and quantifiable by autoradiography. Many compounds with the radioactive atoms in specific positions can be prepared, and are widely available commercially. In high quantities they require precautions to guard the workers from the effects of radiation, and they can easily contaminate laboratory glassware and other equipment. For some isotopes the half-life is so short that preparation and measurement are difficult. By organic synthesis it is possible to create a complex molecule with a radioactive label that can be confined to a small area of the molecule. For short-lived isotopes such as 11C, very rapid synthetic methods have been developed to permit the rapid addition of the radioactive isotope to the molecule. For instance, a palladium-catalysed carbonylation reaction in a microfluidic device has been used to rapidly form amides, and it might be possible to use this method to form radioactive imaging agents for PET imaging. 3H (tritium), the radioisotope of hydrogen, is available at very high specific activities, and compounds with this isotope in particular positions are easily prepared by standard chemical reactions such as hydrogenation of unsaturated precursors.
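As a rough numerical sketch of what "very high specific activity" means in practice, the activity of a small amount of a labeled compound follows from A = λN, with λ = ln 2 / t½. The tritium half-life used below (~12.32 years) is the commonly quoted value; treat the exact figure as an assumption, and the nanomole amount is purely illustrative.

```python
import math

AVOGADRO = 6.02214076e23
T_HALF_TRITIUM_S = 12.32 * 365.25 * 24 * 3600   # ~12.32 years (assumed value)

def activity_bq(n_mol: float, t_half_s: float) -> float:
    """Activity A = lambda * N for n_mol moles of a radionuclide (Bq)."""
    decay_const = math.log(2) / t_half_s
    return decay_const * n_mol * AVOGADRO

# 1 nanomole of a compound carrying one tritium atom per molecule:
a = activity_bq(1e-9, T_HALF_TRITIUM_S)
print(f"{a:.3g} Bq (~{a / 3.7e10 * 1e6:.0f} microcurie)")
```

Even a nanomole of singly tritiated compound yields on the order of a megabecquerel, which is why such tracers are detectable at chemically negligible concentrations.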
The isotope emits very soft beta radiation, and can be detected by scintillation counting. 11C, carbon-11, is usually produced by cyclotron bombardment of 14N with protons. The resulting nuclear reaction is 14N(p,α)11C. Carbon-11 can also be made in a cyclotron by reacting boron, in the form of boric oxide, with protons in a (p,n) reaction. Another alternative route is to react 10B with deuterons. By rapid organic synthesis, the 11C compound formed in the cyclotron is converted into the imaging agent, which is then used for PET. 14C, carbon-14, can be made (as above), and it is possible to convert the target material into simple inorganic and organic compounds. In most organic synthesis work it is normal to try to create a product out of two approximately equal-sized fragments and to use a convergent route, but when a radioactive label is added, it is normal to try to add the label late in the synthesis, in the form of a very small fragment of the molecule, to enable the radioactivity to be localised in a single group. Late addition of the label also reduces the number of synthetic stages where radioactive material is used. 18F, fluorine-18, can be made by the reaction of neon with deuterons: 20Ne reacts in a (d,4He) reaction. It is normal to use neon gas with a trace of stable fluorine (19F2). The 19F2 acts as a carrier, which increases the yield of radioactivity from the cyclotron target by reducing the amount of radioactivity lost by absorption on surfaces. However, this reduction in loss comes at the cost of the specific activity of the final product. Nuclear spectroscopy. Nuclear spectroscopy comprises methods that use the nucleus to obtain information about the local structure of matter. Important methods are NMR (see below), Mössbauer spectroscopy and perturbed angular correlation. These methods use the interaction of the hyperfine field with the nucleus's spin.
The fields can be magnetic and/or electric and are created by the electrons of the atom and its surrounding neighbours. Thus, these methods investigate the local structure of matter, mainly condensed matter, as in condensed matter physics and solid state chemistry. Nuclear magnetic resonance (NMR). NMR spectroscopy identifies molecules by the absorption of energy by the net spin of nuclei in a substance. It has now become a standard spectroscopic tool within synthetic chemistry. One major use of NMR is to determine the bond connectivity within an organic molecule. NMR imaging also uses the net spin of nuclei (commonly protons) for imaging. This is widely used for diagnostic purposes in medicine, and can provide detailed images of the inside of a person without exposing them to any ionizing radiation. In a medical setting, NMR is often known simply as "magnetic resonance" imaging, as the word 'nuclear' has negative connotations for many people.
Physical sciences
Chemistry: General
242006
https://en.wikipedia.org/wiki/Reaction%20rate
Reaction rate
The reaction rate or rate of reaction is the speed at which a chemical reaction takes place, defined as proportional to the increase in the concentration of a product per unit time and to the decrease in the concentration of a reactant per unit time. Reaction rates can vary dramatically. For example, the oxidative rusting of iron under Earth's atmosphere is a slow reaction that can take many years, but the combustion of cellulose in a fire is a reaction that takes place in fractions of a second. For most reactions, the rate decreases as the reaction proceeds. A reaction's rate can be determined by measuring the changes in concentration over time. Chemical kinetics is the part of physical chemistry that concerns how rates of chemical reactions are measured and predicted, and how reaction-rate data can be used to deduce probable reaction mechanisms. The concepts of chemical kinetics are applied in many disciplines, such as chemical engineering, enzymology and environmental engineering. Formal definition. Consider a typical balanced chemical reaction: aA + bB → pP + qQ. The lowercase letters (a, b, p, and q) represent stoichiometric coefficients, while the capital letters represent the reactants (A and B) and the products (P and Q). According to IUPAC's Gold Book definition, the reaction rate v for a chemical reaction occurring in a closed system at constant volume, without a build-up of reaction intermediates, is defined as: v = −(1/a) d[A]/dt = −(1/b) d[B]/dt = (1/p) d[P]/dt = (1/q) d[Q]/dt, where [X] denotes the concentration of the substance X (X = A, B, P or Q). The reaction rate thus defined has the units of mol/L/s. The rate of a reaction is always positive. A negative sign is present to indicate that the reactant concentration is decreasing. The IUPAC recommends that the unit of time should always be the second.
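The species independence of this definition can be illustrated numerically: dividing each measured concentration change by the corresponding signed stoichiometric number gives the same rate whichever species is followed. The concentration changes below are invented for illustration.

```python
# Finite-difference estimate of the IUPAC rate for a A + b B -> p P + q Q,
# showing that the stoichiometry-scaled rate is species-independent.
# Concentration changes are illustrative, not from a real experiment.

a, b, p, q = 1, 3, 2, 1
dt = 10.0                                                     # s
d_conc = {"A": -0.010, "B": -0.030, "P": +0.020, "Q": +0.010}  # mol/L over dt
nu = {"A": -a, "B": -b, "P": p, "Q": q}    # signed stoichiometric numbers

rates = {s: d_conc[s] / nu[s] / dt for s in d_conc}
print(rates)   # the same value, 1e-3 mol/(L s), for every species
```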
The rate of reaction differs from the rate of increase of concentration of a product P by a constant factor (the reciprocal of its stoichiometric number) and for a reactant A by minus the reciprocal of the stoichiometric number. The stoichiometric numbers are included so that the defined rate is independent of which reactant or product species is chosen for measurement. For example, if a = 1 and b = 3, then B is consumed three times more rapidly than A, but v = −d[A]/dt = −(1/3) d[B]/dt is uniquely defined. An additional advantage of this definition is that for an elementary and irreversible reaction, v is equal to the product of the probability of overcoming the transition state activation energy and the number of times per second the transition state is approached by reactant molecules. When so defined, for an elementary and irreversible reaction, v is the rate of successful chemical reaction events leading to the product. The above definition is only valid for a single reaction, in a closed system of constant volume. If water is added to a pot containing salty water, the concentration of salt decreases, although there is no chemical reaction. For an open system, the full mass balance must be taken into account: F_in − F_out + ∫ v dV = dN/dt, where F_in is the inflow rate of the species in molecules per second, F_out the outflow, and v the instantaneous reaction rate of the species (in number concentration rather than molar concentration), integrated over the entire system volume at a given moment. When applied to the closed system at constant volume considered previously, this equation reduces to v = d[A]/dt (for a product A), where the concentration [A] is related to the number of molecules N_A by [A] = N_A/(N_0 V). Here N_0 is the Avogadro constant. For a single reaction in a closed system of varying volume the so-called rate of conversion can be used, in order to avoid handling concentrations. It is defined as the derivative of the extent of reaction ξ with respect to time: dξ/dt = (1/ν_i) dn_i/dt. Here ν_i is the stoichiometric coefficient for substance i, equal to −a, −b, p, and q in the typical reaction above.
Also, V is the volume of reaction and C_i is the concentration of substance i, so that dξ/dt = (1/ν_i)(V dC_i/dt + C_i dV/dt). When side products or reaction intermediates are formed, the IUPAC recommends the use of the terms rate of increase of concentration and rate of decrease of concentration for products and reactants, respectively. Reaction rates may also be defined on a basis that is not the volume of the reactor. When a catalyst is used the reaction rate may be stated on a catalyst weight (mol g−1 s−1) or surface area (mol m−2 s−1) basis. If the basis is a specific catalyst site that may be rigorously counted by a specified method, the rate is given in units of s−1 and is called a turnover frequency. Influencing factors. Factors that influence the reaction rate are the nature of the reaction, concentration, pressure, reaction order, temperature, solvent, electromagnetic radiation, catalyst, isotopes, surface area, stirring, and the diffusion limit. Some reactions are naturally faster than others. The number of reacting species, their physical state (the particles that form solids move much more slowly than those of gases or those in solution), the complexity of the reaction and other factors can greatly influence the rate of a reaction. Reaction rate increases with concentration, as described by the rate law and explained by collision theory. As reactant concentration increases, the frequency of collisions increases. The rate of gaseous reactions increases with pressure, which is, in fact, equivalent to an increase in the concentration of the gas. The reaction rate increases in the direction where there are fewer moles of gas and decreases in the reverse direction. For condensed-phase reactions, the pressure dependence is weak. The order of the reaction controls how the reactant concentration (or pressure) affects the reaction rate.
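How the reaction order shapes the concentration dependence can be sketched for a simple one-reactant rate law of the form v = k[A]^n (the rate constant and concentrations below are hypothetical):

```python
# Effect of doubling the reactant concentration for a rate law v = k [A]^n:
# zero order is unaffected, first order doubles, second order quadruples.
k = 0.5            # hypothetical rate constant
ratios = {}
for n in (0, 1, 2):
    v1 = k * 1.0**n    # rate at [A] = 1.0
    v2 = k * 2.0**n    # rate at [A] = 2.0
    ratios[n] = v2 / v1
    print(f"order {n}: doubling [A] multiplies the rate by {ratios[n]:.0f}")
```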
Usually, conducting a reaction at a higher temperature delivers more energy into the system and increases the reaction rate by causing more collisions between particles, as explained by collision theory. However, the main reason that temperature increases the rate of reaction is that more of the colliding particles will have the necessary activation energy, resulting in more successful collisions (when bonds are formed between reactants). The influence of temperature is described by the Arrhenius equation. For example, coal burns in a fireplace in the presence of oxygen, but it does not when it is stored at room temperature. The reaction is spontaneous at low and high temperatures, but at room temperature its rate is so slow that it is negligible. The increase in temperature, as created by a match, allows the reaction to start, and it then heats itself, because it is exothermic. That is valid for many other fuels, such as methane, butane, and hydrogen. Reaction rates can be independent of temperature (non-Arrhenius) or decrease with increasing temperature (anti-Arrhenius). Reactions without an activation barrier (for example, some radical reactions) tend to have anti-Arrhenius temperature dependence: the rate constant decreases with increasing temperature. Many reactions take place in solution, and the properties of the solvent affect the reaction rate. The ionic strength also has an effect on the reaction rate. Electromagnetic radiation is a form of energy. As such, it may speed up the rate or even make a reaction spontaneous, as it provides the particles of the reactants with more energy. This energy is in one way or another stored in the reacting particles (it may break bonds, or promote molecules to electronically or vibrationally excited states), creating intermediate species that react easily. As the intensity of light increases, the particles absorb more energy and hence the rate of reaction increases.
For example, when methane reacts with chlorine in the dark, the reaction rate is slow. It can be sped up when the mixture is put under diffused light. In bright sunlight, the reaction is explosive. The presence of a catalyst increases the reaction rate (in both the forward and reverse reactions) by providing an alternative pathway with a lower activation energy. For example, platinum catalyzes the combustion of hydrogen with oxygen at room temperature. The kinetic isotope effect consists of a different reaction rate for the same molecule if it has different isotopes, usually hydrogen isotopes, because of the relative mass difference between hydrogen and deuterium. In reactions on surfaces, which take place, for example, during heterogeneous catalysis, the rate of reaction increases as the surface area does. That is because more particles of the solid are exposed and can be hit by reactant molecules. Stirring can have a strong effect on the rate of reaction for heterogeneous reactions. Some reactions are limited by diffusion. All the factors that affect a reaction rate, except for concentration and reaction order, are taken into account in the reaction rate coefficient k (the coefficient in the rate equation of the reaction). Rate equation. For a chemical reaction aA + bB → pP + qQ, the rate equation or rate law is a mathematical expression used in chemical kinetics to link the rate of a reaction to the concentration of each reactant. For a closed system at constant volume, this is often of the form v = k_f [A]^n [B]^m − k_r [P]^n′ [Q]^m′. For reactions that go to completion (which implies a very small k_r), or if only the initial rate is analyzed (with initial vanishing product concentrations), this simplifies to the commonly quoted form v = k [A]^n [B]^m. For gas-phase reactions the rate equation is often alternatively expressed in terms of partial pressures.
In these equations, k is the reaction rate coefficient or rate constant, although it is not really a constant, because it includes all the parameters that affect the reaction rate, except for time and concentration. Of all the parameters influencing reaction rates, temperature is normally the most important one and is accounted for by the Arrhenius equation. The exponents n and m are called reaction orders and depend on the reaction mechanism. For an elementary (single-step) reaction, the order with respect to each reactant is equal to its stoichiometric coefficient. For complex (multistep) reactions, however, this is often not true and the rate equation is determined by the detailed mechanism, as illustrated below for the reaction of H2 and NO. For elementary reactions or reaction steps, the order and stoichiometric coefficient are both equal to the molecularity, or number of molecules participating. For a unimolecular reaction or step, the rate is proportional to the concentration of molecules of reactant, so the rate law is first order. For a bimolecular reaction or step, the number of collisions is proportional to the product of the two reactant concentrations, so the rate law is second order. A termolecular step is predicted to be third order, but also very slow, as simultaneous collisions of three molecules are rare. By using the mass balance for the system in which the reaction occurs, an expression for the rate of change in concentration can be derived. For a closed system with constant volume, such an expression can look like d[P]/dt = p k [A]^n [B]^m. Example of a complex reaction: hydrogen and nitric oxide. For the reaction 2NO + 2H2 → N2 + 2H2O, the observed rate equation (or rate expression) is v = k [H2] [NO]^2. As for many reactions, the experimental rate equation does not simply reflect the stoichiometric coefficients in the overall reaction: it is third order overall, first order in H2 and second order in NO, even though the stoichiometric coefficients of both reactants are equal to 2.
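Reaction orders such as these are obtained from experiment, commonly by the method of initial rates: vary one reactant concentration at a time and compare initial rates. A Python sketch with synthetic data (rates generated from a hypothetical rate law, not measured):

```python
import math

# Estimating reaction orders n, m in v = k [A]^n [B]^m from synthetic
# initial-rate "experiments" (values generated, not measured).
def rate(k, n, m, A, B):
    return k * A**n * B**m

k_true, n_true, m_true = 2.0, 1, 2
v1 = rate(k_true, n_true, m_true, A=0.10, B=0.10)
v2 = rate(k_true, n_true, m_true, A=0.20, B=0.10)   # double [A] only
v3 = rate(k_true, n_true, m_true, A=0.10, B=0.20)   # double [B] only

# If doubling a concentration multiplies the rate by 2^order:
n_est = math.log(v2 / v1) / math.log(2)
m_est = math.log(v3 / v1) / math.log(2)
print(n_est, m_est)   # recovers the orders 1 and 2
```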
In chemical kinetics, the overall reaction rate is often explained using a mechanism consisting of a number of elementary steps. Not all of these steps affect the rate of reaction; normally the slowest elementary step controls the reaction rate. For this example, a possible mechanism is: (1) 2NO ⇌ N2O2 (fast equilibrium); (2) N2O2 + H2 → N2O + H2O (slow); (3) N2O + H2 → N2 + H2O (fast). Reactions 1 and 3 are very rapid compared to the second, so the slow reaction 2 is the rate-determining step. This is a bimolecular elementary reaction whose rate is given by the second-order equation v = k2 [N2O2][H2], where k2 is the rate constant for the second step. However, N2O2 is an unstable intermediate whose concentration is determined by the fact that the first step is in equilibrium, so that [N2O2] = K1 [NO]^2, where K1 is the equilibrium constant of the first step. Substitution of this equation in the previous equation leads to a rate equation expressed in terms of the original reactants: v = k2 K1 [NO]^2 [H2]. This agrees with the form of the observed rate equation if it is assumed that k = k2 K1. In practice the rate equation is used to suggest possible mechanisms which predict a rate equation in agreement with experiment. The second molecule of H2 does not appear in the rate equation because it reacts in the third step, which is a rapid step after the rate-determining step, so that it does not affect the overall reaction rate. Temperature dependence. Each reaction rate coefficient k has a temperature dependency, which is usually given by the Arrhenius equation: k = A e^(−Ea/RT), where A is the pre-exponential factor or frequency factor, e is the base of the exponential function, Ea is the activation energy, and R is the gas constant. Since at temperature T the molecules have energies given by a Boltzmann distribution, one can expect the number of collisions with energy greater than Ea to be proportional to e^(−Ea/RT). The values for A and Ea are dependent on the reaction. There are also more complex equations possible, which describe the temperature dependence of other rate constants that do not follow this pattern. Temperature is a measure of the average kinetic energy of the reactants.
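The pre-equilibrium argument for the NO/H2 mechanism can be checked by integrating the three elementary steps numerically and comparing the simulated rate of N2 formation with the derived law v = k2 K1 [NO]^2 [H2]. The rate constants below are arbitrary values chosen only to satisfy the fast-equilibrium and slow-step assumptions; they are not measured ones.

```python
# Explicit Euler integration of: 2NO <-> N2O2 (fast), N2O2 + H2 -> N2O + H2O
# (slow), N2O + H2 -> N2 + H2O (fast). Units are arbitrary.
k1f, k1r = 10.0, 1.0e4          # K1 = k1f / k1r = 1e-3 (fast equilibrium)
k2 = 1.0                        # slow, rate-determining step
k3 = 1.0e3                      # fast consumption of the N2O intermediate

no, n2o2, h2, n2o, n2 = 1.0, 0.0, 1.0, 0.0, 0.0
dt, t = 1e-6, 0.0
n2_at_half = None
while t < 0.010:
    r1f, r1r = k1f * no**2, k1r * n2o2
    r2 = k2 * n2o2 * h2
    r3 = k3 * n2o * h2
    no   += dt * (-2 * r1f + 2 * r1r)
    n2o2 += dt * (r1f - r1r - r2)
    h2   += dt * (-r2 - r3)
    n2o  += dt * (r2 - r3)
    n2   += dt * r3
    t += dt
    if n2_at_half is None and t >= 0.005:
        n2_at_half = n2   # record after the initial transient has died out

v_sim = (n2 - n2_at_half) / 0.005        # simulated d[N2]/dt
v_pred = k2 * (k1f / k1r) * no**2 * h2   # v = k2 * K1 * [NO]^2 * [H2]
print(v_sim, v_pred)                     # both close to 1e-3
```

The two values agree to within a few percent, as expected when steps 1 and 3 are much faster than step 2.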
As temperature increases, the kinetic energy of the reactants increases. That is, the particles move faster. With the reactants moving faster, more collisions take place at a greater speed, so the chance of reactants forming products increases, which in turn results in the rate of reaction increasing. A rise of ten degrees Celsius results in approximately twice the reaction rate. The minimum kinetic energy required for a reaction to occur is called the activation energy and is denoted by Ea. The transition state or activated complex shown on the diagram is the energy barrier that must be overcome when changing reactants into products. The molecules with an energy greater than this barrier have enough energy to react. For a successful collision to take place, the collision geometry must be right, meaning the reactant molecules must face the right way so the activated complex can be formed. A chemical reaction takes place only when the reacting particles collide. However, not all collisions are effective in causing the reaction. Products are formed only when the colliding particles possess a certain minimum energy called the threshold energy. As a rule of thumb, reaction rates for many reactions double for every ten degrees Celsius increase in temperature. For a given reaction, the ratio of its rate constant at a higher temperature to its rate constant at a lower temperature is known as its temperature coefficient, Q. Q10 is commonly used as the ratio of rate constants that are ten degrees Celsius apart. Pressure dependence. The pressure dependence of the rate constant for condensed-phase reactions (that is, when reactants and products are solids or liquids) is usually sufficiently weak in the range of pressures normally encountered in industry that it is neglected in practice. The pressure dependence of the rate constant is associated with the activation volume.
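Returning to the temperature dependence: the Arrhenius equation and the Q10 rule of thumb discussed above can be illustrated numerically. The pre-exponential factor and activation energy below are hypothetical, order-of-magnitude values, not data for any particular reaction.

```python
import math

R = 8.314462618   # gas constant, J/(mol K)

def arrhenius_k(A, Ea, T):
    """k = A * exp(-Ea / (R T)); A and Ea here are hypothetical values."""
    return A * math.exp(-Ea / (R * T))

A, Ea = 1e13, 50_000.0        # Ea = 50 kJ/mol, a typical order of magnitude
k298 = arrhenius_k(A, Ea, 298.0)
k308 = arrhenius_k(A, Ea, 308.0)
q10 = k308 / k298             # temperature coefficient over a 10 K interval
print(f"k(308 K)/k(298 K) = {q10:.2f}")
```

For an activation energy around 50 kJ/mol near room temperature, the ratio comes out close to 2, which is where the "doubling per ten degrees" rule of thumb comes from; reactions with much larger or smaller Ea deviate from it.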
For a reaction proceeding through an activation-state complex, the activation volume, ΔV‡, is: ΔV‡ = V‡ − ΣV(reactants), where V denotes the partial molar volume of a species and ‡ (a double dagger) indicates the activation-state complex. For the above reaction, one can expect the change of the reaction rate constant (based either on mole fraction or on molar concentration) with pressure at constant temperature to be: (∂ ln k/∂P)_T = −ΔV‡/(RT). In practice, the matter can be complicated because the partial molar volumes and the activation volume can themselves be a function of pressure. Reactions can increase or decrease their rates with pressure, depending on the sign of ΔV‡. As an example of the possible magnitude of the pressure effect, some organic reactions were shown to double the reaction rate when the pressure was increased from atmospheric (0.1 MPa) to 50 MPa (which gives ΔV‡ = −0.025 L/mol).
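As a sketch of how the activation volume enters, integrating the pressure dependence at constant temperature gives k(P2)/k(P1) = exp(−ΔV‡ ΔP/(RT)), assuming ΔV‡ is itself independent of pressure. The values below are hypothetical and chosen only to show the magnitude of the effect.

```python
import math

R = 8.314462618   # gas constant, J/(mol K)

def pressure_rate_ratio(delta_v_act_m3: float, dP: float, T: float) -> float:
    """k(P2)/k(P1) = exp(-DeltaV_act * dP / (R T)), assuming DeltaV_act is
    pressure independent (hypothetical illustrative values below)."""
    return math.exp(-delta_v_act_m3 * dP / (R * T))

# A negative activation volume of -10 cm^3/mol, pressure raised by 100 MPa:
ratio = pressure_rate_ratio(-10e-6, 100e6, 298.0)
print(f"rate constant increases by a factor of {ratio:.2f}")
```

A negative ΔV‡ (the activated complex is more compact than the reactants) means pressure accelerates the reaction; a positive ΔV‡ slows it down.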
Physical sciences
Kinetics
Chemistry
242135
https://en.wikipedia.org/wiki/Projective%20space
Projective space
In mathematics, the concept of a projective space originated from the visual effect of perspective, where parallel lines seem to meet at infinity. A projective space may thus be viewed as the extension of a Euclidean space, or, more generally, of an affine space, with points at infinity, in such a way that there is one point at infinity for each direction of parallel lines. This definition of a projective space has the disadvantage of not being isotropic, having two different sorts of points, which must be considered separately in proofs. Therefore, other definitions are generally preferred. There are two classes of definitions. In synthetic geometry, point and line are primitive entities that are related by the incidence relation "a point is on a line" or "a line passes through a point", which is subject to the axioms of projective geometry. For some such sets of axioms, the projective spaces that are defined have been shown to be equivalent to those resulting from the following definition, which is more often encountered in modern textbooks. Using linear algebra, a projective space of dimension n is defined as the set of the vector lines (that is, vector subspaces of dimension one) in a vector space V of dimension n + 1. Equivalently, it is the quotient set of V ∖ {0} by the equivalence relation "being on the same vector line". As a vector line intersects the unit sphere of V in two antipodal points, projective spaces can be equivalently defined as spheres in which antipodal points are identified. A projective space of dimension 1 is a projective line, and a projective space of dimension 2 is a projective plane. Projective spaces are widely used in geometry, allowing for simpler statements and simpler proofs. For example, in affine geometry, two distinct lines in a plane intersect in at most one point, while, in projective geometry, they intersect in exactly one point.
Also, there is only one class of conic sections, which can be distinguished only by their intersections with the line at infinity: two intersection points for hyperbolas; one for the parabola, which is tangent to the line at infinity; and no real intersection point for ellipses. In topology, and more specifically in manifold theory, projective spaces play a fundamental role, being typical examples of non-orientable manifolds. Motivation. As outlined above, projective spaces were introduced for formalizing statements like "two coplanar lines intersect in exactly one point, and this point is at infinity if the lines are parallel". Such statements are suggested by the study of perspective, which may be considered as a central projection of the three-dimensional space onto a plane (see Pinhole camera model). More precisely, the entrance pupil of a camera or of the eye of an observer is the center of projection, and the image is formed on the projection plane. Mathematically, the center of projection is a point O of the space (the intersection of the axes in the figure); the projection plane (P, in blue in the figure) is a plane not passing through O, which is often chosen to be the plane of equation z = 1, when Cartesian coordinates are considered. Then, the central projection maps a point M to the intersection of the line OM with the projection plane. Such an intersection exists if and only if the point M does not belong to the plane (Q, in green in the figure) that passes through O and is parallel to P. It follows that the lines passing through O split into two disjoint subsets: the lines that are not contained in Q, which are in one-to-one correspondence with the points of P, and those contained in Q, which are in one-to-one correspondence with the directions of parallel lines in P. This suggests defining the points (called here projective points for clarity) of the projective plane as the lines passing through O.
A projective line in this plane consists of all projective points (which are lines) contained in a plane passing through O. As the intersection of two planes passing through O is a line passing through O, the intersection of two distinct projective lines consists of a single projective point. The plane Q defines a projective line which is called the line at infinity of P. By identifying each point of P with the corresponding projective point, one can thus say that the projective plane is the disjoint union of P and the (projective) line at infinity. As an affine space with a distinguished point may be identified with its associated vector space (see Affine space), the preceding construction is generally done by starting from a vector space and is called projectivization. Also, the construction can be done by starting with a vector space of any positive dimension. So, a projective space of dimension n can be defined as the set of vector lines (vector subspaces of dimension one) in a vector space V of dimension n + 1. A projective space can also be defined as the elements of any set that is in natural correspondence with this set of vector lines. This set can be the set of equivalence classes under the equivalence relation between vectors defined by "one vector is the product of the other by a nonzero scalar". In other words, this amounts to defining a projective space as the set of vector lines from which the zero vector has been removed. A third equivalent definition is to define a projective space of dimension n as the set of pairs of antipodal points in a sphere of dimension n (in a space of dimension n + 1). Definition. Given a vector space V over a field K, the projective space P(V) is the set of equivalence classes of V ∖ {0} under the equivalence relation ~ defined by x ~ y if there is a nonzero element λ of K such that x = λy. If V is a topological vector space, the quotient space P(V) is a topological space, endowed with the quotient topology of the subspace topology of V ∖ {0}.
This is the case when K is the field of the real numbers or the field of the complex numbers. If V is finite dimensional, the dimension of P(V) is the dimension of V minus one. In the common case where V = K^(n+1), the projective space is denoted P_n(K) (as well as KP^n or P^n(K), although this notation may be confused with exponentiation). The space P_n(K) is often called the projective space of dimension n over K, or the projective n-space, since all projective spaces of dimension n are isomorphic to it (because every vector space of dimension n + 1 is isomorphic to K^(n+1)). The elements of a projective space are commonly called points. If a basis of V has been chosen, and, in particular, if V = K^(n+1), the projective coordinates of a point P are the coordinates on the basis of any element of the corresponding equivalence class. These coordinates are commonly denoted [x0 : x1 : ... : xn], the colons and the brackets being used for distinguishing from usual coordinates, and emphasizing that this is an equivalence class, which is defined up to the multiplication by a nonzero constant. That is, if [x0 : ... : xn] are projective coordinates of a point, then [λx0 : ... : λxn] are also projective coordinates of the same point, for any nonzero λ in K. Also, the above definition implies that [x0 : ... : xn] are projective coordinates of a point if and only if at least one of the coordinates is nonzero. If K is the field of real or complex numbers, a projective space is called a real projective space or a complex projective space, respectively. If n is one or two, a projective space of dimension n is called a projective line or a projective plane, respectively. The complex projective line is also called the Riemann sphere. All these definitions extend naturally to the case where K is a division ring; see, for example, Quaternionic projective space. The notation PG(n, K) is sometimes used for P_n(K). If K is a finite field with q elements, P_n(K) is often denoted PG(n, q) (see PG(3,2)).
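The equivalence-class definition above can be made concrete with a small computational sketch. Points of the projective plane over the rationals are represented by homogeneous triples under an arbitrary normalization convention, and the classical cross-product trick for lines in P^2 shows that two parallel affine lines meet in a point on the line at infinity. The helper names are, of course, just choices for this sketch.

```python
from fractions import Fraction

def normalize(p):
    """Canonical representative of a point of P^2 over Q: scale so that the
    last nonzero homogeneous coordinate equals 1."""
    p = [Fraction(x) for x in p]
    pivot = next(x for x in reversed(p) if x != 0)  # at least one is nonzero
    return tuple(x / pivot for x in p)

def cross(u, v):
    """In P^2, the line through two points (and dually the intersection
    point of two lines) is given by the cross product of their triples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# [2 : 4 : 6] and [1 : 2 : 3] are the same point of the projective plane:
assert normalize([2, 4, 6]) == normalize([1, 2, 3])

# Parallel affine lines y = x and y = x + 1, written in homogeneous line
# coordinates a x + b y + c z = 0 as (1, -1, 0) and (1, -1, 1), intersect
# in exactly one point, which lies on the line at infinity (z = 0):
meet = cross((1, -1, 0), (1, -1, 1))
print(normalize(meet))   # last coordinate 0: a point at infinity
```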
Related concepts

Subspace

Let P(V) be a projective space, where V is a vector space over a field K, and let p : V ∖ {0} → P(V) be the canonical map that maps a nonzero vector v to its equivalence class, which is the vector line containing v with the zero vector removed.

Every linear subspace W of V is a union of lines. It follows that p(W ∖ {0}) is a projective space, which can be identified with P(W). A projective subspace is thus a projective space that is obtained by restricting to a linear subspace the equivalence relation that defines P(V).

If p(v) and p(w) are two different points of P(V), the vectors v and w are linearly independent. It follows that: There is exactly one projective line that passes through two different points of P(V), and A subset of P(V) is a projective subspace if and only if, given any two different points, it contains the whole projective line passing through these points. In synthetic geometry, where projective lines are primitive objects, the first property is an axiom, and the second one is the definition of a projective subspace.

Span

Every intersection of projective subspaces is a projective subspace. It follows that for every subset S of a projective space, there is a smallest projective subspace containing S, the intersection of all projective subspaces containing S. This projective subspace is called the projective span of S, and S is a spanning set for it.

A set S of points is projectively independent if its span is not the span of any proper subset of S. If S is a spanning set of a projective space P, then there is a subset of S that spans P and is projectively independent (this results from the similar theorem for vector spaces). If the dimension of P is n, such an independent spanning set has n + 1 elements.

Contrarily to the cases of vector spaces and affine spaces, an independent spanning set does not suffice for defining coordinates. One needs one more point, see next section.

Frame

A projective frame or projective basis is an ordered set of points in a projective space that allows defining coordinates.
More precisely, in an n-dimensional projective space, a projective frame is a tuple of n + 2 points such that any n + 1 of them are independent; that is, they are not contained in a hyperplane. If V is an (n + 1)-dimensional vector space, and p is the canonical projection from V to P(V), then (p(e0), ..., p(en+1)) is a projective frame if and only if (e0, ..., en) is a basis of V and the coefficients of en+1 on this basis are all nonzero. By rescaling the first n + 1 vectors, any frame can be rewritten as (p(e′0), ..., p(e′n+1)) such that e′n+1 = e′0 + ... + e′n; this representation is unique up to the multiplication of all e′i with a common nonzero factor.

The projective coordinates or homogeneous coordinates of a point p(v) on a frame (p(e0), ..., p(en+1)) with en+1 = e0 + ... + en are the coordinates of v on the basis (e0, ..., en). They are only defined up to scaling with a common nonzero factor.

The canonical frame of the projective space Pn(K) consists of the images by p of the elements of the canonical basis of K^(n+1) (that is, the tuples with only one nonzero entry, equal to 1), and the image by p of their sum.

Projective geometry

Projective transformation

Topology

A projective space is a topological space, as endowed with the quotient topology of the topology of a finite dimensional real vector space.

Let S be the unit sphere in a normed vector space V, and consider the function π : S → P(V) that maps a point of S to the vector line passing through it. This function is continuous and surjective. The inverse image of every point of P(V) consists of two antipodal points. As spheres are compact spaces, it follows that P(V) is compact, and that:

For every point P of P(V), the restriction of π to a neighborhood of P is a homeomorphism onto its image, provided that the neighborhood is small enough for not containing any pair of antipodal points. This shows that a projective space is a manifold. A simple atlas can be provided, as follows.

As soon as a basis has been chosen for V, any vector can be identified with its coordinates on the basis, and any point of P(V) may be identified with its homogeneous coordinates. For i = 0, ..., n, the set Ui = {[x0 : ... : xn] : xi ≠ 0} is an open subset of Pn(R), and Pn(R) is the union of the Ui, since every point of Pn(R) has at least one nonzero coordinate.
To each Ui is associated a chart, which is the homeomorphism φi : Ui → R^n such that φi([x0 : ... : xn]) = (x0/xi, ..., x̂i/xi, ..., xn/xi), where the hat means that the corresponding term is missing. These charts form an atlas, and, as the transition maps are analytic functions, it results that projective spaces are analytic manifolds.

For example, in the case of n = 1, that is of a projective line, there are only two Ui, which can each be identified with a copy of the real line. In both lines, the intersection of the two charts is the set of nonzero real numbers, and the transition map is x ↦ 1/x in both directions. The image represents the projective line as a circle where antipodal points are identified, and shows the two homeomorphisms of a real line to the projective line; as antipodal points are identified, the image of each line is represented as an open half circle, which can be identified with the projective line with a single point removed.

CW complex structure

Real projective spaces have a simple CW complex structure, as Pn(R) can be obtained from Pn−1(R) by attaching an n-cell with the quotient projection from the (n − 1)-sphere to Pn−1(R) as the attaching map.

Algebraic geometry

Originally, algebraic geometry was the study of common zeros of sets of multivariate polynomials. These common zeros, called algebraic varieties, belong to an affine space. It soon appeared that, in the case of real coefficients, one must consider all the complex zeros for having accurate results. For example, the fundamental theorem of algebra asserts that a univariate square-free polynomial of degree n has exactly n complex roots. In the multivariate case, the consideration of complex zeros is also needed, but not sufficient: one must also consider zeros at infinity. For example, Bézout's theorem asserts that the intersection of two plane algebraic curves of respective degrees d and e consists of exactly de points if one considers complex points in the projective plane, and if one counts the points with their multiplicity.
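For the real projective line, the two charts and the transition map described above can be written out directly (a minimal sketch with hypothetical function names):

```python
def phi0(x0, x1):
    # chart on U0 = {[x0 : x1] : x0 != 0}: sends [x0 : x1] to x1/x0
    assert x0 != 0
    return x1 / x0

def phi1(x0, x1):
    # chart on U1 = {[x0 : x1] : x1 != 0}: sends [x0 : x1] to x0/x1
    assert x1 != 0
    return x0 / x1

# On the overlap U0 ∩ U1 (both coordinates nonzero) the transition map
# between the two charts is x -> 1/x, in both directions:
for x0, x1 in [(1.0, 2.0), (3.0, -5.0)]:
    assert abs(phi1(x0, x1) - 1 / phi0(x0, x1)) < 1e-12
```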
Another example is the genus–degree formula that allows computing the genus of a plane algebraic curve from its singularities in the complex projective plane.

So a projective variety is the set of points in a projective space whose homogeneous coordinates are common zeros of a set of homogeneous polynomials. Any affine variety can be completed, in a unique way, into a projective variety by adding its points at infinity, which consists of homogenizing the defining polynomials, and removing the components that are contained in the hyperplane at infinity, by saturating with respect to the homogenizing variable.

An important property of projective spaces and projective varieties is that the image of a projective variety under a morphism of algebraic varieties is closed for the Zariski topology (that is, it is an algebraic set). This is a generalization to every ground field of the compactness of the real and complex projective spaces. A projective space is itself a projective variety, being the set of zeros of the zero polynomial.

Scheme theory

Scheme theory, introduced by Alexander Grothendieck during the second half of the 20th century, allows defining a generalization of algebraic varieties, called schemes, by gluing together smaller pieces called affine schemes, similarly as manifolds can be built by gluing together open sets of R^n. The Proj construction is the construction of the scheme of a projective space, and, more generally, of any projective variety, by gluing together affine schemes. In the case of projective spaces, one can take for these affine schemes the affine schemes associated to the charts (affine spaces) of the above description of a projective space as a manifold.

Synthetic geometry

In synthetic geometry, a projective space can be defined axiomatically as a set P (the set of points), together with a set L of subsets of P (the set of lines), satisfying these axioms: Each two distinct points p and q are in exactly one line.
Veblen's axiom: If a, b, c, d are distinct points and the lines through ab and cd meet, then so do the lines through ac and bd. Any line has at least 3 points on it.

The last axiom eliminates reducible cases that can be written as a disjoint union of projective spaces together with 2-point lines joining any two points in distinct projective spaces. More abstractly, it can be defined as an incidence structure consisting of a set P of points, a set L of lines, and an incidence relation that states which points lie on which lines.

The structures defined by these axioms are more general than those obtained from the vector space construction given above. If the (projective) dimension is at least three then, by the Veblen–Young theorem, there is no difference. However, for dimension two, there are examples that satisfy these axioms that cannot be constructed from vector spaces (or even modules over division rings). These examples do not satisfy the theorem of Desargues and are known as non-Desarguesian planes. In dimension one, any set with at least three elements satisfies the axioms, so it is usual to assume additional structure for projective lines defined axiomatically.

It is possible to avoid the troublesome cases in low dimensions by adding or modifying axioms that define a projective space. gives such an extension due to Bachmann. To ensure that the dimension is at least two, replace the three-point-per-line axiom above by: There exist four points, no three of which are collinear. To avoid the non-Desarguesian planes, include Pappus's theorem as an axiom: If the six vertices of a hexagon lie alternately on two lines, the three points of intersection of pairs of opposite sides are collinear. And, to ensure that the vector space is defined over a field that does not have even characteristic, include Fano's axiom: The three diagonal points of a complete quadrangle are never collinear.
A subspace of the projective space is a subset X such that any line containing two points of X is a subset of X (that is, completely contained in X). The full space and the empty space are always subspaces.

The geometric dimension of the space is said to be n if that is the largest number for which there is a strictly ascending chain of subspaces of this form: ∅ = X−1 ⊂ X0 ⊂ X1 ⊂ ... ⊂ Xn = P. A subspace Xi in such a chain is said to have (geometric) dimension i. Subspaces of dimension 0 are called points, those of dimension 1 are called lines and so on. If the full space has dimension n then any subspace of dimension n − 1 is called a hyperplane.

Projective spaces admit an equivalent formulation in terms of lattice theory. There is a bijective correspondence between projective spaces and geomodular lattices, namely, subdirectly irreducible, compactly generated, complemented, modular lattices.

Classification

Dimension 0 (no lines): The space is a single point.
Dimension 1 (exactly one line): All points lie on the unique line.
Dimension 2: There are at least 2 lines, and any two lines meet. A projective space for n = 2 is equivalent to a projective plane. These are much harder to classify, as not all of them are isomorphic with a PG(2, K). The Desarguesian planes (those that are isomorphic with a PG(2, K)) satisfy Desargues's theorem and are projective planes over division rings, but there are many non-Desarguesian planes.
Dimension at least 3: Two non-intersecting lines exist. Veblen and Young proved the Veblen–Young theorem, to the effect that every projective space of dimension n ≥ 3 is isomorphic with a PG(n, K), the n-dimensional projective space over some division ring K.

Finite projective spaces and planes

A finite projective space is a projective space where P is a finite set of points. In any finite projective space, each line contains the same number of points and the order of the space is defined as one less than this common number.
For finite projective spaces of dimension at least three, Wedderburn's theorem implies that the division ring over which the projective space is defined must be a finite field, GF(q), whose order (that is, number of elements) is q (a prime power). A finite projective space defined over such a finite field has q + 1 points on a line, so the two concepts of order coincide. Notationally, PG(n, GF(q)) is usually written as PG(n, q).

All finite fields of the same order are isomorphic, so, up to isomorphism, there is only one finite projective space for each dimension greater than or equal to three, over a given finite field. However, in dimension two there are non-Desarguesian planes. Up to isomorphism there are 1, 1, 1, 1, 0, 1, 1, 4, 0 finite projective planes of orders 2, 3, 4, ..., 10, respectively. The numbers beyond this are very difficult to calculate and are not determined except for some zero values due to the Bruck–Ryser theorem.

The smallest projective plane is the Fano plane, PG(2, 2), with 7 points and 7 lines. The smallest 3-dimensional projective space is PG(3, 2), with 15 points, 35 lines and 15 planes.

Morphisms

Injective linear maps T ∈ L(V, W) between two vector spaces V and W over the same field K induce mappings of the corresponding projective spaces P(V) → P(W) via [v] ↦ [T(v)], where v is a non-zero element of V and [...] denotes the equivalence classes of a vector under the defining identification of the respective projective spaces. Since members of the equivalence class differ by a scalar factor, and linear maps preserve scalar factors, this induced map is well-defined. (If T is not injective, it has a null space larger than {0}; in this case the meaning of the class of T(v) is problematic if v is non-zero and in the null space. In this case one obtains a so-called rational map, see also Birational geometry.)

Two linear maps S and T in L(V, W) induce the same map between P(V) and P(W) if and only if they differ by a scalar multiple, that is if T = λS for some λ ≠ 0. Thus if one identifies the scalar multiples of the identity map with the underlying field K, the set of K-linear morphisms from P(V) to P(W) is simply P(L(V, W)).
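The point, line, and plane counts quoted above follow from Gaussian binomial coefficients: a k-dimensional projective subspace of PG(n, q) corresponds to a (k + 1)-dimensional linear subspace of the (n + 1)-dimensional vector space over GF(q). A small sketch (hypothetical function names):

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional linear subspaces of an n-dimensional
    vector space over the finite field GF(q)."""
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(k - i) - 1
    return num // den

def count_subspaces(n, k, q):
    """Number of k-dimensional projective subspaces of PG(n, q)."""
    return gaussian_binomial(n + 1, k + 1, q)

assert count_subspaces(2, 0, 2) == 7    # Fano plane PG(2, 2): 7 points
assert count_subspaces(2, 1, 2) == 7    # ... and 7 lines
assert count_subspaces(3, 0, 2) == 15   # PG(3, 2): 15 points
assert count_subspaces(3, 1, 2) == 35   # ... 35 lines
assert count_subspaces(3, 2, 2) == 15   # ... 15 planes
assert count_subspaces(1, 0, 3) == 4    # a line over GF(3) has q + 1 = 4 points
```

In particular, the number of points of PG(n, q) is (q^(n+1) − 1)/(q − 1), the k = 0 case of the formula.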
The automorphisms P(V) → P(V) can be described more concretely. (We deal only with automorphisms preserving the base field K.) Using the notion of sheaves generated by global sections, it can be shown that any algebraic (not necessarily linear) automorphism must be linear, i.e., coming from a (linear) automorphism of the vector space V. The latter form the group GL(V). By identifying maps that differ by a scalar, one concludes that the automorphism group of P(V) is the quotient group of GL(V) modulo the matrices that are scalar multiples of the identity. (These matrices form the center of GL(V).) The groups PGL(V) are called projective linear groups. The automorphisms of the complex projective line are called Möbius transformations.

Dual projective space

When the construction above is applied to the dual space V∗ rather than V, one obtains the dual projective space, which can be canonically identified with the space of hyperplanes through the origin of V. That is, if V is n-dimensional, then P(V∗) is the Grassmannian of (n − 1)-planes in V.

In algebraic geometry, this construction allows for greater flexibility in the construction of projective bundles. One would like to be able to associate a projective space to every quasi-coherent sheaf over a scheme, not just the locally free ones. See EGA II, Chap. II, par. 4 for more details.

Generalizations

dimension: The projective space, being the "space" of all one-dimensional linear subspaces of a given vector space V, is generalized to the Grassmannian manifold, which parametrizes higher-dimensional subspaces (of some fixed dimension) of V.
sequence of subspaces: More generally, the flag manifold is the space of flags, i.e., chains of linear subspaces of V.
other subvarieties: Even more generally, moduli spaces parametrize objects such as elliptic curves of a given kind.
other rings: Generalizing to associative rings (rather than only fields) yields, for example, the projective line over a ring.
patching: Patching projective spaces together yields projective space bundles.
Severi–Brauer varieties are algebraic varieties over a field K which become isomorphic to projective spaces after an extension of the base field K. Another generalization of projective spaces are weighted projective spaces; these are themselves special cases of toric varieties.
Mathematics
Non-Euclidean geometry
null
242247
https://en.wikipedia.org/wiki/Taxaceae
Taxaceae
Taxaceae, commonly called the yew family, is a coniferous family which includes six extant and two extinct genera, and about 30 species of plants, or in older interpretations three genera and 7 to 12 species.

Description

They are many-branched, small trees and shrubs. The leaves are evergreen, spirally arranged, often twisted at the base to appear 2-ranked. They are linear to lanceolate, and have pale green or white stomatal bands on the undersides. The plants are dioecious, or rarely monoecious. The catkin-like male cones are long, and shed pollen in the early spring. They are sometimes externally only slightly differentiated from the branches. The fertile bracts have 2-8 pollen sacs. The female 'cones' are highly reduced. Only the upper or uppermost bracts are fertile and bear one or rarely two seeds. The ovule usually exceeds the scale, although ovules are occasionally enclosed by it. They may be found on the ends of branches or on the branches. They may grow singly or in tufts or clumps. As the seed matures, a fleshy aril partly encloses it. The developmental origin of the aril is unclear, but it may represent a fused pair of swollen leaves. The mature aril is brightly coloured, soft, juicy and sweet, and is eaten by birds which then disperse the hard seed undamaged in their droppings. However, the seeds are highly poisonous to humans, containing the poisons taxine and taxol.

Distribution

Species are mostly found in the tropics and temperate zones of the Northern Hemisphere. There are only a few species in the Southern Hemisphere.

Classification

Taxaceae is now generally included with all other conifers in the order Pinales, as DNA analysis has shown that the yews are phylogenetically nested in the Pinales, a conclusion supported by micromorphology studies. Formerly they were often treated as distinct from other conifers by placing them in a separate order Taxales. Ernest Henry Wilson referred to Taxaceae as "taxads" in his 1916 book.
Taxaceae is thought to be the sister group to Cupressaceae, from which it diverged during the early-mid Triassic. The clade comprising both is sister to Sciadopityaceae, which diverged from them during the early-mid Permian. The oldest confirmed member of Taxaceae is Palaeotaxus rediviva from the earliest Jurassic (Hettangian) of Sweden. Fossils belonging to the living genus Amentotaxus from the Middle Jurassic of China indicate that Taxaceae had already substantially diversified during the Jurassic. The broadly defined Taxaceae (including Cephalotaxus) comprises six extant genera and about 30 species overall. Cephalotaxus is now included in Taxaceae, rather than being recognized as the core of its own family, Cephalotaxaceae. Phylogenetic evidence strongly supports a very close relationship between Cephalotaxus and other members of Taxaceae, and morphological differences between them are not substantial. Previous recognition of two distinct families, Taxaceae and Cephalotaxaceae (e.g.,), was based on relatively minor morphological details: Taxaceae (excluding Cephalotaxus) has smaller mature seeds growing to in 6–8 months, that are not fully enclosed by the aril; in contrast, Cephalotaxus seeds have a longer maturation period (from 18–20 months), and larger mature seeds () fully enclosed by the aril. However, there are also very clear morphological connections between Cephalotaxus and other members of Taxaceae, and considered in tandem with the phylogenetic evidence, there is no compelling need to recognize Cephalotaxus (or other genera in Taxaceae) as a distinct family.

Phylogeny

Phylogeny of Taxaceae.
Amentotaxus – Catkin-yew
Amentotaxus argotaenia - Catkin yew
Amentotaxus assamica - Assam catkin yew
Amentotaxus formosana - Taiwan catkin yew
Amentotaxus poilanei - Poilane's catkin yew
Amentotaxus yunnanensis - Yunnan catkin yew
Austrotaxus – New Caledonia yew
Austrotaxus spicata - New Caledonia yew or southern yew
Cephalotaxus – Plum yew
Cephalotaxus fortunei - Chinese plum-yew
Cephalotaxus griffithii - Griffith's plum yew
Cephalotaxus hainanensis - Hainan plum-yew
Cephalotaxus harringtonii - Korean plum yew, Japanese plum-yew
Cephalotaxus koreana - Korean plum yew
Cephalotaxus lanceolata - Gongshan plum yew
Cephalotaxus latifolia - Broad-leaved plum yew
Cephalotaxus mannii - Mann's plum yew
Cephalotaxus oliveri - Oliver's plum yew
Cephalotaxus sinensis - Chinese plum yew
Cephalotaxus wilsoniana - Taiwan plum yew, Taiwan cow's-tail pine, or Wilson plum yew
Pseudotaxus – White-berry yew
Pseudotaxus chienii - Whiteberry yew
Taxus – Common yew
Taxus baccata - European yew
Taxus biternata - Delicate branch yew
Taxus brevifolia - Pacific yew, western yew
Taxus caespitosa - Caespitosa yew
Taxus calcicola - Asian limestone yew
Taxus canadensis - Canada yew
Taxus celebica - Celebes yew
Taxus chinensis - China yew
Taxus contorta - West Himalayan yew
Taxus cuspidata - Rigid branch yew, Japanese yew
Taxus fastigiata - Irish yew
Taxus floridana - Florida yew
Taxus florinii - Florin yew
Taxus globosa - Mesoamerican yew
Taxus kingstonii - Kingston yew
Taxus mairei - Maire yew
Taxus obscura - Obscure yew
Taxus ocreata - Scaly yew
Taxus phytonii - Phyton yew
Taxus recurvata - English yew
Taxus rehderiana - Rehder yew
Taxus scutata - Scutaceous yew
Taxus suffnessii - Suffness yew
Taxus sumatrana - Sumatera yew
Taxus umbraculifera - Umbrelliform yew
Taxus wallichiana - Wallich yew, East Himalayan yew
Torreya – Nutmeg yew
Torreya californica - California torreya
Torreya fargesii - Farges nutmeg tree
Torreya grandis - Chinese nutmeg yew
Torreya jackii - Jack's nutmeg tree, longleaf torreya, etc.
Torreya nucifera - kaya, Japanese torreya, or Japanese nutmeg-yew
Torreya taxifolia - Gopher wood
Torreya clarnensis
Biology and health sciences
Pinophyta (Conifers)
Plants
242299
https://en.wikipedia.org/wiki/Maxilla
Maxilla
In vertebrates, the maxilla (plural: maxillae) is the upper fixed (not fixed in Neopterygii) bone of the jaw formed from the fusion of two maxillary bones. In humans, the upper jaw includes the hard palate in the front of the mouth (Merriam-Webster Online Dictionary). The two maxillary bones are fused at the intermaxillary suture, forming the anterior nasal spine. This is similar to the mandible (lower jaw), which is also a fusion of two mandibular bones at the mandibular symphysis. The mandible is the movable part of the jaw.

Anatomy

Structure

The maxilla is a paired bone - the two maxillae unite with each other at the intermaxillary suture. The maxilla consists of: The body of the maxilla: pyramid-shaped; has an orbital, a nasal, an infratemporal, and a facial surface; contains the maxillary sinus. Four processes: the zygomatic process, the frontal process, the alveolar process, and the palatine process. It has three surfaces: the anterior, posterior, and medial. Features of the maxilla include: the infraorbital sulcus, canal, and foramen; the maxillary sinus; and the incisive foramen.

Articulations

Each maxilla articulates with nine bones: the frontal, ethmoid, nasal, zygomatic, lacrimal, and palatine bones, the vomer, the inferior nasal concha, as well as the maxilla of the other side. Sometimes it articulates with the orbital surface, and sometimes with the lateral pterygoid plate of the sphenoid.

Development

The maxilla is ossified in membrane. Mall and Fawcett maintain that it is ossified from two centers only, one for the maxilla proper and one for the premaxilla. These centers appear during the sixth week of prenatal development and unite in the beginning of the third month, but the suture between the two portions persists on the palate until nearly middle life. Mall states that the frontal process is developed from both centers.
The maxillary sinus appears as a shallow groove on the nasal surface of the bone about the fourth month of development, but does not reach its full size until after the second dentition. The maxilla was formerly described as ossifying from six centers, viz.: One, the orbitonasal, forms that portion of the body of the bone which lies medial to the infraorbital canal, including the medial part of the floor of the orbit and the lateral wall of the nasal cavity. A second, the zygomatic, gives origin to the portion which lies lateral to the infraorbital canal, including the zygomatic process. From a third, the palatine, is developed the palatine process posterior to the incisive canal together with the adjoining part of the nasal wall. A fourth, the premaxillary, forms the incisive bone which carries the incisor teeth and corresponds to the premaxilla of the lower vertebrates. A fifth, the nasal, gives rise to the frontal process and the portion above the canine tooth. And a sixth, the infravomerine,'' lies between the palatine and premaxillary centers and beneath the vomer; this center, together with the corresponding center of the opposite bone, separates the incisive canals from each other. Changes by age At birth the transverse and antero-posterior diameters of the bone are each greater than the vertical. The frontal process is well-marked and the body of the bone consists of little more than the alveolar process, the teeth sockets reaching almost to the floor of the orbit. The maxillary sinus presents the appearance of a furrow on the lateral wall of the nose. In the adult the vertical diameter is the greatest, owing to the development of the alveolar process and the increase in size of the sinus. Function The alveolar process of the maxillae holds the upper teeth, and is referred to as the maxillary arch. Each maxilla attaches laterally to the zygomatic bones (cheek bones). 
Each maxilla assists in forming the boundaries of three cavities: the roof of the mouth; the floor and lateral wall of the nasal cavity; and the wall of the orbit. Each maxilla also enters into the formation of two fossae, the infratemporal and pterygopalatine, and two fissures, the inferior orbital and pterygomaxillary.

Clinical significance

A maxilla fracture is a form of facial fracture. A maxilla fracture is often the result of facial trauma such as violence, falls or automobile accidents. Maxilla fractures are classified according to the Le Fort classification.

In other animals

Sometimes (e.g. in bony fish), the maxilla is called "upper maxilla", with the mandible being the "lower maxilla". Conversely, in birds the upper jaw is often called "upper mandible". In most vertebrates, the foremost part of the upper jaw, to which the incisors are attached in mammals, consists of a separate pair of bones, the premaxillae. These fuse with the maxilla proper to form the bone found in humans and some other mammals. In bony fish, amphibians, and reptiles, both maxilla and premaxilla are relatively plate-like bones, forming only the sides of the upper jaw, and part of the face, with the premaxilla also forming the lower boundary of the nostrils. However, in mammals, the bones have curved inward, creating the palatine process and thereby also forming part of the roof of the mouth. Birds do not have a maxilla in the strict sense; the corresponding part of their beaks (mainly consisting of the premaxilla) is called "upper mandible". Cartilaginous fish, such as sharks, also lack a true maxilla. Their upper jaw is instead formed from a cartilaginous bar that is not homologous with the bone found in other vertebrates.

Additional images
Biology and health sciences
Skeletal system
Biology
242643
https://en.wikipedia.org/wiki/Calcium%20carbide
Calcium carbide
Calcium carbide, also known as calcium acetylide, is a chemical compound with the chemical formula of CaC2. Its main use industrially is in the production of acetylene and calcium cyanamide. The pure material is colorless, while pieces of technical-grade calcium carbide are grey or brown and consist of about 80–85% of CaC2 (the rest is CaO (calcium oxide), Ca3P2 (calcium phosphide), CaS (calcium sulfide), Ca3N2 (calcium nitride), SiC (silicon carbide), C (carbon), etc.). In the presence of trace moisture, technical-grade calcium carbide emits an unpleasant odor reminiscent of garlic. Applications of calcium carbide include manufacture of acetylene gas, generation of acetylene in carbide lamps, manufacture of chemicals for fertilizer, and steelmaking. Production Calcium carbide is produced industrially in an electric arc furnace from a mixture of lime and coke at approximately . This is an endothermic reaction requiring per mole and high temperatures to drive off the carbon monoxide. This method has not changed since its invention in 1892: CaO + 3 C → CaC2 + CO The high temperature required for this reaction is not practically achievable by traditional combustion, so the reaction is performed in an electric arc furnace with graphite electrodes. The carbide product produced generally contains around 80% calcium carbide by weight. The carbide is crushed to produce small lumps that can range from a few mm up to 50 mm. The impurities are concentrated in the finer fractions. The CaC2 content of the product is assayed by measuring the amount of acetylene produced on hydrolysis. As an example, the British and German standards for the content of the coarser fractions are 295 L/kg and 300 L/kg respectively (at 101 kPa pressure and temperature). Impurities present in the carbide include calcium phosphide, which produces phosphine when hydrolysed. 
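The acetylene-assay figures quoted above can be checked against the ideal gas law (a rough sketch; the 80% CaC2 content and the 15 °C reference temperature are assumptions for illustration, since the standards' reference temperature is not given here):

```python
R = 8.314          # J/(mol*K), molar gas constant
M_CAC2 = 64.10     # g/mol, molar mass of CaC2
P = 101_000        # Pa, pressure quoted in the standards
T = 288.15         # K (15 deg C, assumed reference temperature)

# Hydrolysis: CaC2 + 2 H2O -> C2H2 + Ca(OH)2,
# i.e. one mole of acetylene per mole of carbide.
mol_per_kg = 1000 / M_CAC2                    # ~15.6 mol CaC2 per kg
litres_pure = mol_per_kg * R * T / P * 1000   # ideal-gas volume, litres/kg

# Technical-grade carbide at an assumed ~80% CaC2 content:
litres_technical = 0.80 * litres_pure

# litres_pure is about 370 L/kg; litres_technical is about 296 L/kg,
# close to the 295-300 L/kg figures quoted for the coarser fractions.
```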
This reaction was an important part of the Industrial Revolution in chemistry, and was made possible in the United States as a result of massive amounts of inexpensive hydroelectric power produced at Niagara Falls before the turn of the 20th century. The electric arc furnace method was discovered in 1892 by T. L. Willson, and independently in the same year by H. Moissan. In Jajce, Bosnia and Herzegovina, the Austrian industrialist Josef Kranz and his "Bosnische-Elektrizitäts AG" company, whose successor later became "Elektro-Bosna", opened the largest chemical factory for the production of calcium carbide at the time in Europe in 1899. A hydroelectric power station on the Pliva river with an installed capacity of 8 MW was constructed to supply electricity for the factory, the first power station of its kind in Southeast Europe, and became operational on 24 March 1899.

Crystal structure

Pure calcium carbide is a colourless solid. The common crystalline form at room temperature is a distorted rock-salt structure with the C2²⁻ units lying parallel. There are three different polymorphs which appear at room temperature: the tetragonal structure and two different monoclinic structures.

Applications

Production of acetylene

The reaction of calcium carbide with water, producing acetylene and calcium hydroxide, was discovered by Friedrich Wöhler in 1862: CaC2(s) + 2 H2O(l) → C2H2(g) + Ca(OH)2(aq). This reaction was the basis of the industrial manufacture of acetylene, and is the major industrial use of calcium carbide. Today acetylene is mainly manufactured by the partial combustion of methane or appears as a side product in the ethylene stream from cracking of hydrocarbons. Approximately 400,000 tonnes are produced this way annually (see acetylene preparation). In China, acetylene derived from calcium carbide remains a raw material for the chemical industry, in particular for the production of polyvinyl chloride.
Locally produced acetylene is more economical than using imported oil. Production of calcium carbide in China has been increasing. In 2005 output was 8.94 million tons, with the capacity to produce 17 million tons. In the United States, Europe, and Japan, consumption of calcium carbide is generally declining. Production levels in the US during the 1990s were 236,000 tons per year. Production of calcium cyanamide Calcium carbide reacts with nitrogen at high temperature to form calcium cyanamide: CaC2 + N2 → CaCN2 + C Commonly known as nitrolime, calcium cyanamide is used as fertilizer. It is hydrolysed to cyanamide, H2NCN. Steelmaking Calcium carbide is used: in the desulfurization of iron (pig iron, cast iron and steel) as a fuel in steelmaking to extend the scrap ratio to liquid iron, depending on economics. as a powerful deoxidizer at ladle treatment facilities. Carbide lamps Calcium carbide is used in carbide lamps. Water dripping on carbide produces acetylene gas, which burns and produces light. While these lamps gave steadier and brighter light than candles, they were dangerous in coal mines, where flammable methane gas made them a serious hazard. The presence of flammable gases in coal mines led to miner safety lamps such as the Davy lamp, in which a wire gauze reduces the risk of methane ignition. Carbide lamps were still used extensively in slate, copper, and tin mines where methane is not a serious hazard. Most miners' lamps have now been replaced by electric lamps. Carbide lamps are still used for mining in some less wealthy countries, for example in the silver mines near Potosí, Bolivia. Carbide lamps are also still used by some cavers exploring caves and other underground areas, although they are increasingly being replaced in this use by LED lights. Carbide lamps were also used extensively as headlamps in early automobiles, motorcycles and bicycles, but have been replaced entirely by electric lamps. 
Other uses Calcium carbide is sometimes used as a source of acetylene, which, like ethylene gas, is a ripening agent. However, this is illegal in some countries as, in the production of acetylene from calcium carbide, contamination often leads to trace production of phosphine and arsine. These impurities can be removed by passing the acetylene gas through acidified copper sulfate solution, but, in developing countries, this precaution is often neglected. Calcium carbide is used in toy cannons such as the Big-Bang Cannon, as well as in bamboo cannons. In the Netherlands calcium carbide is used around New Year to shoot with milk churns. Calcium carbide, together with calcium phosphide, is used in floating, self-igniting naval signal flares, such as those produced by the Holmes' Marine Life Protection Association. Calcium carbide is used to determine the moisture content of soil. When soil and calcium carbide are mixed in a closed pressure cylinder, the water content in soil reacts with calcium carbide to release acetylene whose pressure can be measured to determine the moisture content. Calcium carbide is sold commercially as a mole repellent. When it comes into contact with water, the gas produced drives moles away.
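The soil-moisture test just described can be sketched with the ideal gas law. Real carbide moisture meters rely on factory calibration curves, so the sealed vessel volume, temperature, and complete-reaction assumption used here are purely illustrative:

```python
# Illustrative model of the calcium-carbide soil moisture test described above.
# Assumes the water reacts completely (CaC2 + 2 H2O -> C2H2 + Ca(OH)2) and
# that the acetylene behaves as an ideal gas in the sealed vessel.

R = 8.314        # molar gas constant, J/(mol*K)
M_H2O = 18.015   # g/mol

def water_mass_g(pressure_pa: float, vessel_volume_m3: float,
                 temp_k: float = 293.15) -> float:
    """Water mass (g) inferred from the acetylene partial pressure."""
    n_c2h2 = pressure_pa * vessel_volume_m3 / (R * temp_k)  # ideal gas law
    return 2.0 * n_c2h2 * M_H2O                             # 2 mol H2O per mol C2H2

def moisture_percent(pressure_pa, vessel_volume_m3, wet_soil_g, temp_k=293.15):
    """Gravimetric moisture content relative to the wet sample mass."""
    return 100.0 * water_mass_g(pressure_pa, vessel_volume_m3, temp_k) / wet_soil_g

# e.g. 100 kPa of acetylene in a 0.5 L vessel from a 20 g sample:
print(round(moisture_percent(100e3, 0.5e-3, 20.0), 2))  # ~3.7 % moisture
```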
Physical sciences
Carbide salts
Chemistry
242666
https://en.wikipedia.org/wiki/Transducer
Transducer
A transducer is a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another. Transducers are often employed at the boundaries of automation, measurement, and control systems, where electrical signals are converted to and from other physical quantities (energy, force, torque, light, motion, position, etc.). The process of converting one form of energy to another is known as transduction. Types Mechanical transducers convert physical quantities into mechanical outputs or vice versa; Electrical transducers convert physical quantities into electrical outputs or signals. Examples of these are: a thermocouple that changes temperature differences into a small voltage; a linear variable differential transformer (LVDT), used to measure displacement (position) changes by means of electrical signals. Sensors, actuators and transceivers Transducers can be categorized by the direction information passes through them: A sensor is a transducer that receives and responds to a signal or stimulus from a physical system. It produces a signal, which represents information about the system, which is used by some type of telemetry, information or control system. An actuator is a device that is responsible for moving or controlling a mechanism or system. It is controlled by a signal from a control system or manual control. It is operated by a source of energy, which can be mechanical force, electrical current, hydraulic fluid pressure, or pneumatic pressure, and converts that energy into motion. An actuator is the mechanism by which a control system acts upon an environment. The control system can be simple (a fixed mechanical or electrical system), software-based (e.g. a printer driver, robot control system), a human, or any other input. Bidirectional transducers can convert physical phenomena to electrical signals and electrical signals into physical phenomena. 
An example of an inherently bidirectional transducer is an antenna, which can convert radio waves (electromagnetic waves) into an electrical signal to be processed by a radio receiver, or translate an electrical signal from a transmitter into radio waves. Another example is a voice coil, which is used in loudspeakers to translate an electrical audio signal into sound, and in dynamic microphones to translate sound waves into an audio signal. Transceivers integrate simultaneous bidirectional functionality. The most ubiquitous examples are likely radio transceivers (called transponders in aircraft), used in virtually every form of wireless (tele-)communications and network device connections. Another example is ultrasound transceivers that are used for instance in medical ultrasound (echo) scans. Active vs passive transducers Passive transducers require an external power source to operate, which is called an excitation signal. The signal is modulated by the sensor to produce an output signal. For example, a thermistor does not generate any electrical signal, but by passing an electric current through it, its resistance can be measured by detecting variations in the current or voltage across the thermistor. Active transducers, in contrast, generate an electric current in response to an external stimulus, which serves as the output signal without the need for an additional energy source. Examples include photodiodes, piezoelectric sensors, photovoltaic cells, and thermocouples. Characteristics Some specifications that are used to rate transducers: Dynamic range: This is the ratio between the largest amplitude signal and the smallest amplitude signal the transducer can effectively translate. Transducers with larger dynamic range are more "sensitive" and precise. Repeatability: This is the ability of the transducer to produce an identical output when stimulated by the same input. Noise: All transducers add some random noise to their output.
In electrical transducers this may be electrical noise due to thermal motion of charges in circuits. Noise corrupts small signals more than large ones. Hysteresis: This is a property in which the output of the transducer depends not only on its current input but also on its past input. For example, an actuator which uses a gear train may have some backlash, which means that if the direction of motion of the actuator reverses, there will be a dead zone before the output of the actuator reverses, caused by play between the gear teeth. Applications Electromagnetic Antennae – convert propagating electromagnetic waves to and from conducted electrical signals Magnetic cartridges – convert relative physical motion to and from electrical signals Tape head, disk read-and-write heads – convert magnetic fields on a magnetic medium to and from electrical signals Hall effect sensors – convert a magnetic field level into an electrical signal Variable reluctance sensors – the movement of nearby ferrous metal objects induces an alternating current electrical signal Pickups – detect movement of metal strings and induce an electrical signal (AC voltage) Electrochemical pH probes Electro-galvanic oxygen sensors Hydrogen sensors Potentiometric sensor Electromechanical Electromechanical input feeds meters and sensors, while electromechanical output devices are generically called actuators: Accelerometers Air flow sensors Electroactive polymers Rotary motors, linear motors Galvanometers Linear variable differential transformers or rotary variable differential transformers Load cells – convert force to mV/V electrical signal using strain gauges Microelectromechanical systems Potentiometers (when used for measuring position) Pressure sensors String potentiometers Tactile sensors Vibration powered generators Vibrating structure gyroscopes Electroacoustic Loudspeakers, earphones – convert electrical signals into sound (amplified signal → magnetic field → motion → air pressure) Microphones –
convert sound into an electrical signal (air pressure → motion of conductor/coil → magnetic field → electrical signal) Tactile transducers – convert electrical signal into vibration (electrical signal → vibration) Thermophones – convert electrical signals into temperature fluctuations, which become sound (electrical signal → periodic heating of a thin conductor → temperature waves → sound waves) Piezoelectric crystals – convert deformations of solid-state crystals (vibrations) to and from electrical signals Geophones – convert a ground movement (displacement) into voltage (vibrations → motion of conductor/coil → magnetic field → signal) Gramophone pickups – (air pressure → motion → magnetic field → electrical signal) Hydrophones – convert changes in water pressure into an electrical signal Sonar transponders (water pressure → motion of conductor/coil → magnetic field → electrical signal) Ultrasonic transceivers, transmitting ultrasound (transduced from electricity) as well as receiving it after sound reflection from target objects, allowing imaging of those objects Electro-optical Also known as photoelectric: Fluorescent lamps – convert electrical power into incoherent light Incandescent lamps – convert electrical power into incoherent light Light-emitting diodes – convert electrical power into incoherent light Laser diodes – convert electrical power into coherent light Photodiodes, photoresistors, phototransistors, photomultipliers – convert changing light levels into electrical signals Photodetector or photoresistor or light dependent resistor (LDR) – convert changes in light levels into changes in electrical resistance Cathode-ray tube (CRT) – converts electrical signals into visual signals Electrostatic Electrometers Thermoelectric Resistance temperature detectors (RTD) – convert temperature into an electrical resistance signal Thermocouples – convert relative temperatures of metallic junctions to electrical voltage Thermistors (includes PTC resistor and NTC
resistor) Radioacoustic Geiger-Müller tubes – convert incident ionizing radiation to an electrical impulse signal Radio receivers – convert electromagnetic transmissions to electrical signals. Radio transmitters – convert electrical signals to electromagnetic transmissions.
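The thermistor mentioned under passive transducers converts an excited resistance measurement into temperature. A common conversion is the NTC beta equation; the R0 = 10 kΩ at 25 °C and B = 3950 K used below are typical catalog values assumed for illustration, and real devices often use the more accurate Steinhart–Hart fit:

```python
import math

# NTC thermistor read-out via the beta equation:
#   1/T = 1/T0 + (1/B) * ln(R/R0)
# Typical (assumed) catalog values: R0 = 10 kOhm at T0 = 25 degC, B = 3950 K.

R0, T0, BETA = 10_000.0, 298.15, 3950.0

def temperature_c(resistance_ohm: float) -> float:
    """Temperature (degC) inferred from a measured NTC resistance."""
    inv_t = 1.0 / T0 + math.log(resistance_ohm / R0) / BETA
    return 1.0 / inv_t - 273.15

print(round(temperature_c(10_000.0), 2))  # 25.0 at the nominal point
```

Because the device is an NTC (negative temperature coefficient), resistances below R0 map to temperatures above 25 °C and vice versa.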
Technology
Components
null
242688
https://en.wikipedia.org/wiki/Mole%20salamander
Mole salamander
The mole salamanders (genus Ambystoma) are a group of advanced salamanders endemic to North America. The group has become famous due to the study of the axolotl (A. mexicanum) in research on paedomorphosis, and the tiger salamander (A. tigrinum, A. mavortium) which is often sold as a pet, and is the official amphibian of four US states. General description Terrestrial mole salamanders are identified by having wide, protruding eyes, prominent costal grooves, and thick arms. Most have vivid patterning on dark backgrounds, with marks ranging from deep blue spots to large yellow bars depending on the species. Terrestrial adults spend most of their lives underground in burrows, either of their own making or abandoned by other animals. Some northern species may hibernate in these burrows throughout the winter. They live alone and feed on any available invertebrate. Adults spend little time in the water, only returning to the ponds of their birth to breed. All mole salamanders are oviparous and lay large eggs in clumps in the water. Their fully aquatic larvae are branchiate, with three pairs of external gills behind their heads and above their gill slits. Larvae have large caudal fins, which extend from the back of their heads to their tails and to their cloacae. Larvae grow limbs soon after hatching, with four toes on the fore arms, and five toes on the hind legs. Their eyes are wide-set and lack true eyelids. The larvae of some species (especially those in the south, and tiger salamanders) can reach their adult size before undergoing metamorphosis. During metamorphosis, the gills of the larvae disappear, as do the fins. Their tails, skin, and limbs become thicker, and the eyes develop lids. Their lungs become fully developed, allowing for a fully terrestrial existence. Some species of mole salamanders (as well as populations of normally terrestrial species) are neotenic (retaining their larval form into adulthood). The most famous example is the axolotl. 
They cannot produce thyroxine, so metamorphosis can only be induced by administering the hormone externally; this usually shortens the lifespan of the salamander. Tiger salamander complex Morphologically, tiger salamanders (Ambystoma tigrinum complex) have large heads, small eyes, and thick bodies. This basic morphology is similar across most mole salamanders (genus Ambystoma), though tiger salamanders are among the largest of the mole salamanders, and have relatively large larvae. Tiger salamanders inhabit a wide variety of ecosystems across North America. Given this geographic diversity, subpopulations of tiger salamanders exhibit morphological and behavioral diversity. Whether subpopulations constitute independent species or subspecies within the Ambystoma tigrinum complex, as well as the driving forces behind diversification, remains an active area of research as of 2024. Several subspecies within the Ambystoma tigrinum complex have been reclassified as independent species. For example: Ambystoma mavortium (barred tiger salamander) comprises former subspecies A. t. diaboli, A. t. mavortium, A. t. melanostictum, A. t. nebulosum, and A. t. stebbinsi. Ambystoma californiense (California tiger salamander) Ambystoma velasci (Plateau tiger salamander), which may be paraphyletic and shares habitats with the axolotl (A. mexicanum) Hybrid all-female populations Unisexual (all-female) populations of ambystomatid salamanders are widely distributed across the Great Lakes region and northeastern North America. The females require sperm from a co‑occurring, related species to fertilize their eggs and initiate development. Usually the eggs then discard the sperm genome and develop asexually (i.e., gynogenesis, with premeiotic doubling); however, they may incorporate the genome from the sperm into the resulting offspring.
Sperm incorporation commonly takes the form of genome addition (resulting in ploidy elevation in the offspring), or genome replacement, wherein one of the maternal genomes is discarded. This unique mode of reproduction has been termed kleptogenesis by Bogart and colleagues. This is in contrast to hybridogenesis, where the maternal genomes are passed hemiclonally and the paternal genome is discarded every generation before the egg matures and reacquired from the sperm of another species. The nuclear DNA of the unisexuals generally comprises genomes from up to five species: the blue-spotted salamander (A. laterale), Jefferson salamander (A. jeffersonianum), small-mouthed salamander (A. texanum), streamside salamander (A. barbouri), and tiger salamander (A. tigrinum), denoted respectively as L, J, Tx, B, and Ti. This flexibility results in a large number of possible nuclear biotypes (genome combinations) in the unisexuals. For example, an LJJ individual would be a triploid with one A. laterale genome and two A. jeffersonianum genomes, while an LTxJTi individual would be a tetraploid with genomes from four species. Because they have hybrid genomes, unisexual salamanders are a cryptic species with morphology similar to coexisting species. For example, LLJs look like blue-spotted salamanders and LJJs look like Jefferson salamanders. Silvery salamanders LJJ (A. platineum), Tremblay's salamanders LLJ (A. tremblayi), and Kelly's Island salamanders LTxTx and LTxTi (A. nothagenes) were initially described as species. Species names were later dropped for all unisexual salamanders because of the complexity of their genomes. The offspring of a single mother may have different genome complements; for example, a single egg mass may have both LLJJ and LJJ larvae. Despite the complexity of the nuclear genome, all unisexuals form a monophyletic group based on their mitochondrial DNA. 
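As an illustration (not part of the source), the genome-combination notation above can be parsed mechanically to recover ploidy and per-species genome counts; the two-letter codes Tx and Ti must be matched before the one-letter codes:

```python
from collections import Counter

# Parser for the unisexual-Ambystoma biotype notation described above,
# where L, J, Tx, B and Ti denote genomes of A. laterale, A. jeffersonianum,
# A. texanum, A. barbouri and A. tigrinum respectively.

CODES = ("Tx", "Ti", "L", "J", "B")  # try two-letter codes first

def parse_biotype(biotype: str) -> Counter:
    """Map a biotype string such as 'LJJ' to genome counts."""
    genomes, i = Counter(), 0
    while i < len(biotype):
        for code in CODES:
            if biotype.startswith(code, i):
                genomes[code] += 1
                i += len(code)
                break
        else:
            raise ValueError(f"unknown genome code at {biotype[i:]!r}")
    return genomes

def ploidy(biotype: str) -> int:
    """Total genome count: LJJ is triploid, LTxJTi is tetraploid."""
    return sum(parse_biotype(biotype).values())

print(ploidy("LJJ"))                  # 3
print(dict(parse_biotype("LTxJTi")))  # {'L': 1, 'Tx': 1, 'J': 1, 'Ti': 1}
```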
The maternal ancestor of the unisexual ambystomatids was most closely related to the streamside salamander, with the original hybridization likely occurring 2.4–3.9 million years ago, making it the oldest known lineage of all-female vertebrates. The hybridization was most probably with an A. laterale. All known unisexuals have at least one A. laterale genome and this is thought to be essential for unisexuality. However, the A. laterale genome has been replaced several times, independently, in each of the lineages by matings with A. laterale. Limb regeneration Ambystoma mexicanum, a neotenic salamander with exceptional regenerative capabilities, is one of the principal models for studying limb regeneration. Limb regeneration involves the propagation of a mass of low differentiated and highly proliferative cells termed the blastema. During limb regeneration, blastema cells experience DNA double-strand breaks and thus require homologous recombination, a form of DNA repair that deals with double-strand breaks. Taxonomy Rhyacosiredon was previously considered a separate genus within the family Ambystomatidae. However, cladistic analysis of the mole salamanders found the existence of Rhyacosiredon makes Ambystoma paraphyletic, since the species are more closely related to some Ambystoma species than those species are to others in Ambystoma. The stream-type morphology of these salamanders (which includes larvae and neotenes with short gills and thicker gular folds) may have led to their misclassification as a different genus. The genus name Ambystoma was given by Johann Jakob von Tschudi in 1839, and is traditionally translated as "cup-mouth". Tschudi did not provide a derivation for the name, and many thought that he intended the name Amblystoma, "blunt-mouth." Occasionally, old specimens and documents use the name Amblystoma.
Writing in 1907, Leonhard Stejneger offered a derivation of Ambystoma based on the contraction of a Greek phrase meaning "to cram into the mouth," but others have not found this explanation convincing. In the absence of clear evidence that Tschudi committed a lapsus, the name given in 1839 stands. Species The genus Ambystoma contains 32 species, listed below, the newest being A. bishopi. Some species are terrestrial, others are neotenic, and some species have established populations of both neotenic and terrestrial forms. In addition, two groups of unisexual hybrid populations are sometimes given their own species names: Silvery salamander (A. platineum) Tremblay's salamander (A. tremblayi)
Biology and health sciences
Salamanders and newts
Animals
242695
https://en.wikipedia.org/wiki/Class%20field%20theory
Class field theory
In mathematics, class field theory (CFT) is the fundamental branch of algebraic number theory whose goal is to describe all the abelian Galois extensions of local and global fields using objects associated to the ground field. Hilbert is credited as one of the pioneers of the notion of a class field. However, this notion was already familiar to Kronecker and it was actually Weber who coined the term before Hilbert's fundamental papers came out. The relevant ideas were developed over several decades, giving rise to a set of conjectures by Hilbert that were subsequently proved by Takagi and Artin (with the help of Chebotarev's theorem). One of the major results is: given a number field F, and writing K for the maximal abelian unramified extension of F, the Galois group of K over F is canonically isomorphic to the ideal class group of F. This statement was generalized to the so-called Artin reciprocity law; in the idelic language, writing CF for the idele class group of F, and taking L to be any finite abelian extension of F, this law gives a canonical isomorphism Gal(L/F) ≅ CF/NL/F(CL), where NL/F denotes the idelic norm map from L to F. This isomorphism is named the reciprocity map. The existence theorem states that the reciprocity map can be used to give a bijection between the set of abelian extensions of F and the set of closed subgroups of finite index of CF. A standard method for developing global class field theory since the 1930s was to construct local class field theory, which describes abelian extensions of local fields, and then use it to construct global class field theory. This was first done by Emil Artin and Tate using the theory of group cohomology, and in particular by developing the notion of class formations. Later, Neukirch found a proof of the main statements of global class field theory without using cohomological ideas. His method was explicit and algorithmic. Inside class field theory one can distinguish special class field theory and general class field theory.
Explicit class field theory provides an explicit construction of maximal abelian extensions of a number field in various situations. This portion of the theory consists of the Kronecker–Weber theorem, which can be used to construct the abelian extensions of ℚ, and the theory of complex multiplication to construct abelian extensions of CM-fields. There are three main generalizations of class field theory: higher class field theory, the Langlands program (or 'Langlands correspondences'), and anabelian geometry. Formulation in contemporary language In modern mathematical language, class field theory (CFT) can be formulated as follows. Consider the maximal abelian extension A of a local or global field K. It is of infinite degree over K; the Galois group G of A over K is an infinite profinite group, so a compact topological group, and it is abelian. The central aims of class field theory are: to describe G in terms of certain appropriate topological objects associated to K, to describe finite abelian extensions of K in terms of open subgroups of finite index in the topological object associated to K. In particular, one wishes to establish a one-to-one correspondence between finite abelian extensions of K and their norm groups in this topological object for K. This topological object is the multiplicative group in the case of local fields with finite residue field and the idele class group in the case of global fields. The finite abelian extension corresponding to an open subgroup of finite index is called the class field for that subgroup, which gave the name to the theory. The fundamental result of general class field theory states that the group G is naturally isomorphic to the profinite completion of CK, the multiplicative group of a local field or the idele class group of the global field, with respect to the natural topology on CK related to the specific structure of the field K.
Equivalently, for any finite Galois extension L of K, there is an isomorphism (the Artin reciprocity map) of the abelianization of the Galois group of the extension with the quotient of the idele class group of K by the image of the norm of the idele class group of L. For some small fields, such as the field of rational numbers ℚ or its imaginary quadratic extensions, there is a more detailed very explicit but too specific theory which provides more information. For example, the abelianized absolute Galois group G of ℚ is (naturally isomorphic to) an infinite product of the group of units of the p-adic integers taken over all prime numbers p, and the corresponding maximal abelian extension of the rationals is the field generated by all roots of unity. This is known as the Kronecker–Weber theorem, originally conjectured by Leopold Kronecker. In this case the reciprocity isomorphism of class field theory (or Artin reciprocity map) also admits an explicit description due to the Kronecker–Weber theorem. However, principal constructions of such more detailed theories for small algebraic number fields are not extendable to the general case of algebraic number fields, and different conceptual principles are in use in the general class field theory. The standard method to construct the reciprocity homomorphism is to first construct the local reciprocity isomorphism from the multiplicative group of the completion of a global field to the Galois group of its maximal abelian extension (this is done inside local class field theory) and then prove that the product of all such local reciprocity maps when defined on the idele group of the global field is trivial on the image of the multiplicative group of the global field. The latter property is called the global reciprocity law and is a far-reaching generalization of the Gauss quadratic reciprocity law.
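In display form, the statements just discussed can be written as follows (notation as in the text, with C_K the idele class group of a global field K and N_{L/K} the idelic norm):

```latex
% Artin reciprocity for a finite abelian extension L/K of global fields:
\operatorname{Gal}(L/K) \;\cong\; C_K / N_{L/K}(C_L).
%
% Kronecker--Weber description of the abelianized absolute Galois group
% of \mathbb{Q}, as a product over all primes p:
\operatorname{Gal}(\mathbb{Q}^{\mathrm{ab}}/\mathbb{Q})
  \;\cong\; \widehat{\mathbb{Z}}^{\times}
  \;\cong\; \prod_{p} \mathbb{Z}_{p}^{\times}.
```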
One of the methods to construct the reciprocity homomorphism uses class formation which derives class field theory from axioms of class field theory. This derivation is purely topological group theoretical, while to establish the axioms one has to use the ring structure of the ground field. There are methods which use cohomology groups, in particular the Brauer group, and there are methods which do not use cohomology groups and are very explicit and fruitful for applications. History The origins of class field theory lie in the quadratic reciprocity law proved by Gauss. The generalization took place as a long-term historical project, involving quadratic forms and their 'genus theory', work of Ernst Kummer and Leopold Kronecker/Kurt Hensel on ideals and completions, the theory of cyclotomic and Kummer extensions. The first two class field theories were very explicit cyclotomic and complex multiplication class field theories. They used additional structures: in the case of the field of rational numbers they use roots of unity, in the case of imaginary quadratic extensions of the field of rational numbers they use elliptic curves with complex multiplication and their points of finite order. Much later, the theory of Shimura provided another very explicit class field theory for a class of algebraic number fields. In positive characteristic p, Kawada and Satake used Witt duality to get a very easy description of the p-part of the reciprocity homomorphism. However, these very explicit theories could not be extended to more general number fields. General class field theory used different concepts and constructions which work over every global field. The famous problems of David Hilbert stimulated further development, which led to the reciprocity laws, and proofs by Teiji Takagi, Philipp Furtwängler, Emil Artin, Helmut Hasse and many others. The crucial Takagi existence theorem was known by 1920 and all the main results by about 1930.
One of the last classical conjectures to be proved was the principalisation property. The first proofs of class field theory used substantial analytic methods. The 1930s and subsequent years saw the increasing use of infinite extensions and Wolfgang Krull's theory of their Galois groups. This combined with Pontryagin duality to give a clearer if more abstract formulation of the central result, the Artin reciprocity law. An important step was the introduction of ideles by Claude Chevalley in the 1930s to replace ideal classes, essentially clarifying and simplifying the description of abelian extensions of global fields. Most of the central results were proved by 1940. Later the results were reformulated in terms of group cohomology, which became a standard way to learn class field theory for several generations of number theorists. One drawback of the cohomological method is its relative inexplicitness. As a result of local contributions by Bernard Dwork, John Tate, Michiel Hazewinkel and a local and global reinterpretation by Jürgen Neukirch and also in relation to the work on explicit reciprocity formulas by many mathematicians, a very explicit and cohomology-free presentation of class field theory was established in the 1990s. (See, for example, Class Field Theory by Neukirch.) Applications Class field theory is used to prove Artin–Verdier duality. Very explicit class field theory is used in many subareas of algebraic number theory such as Iwasawa theory and Galois modules theory. Most main achievements toward the Langlands correspondence for number fields, the BSD conjecture for number fields, and Iwasawa theory for number fields use very explicit but narrow class field theory methods or their generalizations. The open question is therefore to use generalizations of general class field theory in these three directions. Generalizations of class field theory There are three main generalizations, each of great interest.
They are: the Langlands program, anabelian geometry, and higher class field theory. Often, the Langlands correspondence is viewed as a nonabelian class field theory. If and when it is fully established, it would contain a certain theory of nonabelian Galois extensions of global fields. However, the Langlands correspondence does not include as much arithmetical information about finite Galois extensions as class field theory does in the abelian case. It also does not include an analog of the existence theorem in class field theory: the concept of class fields is absent in the Langlands correspondence. There are several other nonabelian theories, local and global, which provide alternatives to the Langlands correspondence point of view. Another generalization of class field theory is anabelian geometry, which studies algorithms to restore the original object (e.g. a number field or a hyperbolic curve over it) from the knowledge of its full absolute Galois group or algebraic fundamental group. Another natural generalization is higher class field theory, divided into higher local class field theory and higher global class field theory. It describes abelian extensions of higher local fields and higher global fields. The latter come as function fields of schemes of finite type over integers and their appropriate localizations and completions. It uses algebraic K-theory, and appropriate Milnor K-groups generalize the K1 used in one-dimensional class field theory.
Mathematics
Other
null
242702
https://en.wikipedia.org/wiki/Mpox
Mpox
Mpox (formerly known as monkeypox) is an infectious viral disease that can occur in humans and other animals. Symptoms include a rash that forms blisters and then crusts over, fever, and swollen lymph nodes. The illness is usually mild, and most infected individuals recover within a few weeks without treatment. The time from exposure to the onset of symptoms ranges from three to seventeen days, and symptoms typically last from two to four weeks. However, cases may be severe, especially in children, pregnant women, or people with suppressed immune systems. The disease is caused by the monkeypox virus, a zoonotic virus in the genus Orthopoxvirus. The variola virus, which causes smallpox, is also in this genus. Human-to-human transmission can occur through direct contact with infected skin or body fluids, including sexual contact. People remain infectious from the onset of symptoms until all the lesions have scabbed and healed. The virus may spread from infected animals through handling infected meat or via bites or scratches. Diagnosis can be confirmed by polymerase chain reaction (PCR) testing of a lesion for the virus's DNA. Vaccination is recommended for those at high risk of infection. No vaccine has been developed specifically against mpox, but smallpox vaccines have been found to be effective. There is no specific treatment for the disease, so the aim of treatment is to manage the symptoms and prevent complications. Antiviral drugs such as tecovirimat can be used to treat mpox, although their effectiveness has not been proven. Mpox is endemic in Central and Western Africa, where several species of mammals are suspected to act as a natural reservoir of the virus. The first human cases were diagnosed in 1970 in Basankusu, Democratic Republic of the Congo. Since then, the frequency and severity of outbreaks have significantly increased, possibly as a result of waning immunity since the cessation of routine smallpox vaccination.
A global outbreak of clade II in 2022–2023 marked the first incidence of widespread community transmission outside of Africa. In July 2022, the World Health Organization (WHO) declared the outbreak a public health emergency of international concern (PHEIC). The WHO reverted this status in May 2023, as the outbreak came under control, citing a combination of vaccination and public health information as successful control measures. An outbreak of a new variant of clade I mpox (known as clade Ib) was detected in the Democratic Republic of the Congo during 2023. As of August 2024, it had spread to several African countries, raising concerns that it may have adapted to more sustained human transmission. In August 2024, the WHO declared the outbreak a public health emergency of international concern. Nomenclature The name monkeypox was coined after the virus was found in 1958 during two outbreaks in research monkeys in Copenhagen, Denmark. Beginning during the 2022 outbreak, public health experts and researchers, particularly in Africa, urged the World Health Organization (WHO) to rename the disease. Social media has been rife with racist comments that associate the disease's name with African populations. Stigmatizing remarks had also wrongly identified monkeypox as a "gay disease," as gay men, bisexuals, and men who have sex with men are among the most affected globally. This stigma is thought to deter individuals from seeking diagnosis, vaccination, and treatment, reminiscent of the early days of the HIV/AIDS pandemic in the 1980s. Additionally, misinformation has incited violence against monkeys in certain regions, which were wrongly held accountable for transmitting monkeypox. The WHO put forth its approval for the new name mpox, which was gradually adopted as the preferred term in the International Classification of Diseases (ICD) after December 2023. The name change retains a connection to poxviruses while making it easier to spell in various languages.
The subtypes of mpox virus were also renamed; the clade formerly known as "Congo Basin (Central African)" was renamed clade I, and the clade formerly known as "West African" was renamed clade II. For the purpose of preserving access to historical records and facilitating research, the term monkeypox and the old subtype names will remain in the ICD database as searchable terms. Signs and symptoms Initial symptoms of mpox infection are fever, muscle pains, and sore throat, followed by an itchy or painful rash, headache, swollen lymph nodes, and fatigue. Not everyone will exhibit the complete range of symptoms. People with mpox usually become symptomatic about a week after infection. However, the incubation period can vary from one day to four weeks. The rash comprises numerous small lesions, which may appear on the palms, soles, face, mouth, throat, genitals, or anus. They begin as small flat spots, before developing into small bumps, which then fill with fluid, eventually bursting and scabbing over, typically lasting around ten days. In rare cases, lesions may become necrotic, requiring debridement and taking longer to heal. Some people may manifest only a single sore from the disease, while others may have hundreds. An individual can be infected with Orthopoxvirus monkeypox without showing any symptoms. Symptoms typically last for two to four weeks but may persist longer in people with weakened immune systems. Complications Complications include secondary infections, pneumonia, sepsis, encephalitis, and loss of vision following corneal infection. People with weakened immune systems, whether due to medication, medical conditions, or HIV, are more likely to develop severe cases of the disease. If infection occurs during pregnancy, it may lead to stillbirth or other complications. Outcome Provided there are no complications, sequelae are rare; after healing, the scabs may leave pale marks before becoming darker scars. 
Deaths Historically, the case fatality rate (CFR) of past outbreaks was estimated at between 1% and 10%, with clade I considered to be more severe than clade II. The case fatality rate of the 2022–2023 global outbreak caused by clade IIb was very low, estimated at 0.16%, with the majority of deaths in individuals who were already immunocompromised. In contrast, the outbreak of clade I in the Democratic Republic of the Congo has a CFR of 4.9%. The difference between these estimates is attributed to several factors: differences in the virulence of clade I versus clade II; under-reporting of mild or asymptomatic cases in the endemic areas of Africa, which generally have poor healthcare infrastructure; evolution of the virus to cause milder disease in humans; and better general health, and better health care, in the populations most affected by the 2022–2023 global outbreak. In other animals It is thought that small mammals provide a reservoir for the virus in endemic areas. Spread among animals occurs via the fecal–oral route and through the nose, through wounds, and by eating infected meat. The disease has also been reported in a wide range of other animals, including monkeys, anteaters, hedgehogs, prairie dogs, squirrels, and shrews. Signs and symptoms in animals are not well researched and further studies are in progress. There have been instances of animal infection outside of endemic Africa; during the 2003 US outbreak, prairie dogs (Cynomys ludovicianus) became infected and presented with fever, cough, sore eyes, poor feeding and rash. There has also been an instance of a domestic dog (Canis familiaris) which became infected, displaying lesions and ulceration. Cause Mpox in both humans and animals is caused by infection with Orthopoxvirus monkeypox – a double-stranded DNA virus in the genus Orthopoxvirus, family Poxviridae, making it closely related to the smallpox, cowpox, and vaccinia viruses. The two major subtypes of the virus are clade I and clade II. 
In April 2024, after detection of a new variant, clade I was split into subclades designated Ia and Ib. Clade II is similarly divided into subclades: clade IIa and clade IIb. Clade I is estimated to cause more severe disease and higher mortality than clade II. The virus is considered to be endemic in tropical rainforest regions of Central and West Africa. In addition to monkeys, the virus has been identified in Gambian pouched rats (Cricetomys gambianus), dormice (Graphiurus spp.) and African squirrels (Heliosciurus and Funisciurus). The use of these animals as food may be an important source of transmission to humans. Transmission The natural reservoir of Orthopoxvirus monkeypox is thought to be small mammals in tropical Africa. The virus can be transmitted from animal to human through bites or scratches, or during activities such as hunting, skinning, or cooking infected animals. The virus enters the body through broken skin, or mucosal surfaces such as the mouth, respiratory tract, or genitals. Mpox can be transmitted from one person to another through contact with infectious lesion material or fluid on the skin, in the mouth or on the genitals; this includes touching, close contact, and contact during sex. During the 2022–2023 global outbreak of clade II, transmission between people was almost exclusively via sexual contact. There is also a risk of infection from fomites (objects which can become infectious after being touched by an infected person), such as clothing or bedding which has been contaminated with lesion material. Diagnosis Clinical differential diagnosis distinguishes mpox from other rash illnesses, such as chickenpox, measles, bacterial skin infections, scabies, poison ivy, syphilis, and medication-associated allergies. Polymerase chain reaction (PCR) testing of samples from skin lesions is the preferred diagnostic test, although it has the disadvantage of being relatively slow to deliver a result. 
In October 2024, the WHO approved the first diagnostic test under the Emergency Use Listing (EUL) procedure. The Alinity m MPXV assay enables detection of the virus by laboratory testing of swabs of skin lesions, giving a result in less than two hours. Prevention Vaccine Historically, smallpox vaccine had been reported to reduce the risk of mpox among previously vaccinated persons in Africa. The decrease in immunity to poxviruses in exposed populations is a factor in the increasing prevalence of human mpox. It is attributed to waning cross-protective immunity among those vaccinated before 1980, when mass smallpox vaccinations were discontinued, and to the gradually increasing proportion of unvaccinated individuals. As of August 2024, there are four vaccines in use to prevent mpox, all originally developed to combat smallpox: MVA-BN (marketed as Jynneos, Imvamune or Imvanex), manufactured by Bavarian Nordic and licensed for use against mpox in Europe, the United States and Canada; LC16, from KM Biologics (Japan), licensed for use in Japan; OrthopoxVac, manufactured by the State Research Center of Virology and Biotechnology VECTOR and licensed for use in Russia; and ACAM2000, manufactured by Emergent BioSolutions and approved for use against mpox in the United States as of August 2024. The MVA-BN vaccine, originally developed for smallpox, has been approved in the United States for use by persons who are either considered at high risk of exposure to mpox, or who may have recently been exposed to it. The United States Centers for Disease Control and Prevention (CDC) recommends that persons investigating mpox outbreaks, those caring for infected individuals or animals, and those exposed by close or intimate contact with infected individuals or animals should receive a vaccination. Other measures The CDC has made detailed recommendations in addition to the standard precautions for infection control. 
These include that healthcare providers don a gown, mask, goggles, and a disposable filtering respirator (such as an N95), and that an infected person should be isolated in a private room to keep others from possible contact. Those living in countries where mpox is endemic should avoid contact with sick mammals such as rodents, marsupials, and non-human primates (dead or alive) that could harbour Orthopoxvirus monkeypox, and should refrain from eating or handling wild game (bush meat). During the 2022–2023 outbreak, several public health authorities launched public awareness campaigns in order to reduce the spread of the disease. Treatment Most cases of mpox present with mild symptoms, and there is complete recovery within 2 to 4 weeks. There is no specific treatment for the disease, although antivirals such as tecovirimat have been approved for the treatment of severe mpox. A 2023 Cochrane review found no completed randomized controlled trials studying therapeutics for the treatment of mpox. The review identified non-randomized controlled trials which evaluated the safety of therapeutics for mpox, finding no significant risks from tecovirimat and low-certainty evidence suggesting that brincidofovir may cause mild liver injury. Pain is common and may be severe; supportive care such as pain or fever control may be administered. People with mild disease should isolate at home, stay hydrated, eat well, and take steps to maintain their mental health. People who are at high risk from the disease include children, pregnant women, the elderly and those who are immunocompromised. For these people, or those who have severe disease, hospital admission and careful monitoring of symptoms is recommended. Symptomatic treatment is recommended for complications such as proctitis and pruritus. A trial in the Democratic Republic of the Congo found that the antiviral drug tecovirimat did not shorten the duration of mpox lesions in people with clade I mpox. 
Despite this, the trial's overall mortality rate of 1.7% was notably lower than the 3.6% or higher mortality rate seen in the Democratic Republic of the Congo's general mpox cases. This suggests that hospitalization and high-quality supportive care significantly improve outcomes for people with mpox. The trial was sponsored by the NIH and co-led by the Democratic Republic of the Congo's Institut National de Recherche Biomédicale. An additional 2024 study on Siga Technologies' antiviral drug, tecovirimat, found it ineffective in reducing lesion healing time or pain in adults with the clade II strain of mpox. Based on interim results, a safety board recommended halting further patient enrollment. The trial, launched in September 2022 by the U.S. National Institute of Allergy and Infectious Diseases, involved patients from several countries, including the U.S., Argentina, and Japan, who had had mpox symptoms for less than 14 days. An interim analysis revealed no significant differences in lesion resolution or pain reduction between tecovirimat and a placebo. Diagnostics in resource-limited settings With the August 2024 outbreak in the DRC, the World Health Organization (WHO) urged manufacturers to submit their products for emergency review. This initiative is part of the WHO's effort to ensure effective diagnostics, particularly for low-income populations. The agency has called for manufacturers to submit their tests for Emergency Use Listing, which would allow the WHO to approve these medical products more quickly. This process is designed to help countries procure essential products through UN agencies and other partners. The urgency comes as a new, easily transmissible form of the virus in the 2024 outbreak has raised global concerns, leading the WHO to declare mpox a global public health emergency. Epidemiology History Mpox was first identified as a distinct illness in 1958 among laboratory monkeys in Copenhagen, Denmark. 
The first documented human cases occurred in 1970, involving six unvaccinated children during the smallpox eradication efforts, with the first being a 9-month-old boy in the Democratic Republic of the Congo. From 1981 to 1986, over 300 human cases of mpox were reported in the Democratic Republic of the Congo (then known as Zaire), primarily due to contact with animals. The virus has been detected in Gambian pouched rats, dormice, and African squirrels, which are often used as food. Many more mpox cases have been reported in Central and West Africa, particularly in the Democratic Republic of the Congo, where 2,000 cases per year were recorded between 2011 and 2014. However, the collected data is often incomplete and unconfirmed, hindering accurate estimation of the number of mpox cases over time. Mpox was originally thought to be uncommon in humans, but cases have increased since the 1980s, possibly as a result of waning immunity following the cessation of routine smallpox vaccination. Future threat The natural reservoir of Orthopoxvirus monkeypox has not been conclusively determined. Small rodents are considered the most likely candidate. Without a major vaccination campaign, mpox outbreaks in humans will continue indefinitely in the endemic areas, with an ongoing risk that disease outbreaks will spread to non-endemic areas. Other evidence – that the virus is evolving to be more transmissible among humans, that it can infect a wide range of host species, and that human-to-animal transmission can occur – has led to concerns that mpox may either become established in new natural reservoirs outside of Africa, or cause future global epidemics. Following the 2022–2023 outbreak, mpox (clade IIb) remains present in the human population outside Africa at very low levels. 
In November 2023, the WHO reported increasing numbers of cases of mpox (clade I) in the Democratic Republic of the Congo, with 12,569 cases year-to-date and 651 fatalities; there was also the first evidence of sexual transmission of clade I. There has been a rise in mpox virus clade I infections in the Democratic Republic of the Congo (DRC) since November 2023, with cases now also reported in other African countries that previously had no mpox cases. Two imported cases were also found in Sweden and Thailand. As of August 23, 2024, over 20,000 mpox cases have been reported in 13 African Union Member States, with 3,311 confirmed cases and 582 deaths. Most cases are found in the DRC, where subclades Ia and Ib are prevalent. Clade Ib was linked to a reported mpox case in Sweden on August 15, 2024, which was related to travel to an African country where the virus is found. Despite the low incidence, cases associated with clade II have continued to be reported in EU/EEA countries. In 2024, the WHO added the monkeypox virus to its list of "priority pathogens" that could cause a pandemic. Outbreaks This section is an incomplete list of disease outbreaks which have been reported, including significant outbreaks in the endemic countries in tropical Africa (Benin, Cameroon, the Central African Republic, the Democratic Republic of the Congo, Gabon, Ghana, Ivory Coast, Liberia, Nigeria, the Republic of the Congo, Sierra Leone, and South Sudan). Outbreaks of mpox are frequent in areas where the disease is endemic – these areas often have poor healthcare infrastructure, and outbreaks are rarely documented. United States In May 2003, a young child became ill with fever and rash after being bitten by a prairie dog purchased at a local swap meet near Milwaukee, Wisconsin. In total, 71 cases of mpox were reported through 20 June 2003. All cases were traced to Gambian pouched rats imported from Accra, Ghana, in April 2003 by a Texas exotic animal distributor. No deaths resulted. 
Electron microscopy and serologic studies were used to confirm that the disease was human mpox. Everyone affected reported direct or close contact with prairie dogs, later found to be infected with Orthopoxvirus monkeypox. In July 2021, in the US, an American returning from a trip to Nigeria was diagnosed with mpox. Subsequent testing identified the virus as belonging to clade II. The patient was hospitalized, treated with tecovirimat, and discharged after 32 days. The first case of clade I mpox in the United States was identified in November 2024; the California Department of Public Health reported that an unidentified individual outside San Francisco had tested positive following travel to and from East Africa. Clade II mpox continues to circulate at low levels. Sudan During 2022, an outbreak of clade I mpox was reported in refugee camps in Sudan. The first case in the country was recorded in August, and in September, six additional cases were discovered in Khartoum. In October, more than 100 cases were reported among Ethiopian refugee camps. Nigeria Two cases of human mpox infection were identified in Nigeria in 1971. In September 2017, Orthopoxvirus monkeypox was reported in Nigeria. The subsequent outbreak was, at that time, the largest ever outbreak of clade II of the virus, with 118 confirmed cases. Unlike previous outbreaks of this clade, infection was predominantly among young male adults, and human-to-human transmission appears to have readily occurred. Seven deaths (5 male, 2 female, a case fatality rate of 6%) were reported, including a baby and four people with HIV/AIDS. Additionally, a pregnant woman in her second trimester had a spontaneous miscarriage attributed to Orthopoxvirus monkeypox infection. In May 2022, the Nigerian government released a report stating that between 2017 and 2022, 558 cases were confirmed across 32 states and the Federal Capital Territory. There were 8 deaths reported, making for a case fatality ratio of 1.4%. 
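The fatality figures above are simple ratios of deaths to reported cases. A minimal sketch of the calculation, using the Nigerian counts quoted in this section:

```python
# Case fatality rate (CFR) = deaths / cases, expressed as a percentage.
def cfr(deaths, cases):
    """Case fatality rate as a percentage."""
    return 100.0 * deaths / cases

# The two Nigerian figures from the text:
print(f"2017 outbreak: {cfr(7, 118):.1f}%")   # 7 deaths / 118 cases -> ~5.9%, reported as 6%
print(f"2017-2022:     {cfr(8, 558):.1f}%")   # 8 deaths / 558 cases -> ~1.4%
```

Note that CFRs computed from reported cases are sensitive to under-reporting of mild cases, which is one of the factors the article cites for the gap between clade I and clade II estimates.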
In 2022, the NCDC implemented a National Technical Working Group for reporting and monitoring infections, strengthening response capacity. United Kingdom In September 2018, the United Kingdom's first case of mpox was recorded. The person, a Nigerian national, is believed to have contracted mpox in Nigeria before travelling to the United Kingdom. A second case was confirmed in the town of Blackpool, with a further case in a medical worker who cared for the infected person from Blackpool. In December 2019, mpox was diagnosed in a person in South West England who had traveled to the UK from Nigeria. In May 2021, two cases of mpox from a single household were identified by Public Health Wales in the UK. The index case had traveled from Nigeria. COVID-19 guidance to isolate after travel helped in detecting the outbreak and preventing further transmission. Singapore In May 2019, a 38-year-old man who had traveled from Nigeria was hospitalized in an isolation ward at the National Centre for Infectious Diseases in Singapore, after being confirmed as the country's first case of mpox. As a result, 22 people were quarantined. The case may have been linked to a simultaneous outbreak in Nigeria. 2022–2023 global outbreak An outbreak of mpox caused by clade IIb of the virus was first identified in May 2022. The first case was detected in London, United Kingdom, on 6 May, in a patient with a recent travel history from Nigeria, where the disease is endemic. Subsequent cases were reported in an increasing number of countries and regions. In July 2022, the WHO declared the outbreak a public health emergency of international concern. This status was terminated in May 2023 due to steady progress in controlling the spread of the disease, attributed to a combination of vaccination and public health information. Clade IIb mpox cases outside of endemic regions in Africa continued to be reported at a low level. 
2023–2024 Central Africa outbreak During 2023, a clade I outbreak of mpox disease in the Democratic Republic of the Congo resulted in 14,626 suspected cases being reported, with 654 associated deaths, making for a case-fatality rate of 4.5%. The outbreak continued into 2024, with 3,576 suspected mpox cases and 265 deaths reported in the Democratic Republic of the Congo through the first nine weeks of the year, making for an estimated CFR of 7.4%. Transmission of the virus in the outbreak appears to be primarily through sexual and close familial contact, with cases occurring in areas without a history of mpox, such as South Kivu and Kinshasa. An estimated 64% of the cases and 85% of fatalities have occurred in children. The outbreak consists of two separate sub-variants of clade I, with one of the sub-variants having a novel mutation, making detection with standard assays unreliable. The outbreak spread to the neighbouring country of the Republic of the Congo, with 43 cases reported in March 2024. By August 2024, the outbreak spread further into central and southern Africa with cases of clade I and clade II strains reported in Burundi, Rwanda, Uganda, Kenya, Côte d'Ivoire, and South Africa. The WHO declared a global health emergency in August 2024. Sweden became the first non-African country to report a case of clade I mpox. A case of mpox was confirmed in Pakistan.
Atmosphere of Pluto
The atmosphere of Pluto is the layer of gases that surrounds the dwarf planet Pluto. It consists mainly of nitrogen (N2), with minor amounts of methane (CH4) and carbon monoxide (CO), all of which are vaporized from ices on Pluto's surface. It contains a layered haze, probably consisting of heavier compounds which form from these gases under high-energy radiation. The atmosphere of Pluto is notable for its strong and not completely understood seasonal changes, caused by peculiarities of the orbital and axial rotation of Pluto. The surface pressure of the atmosphere of Pluto, measured by New Horizons in 2015, is about (), roughly 1/100,000 of Earth's atmospheric pressure. The temperature on the surface is , but it quickly rises with altitude due to a methane-generated greenhouse effect. Near the altitude of it reaches , after which it slowly decreases with height. Pluto is the only trans-Neptunian object with a known atmosphere. Its closest analog is the atmosphere of Triton, although in some aspects it resembles even the atmosphere of Mars. The atmosphere of Pluto has been studied since the 1980s by way of Earth-based observation of occultations of stars by Pluto, and by spectroscopy. In 2015, it was studied from a close distance by the spacecraft New Horizons. Composition The main component of the atmosphere of Pluto is nitrogen. The methane content, according to measurements by New Horizons, is 0.25%. For carbon monoxide, estimates are around 0.0515%. Under the influence of high-energy cosmic radiation, these gases react to form more complex compounds (not volatile at Pluto's surface temperatures), including ethane (C2H6), ethylene (C2H4), acetylene (C2H2), heavier hydrocarbons, nitriles, and hydrogen cyanide (HCN) (the amount of ethylene is about 0.0001%, and the amount of acetylene is about 0.0003%). These compounds slowly precipitate onto the surface. 
They probably also include tholins, which are responsible for the brown color of Pluto (like some other bodies in the outer Solar System). The most volatile compound of the atmosphere of Pluto is nitrogen, the second is carbon monoxide and the third is methane. The indicator of volatility is the saturated vapor pressure (sublimation pressure). At a temperature of (close to the minimum value for the surface of Pluto) it is about for nitrogen, for carbon monoxide and for methane. It increases quickly with temperature, and at (close to the maximum value) approaches , and respectively. For heavier-than-methane hydrocarbons, water, ammonia, carbon dioxide and hydrogen cyanide, this pressure remains negligibly low (about or still lower), which indicates an absence of volatility at Pluto's conditions (at least in the cold lower atmosphere). Methane and carbon monoxide, due to their lower abundance and volatility, could be expected to demonstrate stronger deviations from pressure equilibrium with surface ices and bigger temporal and spatial variations of concentration. In practice, however, the concentration of at least methane does not depend noticeably on height (at least in the lower 20–30 km), longitude or time. However, the temperature dependence of the volatilities of methane and nitrogen suggests that the concentration of methane will decrease as Pluto moves further from the Sun. Notably, the observed concentration of methane is two orders of magnitude higher than expected from Raoult's law on the basis of its concentration in surface ice and the ratio of the sublimation pressures of methane and nitrogen. The reasons for this discrepancy are unknown. It could be due to the existence of separate patches of relatively clean methane ice, or due to an increased methane content in the uppermost layer of the usual mixed ice. Seasonal and orbital changes of insolation result in migration of surface ices: they sublimate in some places and condense in others. 
According to some estimates, this causes meter-scale changes in their thickness. This, alongside changes in viewing geometry, results in appreciable changes of the brightness and color of Pluto. Methane and carbon monoxide, despite their low abundance, are significant for the thermal structure of the atmosphere: methane is a strong heating agent and carbon monoxide is a cooling one (although the degree of cooling contributed by carbon monoxide is not completely clear). Haze New Horizons discovered in the atmosphere of Pluto a multi-layered haze, which covers the entire dwarf planet and reaches altitudes over 200 km. The best images show about 20 layers of the haze. The horizontal extent of the layers is no less than 1000 km. The thickness of the layers varies from 1 to >10 km, and the vertical distance between them is about 10 km. In northern regions the haze is 2–3 times denser than near the equator. Despite the very low density of the atmosphere, the haze is rather appreciable: it even scatters enough light to allow photographing some details of Pluto's night side. Long shadows from mountains are seen on the haze. Its normal optical depth is estimated as 0.004 or 0.013 (thus, it diminishes the intensity of a vertical beam of light by or ; for grazing light the effect is much stronger). The scale height of the haze is ; it approximately coincides with the scale height of pressure in the middle atmosphere. At heights of it diminishes to 30 km. The size of the haze particles is unclear. Its blue color points to a particle radius near 10 nm, but the ratio of brightnesses at different phase angles indicates a radius exceeding 100 nm. This can be explained by aggregation of small (tens of nm) particles into larger (hundreds of nm) clusters. The haze probably consists of particles of non-volatile compounds, which are synthesized from atmospheric gases under the influence of cosmic high-energy radiation. 
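The quoted optical depths translate into attenuation factors via the Beer–Lambert law, I = I0·e^(−τ). A minimal sketch, using the two normal optical depth estimates from the text; the path-length (airmass) factor for grazing light is an illustrative assumption, not a value from the article:

```python
import math

# Fraction of light transmitted through a layer of optical depth tau
# (Beer-Lambert law). 0.004 and 0.013 are the two estimates of the haze's
# normal (vertical) optical depth quoted above.
def transmitted_fraction(tau):
    return math.exp(-tau)

for tau in (0.004, 0.013):
    print(f"tau = {tau}: vertical beam keeps {transmitted_fraction(tau):.4f} of its intensity")

# For grazing light the path through the haze is much longer; with an assumed
# path-length (airmass) factor of 40, the attenuation is far stronger:
print(f"grazing (tau = 0.013 * 40): {transmitted_fraction(0.013 * 40):.2f}")
```

This shows why the haze barely dims a vertical beam (fractions of a percent) yet is plainly visible in forward-scattered and grazing geometries.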
The layers show the presence of atmospheric waves (whose presence is also suggested by observations of occultations), and such waves can be created by wind blowing over Pluto's rough surface. The haze is the most probable reason for a kink in the curve of light intensity vs. time obtained by New Horizons during the flight through Pluto's shadow (see image on right) – below a certain altitude the atmosphere attenuates light much more strongly than above. A similar kink was observed during the stellar occultation in 1988. At first it was also interpreted as weakening of light by haze, but now it is thought to be mainly a result of a strong inverse temperature gradient in the lower atmosphere. During later occultations (when the atmosphere of Pluto was already denser) this kink was absent. More evidence of the haze was obtained in 2002 from a new occultation. The stellar light which managed to reach Earth during the occultation (due to refraction in Pluto's atmosphere) demonstrated an increase of intensity with wavelength. This was interpreted as reliable evidence of light scattering by aerosols (similar to the reddening of the rising Sun). However, this feature was absent during later occultations (including 29 June 2015), and on 14 July 2015, New Horizons found the haze to be blue. In the final batch of images received from New Horizons, a number of potential clouds were observed. Structure Pluto has no or almost no troposphere; observations by New Horizons suggest only a thin tropospheric boundary layer. Its thickness in the place of measurement was 4 km, and the temperature was 37±3 K. The layer is not continuous. Above it lies a layer with a fast increase of temperature with height, the stratosphere. The temperature gradient is estimated to be 2.2 or 5.5 degrees per km. It is a result of the greenhouse effect, caused by methane. The mean temperature of the surface is (measured in 2005), and the mean value for the whole atmosphere is (2008). 
At a height of the temperature reaches its maximum (; the stratopause) and then slowly decreases (about ; the mesosphere). The causes of this decrease are unclear; it could be related to the cooling effect of carbon monoxide, or hydrogen cyanide, or other reasons. Above 200 km the temperature reaches approximately and then remains constant. The temperature of the upper layers of the atmosphere does not show noticeable temporal changes. In 1988, 2002 and 2006 it was approximately constant and equal to (with an uncertainty of about ), despite a twofold increase in pressure. Dependence on latitude or morning/evening conditions is also absent: the temperature is the same above every part of the surface. This is consistent with theoretical data, which predict fast mixing of the atmosphere. But there is evidence for small vertical heterogeneities in temperature. They reveal themselves in sharp and brief spikes of brightness during stellar occultations. The amplitude of these heterogeneities is estimated to be on the scale of a few km. They can be caused by atmospheric gravity waves or turbulence, which can be related to convection or wind. Interaction with the atmosphere significantly influences the temperature of the surface. Calculations show that the atmosphere, despite its very low pressure, can significantly diminish diurnal variations in temperature. But there still remain temperature variations of about  – partly because of cooling of the surface due to sublimation of ices. Pressure The pressure of the atmosphere of Pluto is very low and strongly time-dependent. Observations of stellar occultations by Pluto show that it increased about 3 times between 1988 and 2015, even though Pluto has been moving away from the Sun since 1989. This is probably caused by Pluto's north pole coming into sunlight in 1987, which intensified evaporation of nitrogen from the northern hemisphere, whereas its southern pole is still too warm for condensation of nitrogen. 
Absolute values of surface pressure are difficult to obtain from occultation data, because these data do not usually reach the lowest layers of the atmosphere. So surface pressure has to be extrapolated, and this is somewhat ambiguous, because the height dependence of temperature, and consequently of pressure, is not completely clear. The radius of Pluto must also be known, but it was poorly constrained before 2015, so precise values of Pluto's surface pressure could not be calculated earlier. For some occultations since 1988, pressure was calculated for a reference level of from the center of Pluto (which later turned out to be 88±4 km from the surface). Curves of pressure vs. distance from the center, obtained from occultations in 1988 and 2002, in combination with the now known radius of Pluto (), give values of about for 1988 and for 2002. Spectral data provided values in 2008 and in 2012 for the distance from the center (1±4 km from the surface). An occultation on 4 May 2013 gave data almost precisely for the surface level (1190 km from the center, or 3±4 km from the surface): . An occultation on 29/30 June 2015, just 2 weeks before the New Horizons encounter, provided a surface pressure of . The first direct and reliable data about the lowermost layers of the atmosphere of Pluto were obtained by New Horizons on 14 July 2015 from radio-occultation measurements. The surface pressure was estimated to be ( at entry of the spacecraft behind Pluto and at the exit). This is consistent with occultation data from previous years, although some of the previous calculations based on those data gave results about 2 times higher. A stellar occultation on 17 July 2019 showed that Pluto's atmospheric pressure had dropped by about 30% from its maximal 2015 values, reaching 0.967 Pa. On 6 June 2020, a further pressure decline, to 0.91 Pa, was measured. 
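Pressure in a roughly isothermal layer falls off exponentially with a scale height H = kT/(mg), which is why the extrapolation above depends on the (uncertain) temperature profile. A minimal sketch with assumed standard values (molecular nitrogen, Pluto's surface gravity of about 0.62 m/s², and temperatures spanning Pluto's cold surface and warmer stratosphere; none of these numbers are given in the article):

```python
# Illustrative barometric scale height H = k*T/(m*g) for an N2 atmosphere
# on Pluto. All inputs are standard textbook values assumed for this sketch.
K_B = 1.380649e-23    # Boltzmann constant, J/K
M_N2 = 28 * 1.66e-27  # mass of an N2 molecule, kg
G_PLUTO = 0.62        # Pluto's surface gravity, m/s^2 (assumed)

def scale_height(T):
    """Scale height in km: the altitude step over which pressure falls by a factor of e."""
    return K_B * T / (M_N2 * G_PLUTO) / 1000.0

for T in (40, 70, 100):
    print(f"T = {T:3d} K -> H = {scale_height(T):.0f} km")
```

The strong dependence of H on T illustrates why the scale height varies so much between the cold lowermost layer and the warmer layers above it.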
The scale height of pressure in Pluto's atmosphere varies significantly with height (in other words, the height dependence of the pressure deviates from exponential). This is caused by strong height variations of temperature. For the lowermost layer of the atmosphere the scale height is about , and for heights  — .

Seasonal changes

Due to orbital eccentricity, Pluto receives 2.8 times less heat at aphelion than at perihelion. This should cause strong changes in its atmosphere, although the details of these processes are not clear. At first it was thought that at aphelion the atmosphere must largely freeze out and fall onto the surface (as suggested by the strong temperature dependence of the sublimation pressure of its constituents), but more elaborate models predict that Pluto has a significant atmosphere year-round. Pluto's last passage through its perihelion was on 5 September 1989. As of 2015, it is moving away from the Sun and its overall surface illumination is decreasing. However, the situation is complicated by its large axial tilt (122.5°), which results in long polar days and nights on large parts of its surface. Shortly before the perihelion, on 16 December 1987, Pluto underwent an equinox, and its north pole came out of a polar night that had lasted 124 Earth years. Data available as of 2014 allowed scientists to build a model of seasonal changes in Pluto's atmosphere. During the previous aphelion (1865), significant quantities of volatile ices were present in both the northern and southern hemispheres. At approximately the same time, an equinox occurred and the southern hemisphere became tilted towards the Sun. Local ices began to migrate to the northern hemisphere, and around 1900 the southern hemisphere became largely devoid of ices. After the following equinox (1987), the southern hemisphere turned away from the Sun.
Nonetheless, its surface was already substantially heated, and its large thermal inertia (provided by non-volatile water ice) greatly slowed its cooling. That is why the gases now intensively evaporating from the northern hemisphere cannot quickly condense in the southern one; they keep accumulating in the atmosphere, increasing its pressure. Around , the southern hemisphere will cool enough to permit intensive condensation of the gases, and they will migrate there from the northern hemisphere, where it is polar day. This will last until the equinox near aphelion (about 2113). The northern hemisphere will not lose its volatile ices completely, and their evaporation will supply the atmosphere even at aphelion. The overall change of atmospheric pressure in this model is about a factor of 4; the minimum was reached near , and the maximum will be near 2030. The full temperature range is only several degrees. In July 2019, an occultation by Pluto showed that its atmospheric pressure, against expectations, had fallen by 20% since 2016. In 2021, astronomers at the Southwest Research Institute confirmed the result using data from an occultation in 2018, which showed that light was appearing less gradually from behind Pluto's disc, indicating a thinning atmosphere.

Escape

Early data suggested that Pluto's atmosphere loses molecules () of nitrogen per second, an amount corresponding to the loss of a surface layer of volatile ices several hundred meters to several kilometers thick over the lifetime of the Solar System. However, subsequent data from New Horizons revealed that this figure was overestimated by at least four orders of magnitude; Pluto's atmosphere is currently losing only 1×10²³ molecules of nitrogen and 5×10²⁵ molecules of methane every second. This corresponds to the loss of several centimeters of nitrogen ice and several dozen meters of methane ice over the lifetime of the Solar System.
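The conversion from the quoted escape rates to ice-layer depths can be checked with a back-of-the-envelope script. Pluto's radius, the ice densities, and the 4.5-billion-year timescale are assumed round values, not figures from the text:

```python
import math

SECONDS = 4.5e9 * 3.156e7            # ~age of the Solar System, in seconds
AREA = 4 * math.pi * (1.188e6) ** 2  # Pluto's surface area, m^2 (R ~ 1188 km)
AMU = 1.66054e-27                    # atomic mass unit, kg

def ice_depth_m(rate_per_s, molar_mass_amu, ice_density_kg_m3):
    """Depth of surface ice lost, spread evenly over the whole surface."""
    total_mass = rate_per_s * SECONDS * molar_mass_amu * AMU
    return total_mass / ice_density_kg_m3 / AREA

n2_depth = ice_depth_m(1e23, 28, 1030)   # nitrogen ice, ~1030 kg/m^3
ch4_depth = ice_depth_m(5e25, 16, 420)   # methane ice, ~420 kg/m^3
print(f"N2: ~{n2_depth * 100:.0f} cm, CH4: ~{ch4_depth:.0f} m")
```

Both results land in the "several centimeters" and "several dozen meters" ranges quoted above.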
Molecules with high enough velocity, which escape into outer space, are ionized by solar ultraviolet radiation. As the solar wind encounters the obstacle formed by the ions, it is slowed and diverted, possibly forming a shock wave upstream of Pluto. The ions are "picked up" by the solar wind and carried in its flow past the dwarf planet to form an ion or plasma tail. The Solar Wind around Pluto (SWAP) instrument on the New Horizons spacecraft made the first measurements of this region of low-energy atmospheric ions shortly after its closest approach on 14 July 2015. Such measurements will enable the SWAP team to determine the rate at which Pluto loses its atmosphere and, in turn, will yield insight into the evolution of Pluto's atmosphere and surface. The reddish-brown cap on the north pole of Charon, the largest of Pluto's moons (Mordor Macula), may be composed of tholins, organic macromolecules produced from methane, nitrogen and other gases released from the atmosphere of Pluto and transferred over a distance of about to the orbiting moon. Models show that Charon can receive about 2.5% of the gases lost by Pluto.

History of study

As early as the 1940s, Gerard Kuiper searched for evidence of an atmosphere in the spectrum of Pluto, without success. In the 1970s, some astronomers advanced the hypothesis of a thick atmosphere and even oceans of neon: according to some views of the time, all other gases that are abundant in the Solar System would either freeze or escape. However, this hypothesis was based on a heavily overestimated mass of Pluto. No observational data about its atmosphere and chemical composition existed at the time. The first strong, though indirect, evidence of an atmosphere appeared in 1976. Infrared photometry by the 4-meter Nicholas U. Mayall Telescope revealed methane ice on Pluto's surface, which must sublimate significantly at Plutonian temperatures. The existence of Pluto's atmosphere was finally proven via stellar occultations.
If a star is occulted by a body without an atmosphere, its light disappears sharply, but occultations by Pluto show a gradual decrease. This is mainly due to atmospheric refraction (not absorption or scattering). The first such observations were made on 19 August 1985 by Noah Brosch and Haim Mendelson of the Wise Observatory in Israel. But the quality of the data was rather low due to unfavorable observational conditions (in addition, a detailed description was published only 10 years later). On 9 June 1988 the existence of the atmosphere was convincingly proven by occultation observations from eight sites (the best data were obtained by the Kuiper Airborne Observatory). The scale height of the atmosphere was measured, making it possible to calculate the ratio of the temperature to the mean molecular mass. The temperature and pressure themselves could not be calculated at the time due to the absence of data on the chemical composition of the atmosphere and large uncertainties in the radius and mass of Pluto. The question of composition was answered in 1992 via infrared spectra of Pluto taken by the 3.8-meter United Kingdom Infrared Telescope. The surface of Pluto turned out to be covered mainly by nitrogen ice. Since nitrogen is, in addition, more volatile than methane, this observation implied a prevalence of nitrogen in the atmosphere as well (although gaseous nitrogen was not seen in the spectrum). Furthermore, a small admixture of frozen carbon monoxide was discovered. The same year, observations by the 3.0-meter NASA Infrared Telescope Facility revealed the first conclusive evidence of gaseous methane. Understanding the state of the atmosphere requires knowledge of the surface temperature. The best estimates are derived from measurements of the thermal emission of Pluto. The first values, calculated in 1987 from observations by IRAS, were about , with subsequent studies suggesting .
In 2005, observations by the Submillimeter Array succeeded in distinguishing the emissions of Pluto and Charon, and the average temperature of Pluto's surface was measured to be (). It was approximately colder than expected; the difference may be due to cooling from the sublimation of nitrogen ice. Further research revealed that the temperature differs strongly between regions: from 40 to . Around the year 2000, Pluto entered star-rich fields of the Milky Way, where it will reside until the 2020s. The first stellar occultations after 1988 were observed on 20 July and 21 August 2002 by teams led by Bruno Sicardy of the Paris Observatory and James L. Elliot of MIT. The atmospheric pressure turned out to be about 2 times higher than in 1988. The next occultation observed was on 12 June 2006, with later ones occurring more frequently. Processing of these data shows that the pressure continued to increase. An occultation of an exceptionally bright star, about 10 times brighter than the Sun itself, was observed on 29/30 June 2015 – only 2 weeks before the New Horizons encounter. On 14 July 2015 the New Horizons spacecraft made the first close-range explorations of the atmosphere of Pluto, including radio-occultation measurements and observations of the weakening of solar radiation during the flight through Pluto's shadow. These provided the first direct measurements of the parameters of the lower atmosphere. The surface pressure turned out to be .
Physical sciences
Solar System
Astronomy
1206475
https://en.wikipedia.org/wiki/Carborane
Carborane
Carboranes (or carbaboranes) are electron-delocalized (non-classically bonded) clusters composed of boron, carbon and hydrogen atoms. Like many of the related boron hydrides, these clusters are polyhedra or fragments of polyhedra. Carboranes are one class of heteroboranes. In terms of scope, carboranes can have as few as 5 and as many as 14 atoms in the cage framework. The majority have two cage carbon atoms. The corresponding C-alkyl and B-alkyl analogues are also known in a few cases.

Structure and bonding

Carboranes and boranes adopt 3-dimensional cage (cluster) geometries, in sharp contrast to typical organic compounds. Cages are compatible with sigma-delocalized bonding, whereas hydrocarbons are typically chains or rings. As for other electron-delocalized polyhedral clusters, the electronic structure of these cluster compounds can be described by the Wade–Mingos rules. Like the related boron hydrides, these clusters are polyhedra or fragments of polyhedra, and are similarly classified as closo-, nido-, arachno-, hypho-, hypercloso-, iso-, klado-, conjuncto- and megalo-, based on whether they represent a complete (closo-) polyhedron or a polyhedron that is missing one (nido-), two (arachno-), three (hypho-), or more vertices. In essence, these rules emphasize delocalized, multi-centered bonding for B-B, C-C, and B-C interactions. Structurally, carboranes can be considered to be related to the icosahedral (Ih) cluster via formal replacement of two of its fragments with CH.

Isomers

Geometrical isomers of carboranes can exist, based on the various locations of carbon within the cage. Isomers necessitate the use of numerical prefixes in a compound's name. The closo-dicarbadodecaborane can exist as three isomers: 1,2-, 1,7-, and 1,12-.

Preparation

Carboranes have been prepared by many routes, the most common being addition of alkynyl reagents to boron hydride clusters to form dicarbon carboranes.
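The Wade–Mingos classification mentioned above can be sketched in code: each BH vertex contributes 2 skeletal electrons and each CH vertex 3, and a cage of n vertices with n+1 skeletal electron pairs is closo, n+2 nido, n+3 arachno. The helper below is illustrative, not a standard library function:

```python
def classify_cluster(ch_units, bh_units, extra_h=0):
    """Classify a (car)borane cage by Wade-Mingos skeletal electron pairs.

    ch_units: number of CH vertices (3 skeletal electrons each)
    bh_units: number of BH vertices (2 skeletal electrons each)
    extra_h:  additional bridging/endo H atoms (1 electron each)
    """
    n = ch_units + bh_units  # cage vertices
    pairs = (3 * ch_units + 2 * bh_units + extra_h) // 2
    labels = {1: "closo", 2: "nido", 3: "arachno", 4: "hypho"}
    return labels.get(pairs - n, "other")

print(classify_cluster(2, 10))            # C2B10H12 -> closo
print(classify_cluster(2, 4, extra_h=2))  # C2B4H8   -> nido
```

For the icosahedral carborane C2B10H12 this gives 13 pairs for 12 vertices, hence closo, consistent with the classification above.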
Because alkynes deliver two carbon atoms, the great majority of carboranes have two carbon vertices.

Monocarba derivatives

Monocarboranes are clusters with cages. The 12-vertex derivative is the best studied, but several others are known. Typically they are prepared by the addition of one-carbon reagents to boron hydride clusters. One-carbon reagents include cyanide, isocyanides, and formaldehyde. For example, monocarbadodecaborate () is produced from decaborane and formaldehyde, followed by addition of borane dimethylsulfide. Monocarboranes are precursors to weakly coordinating anions.

Dicarba clusters

Dicarbaboranes can be prepared from boron hydrides using alkynes as the source of the two carbon centers. In addition to the closo- series mentioned above, several open-cage dicarbon species are known, including nido- (isostructural and isoelectronic with ) and arachno-. Syntheses of icosahedral closo-dicarbadodecaborane derivatives () employ alkynes as the source and decaborane () to supply the unit.

Classification by cage size

The following classification is adapted from Grimes's book on carboranes.

Small, open carboranes

This family of clusters includes the nido cages . Relatively little work has been devoted to these compounds. Pentaborane(9) reacts with acetylene to give nido-1,2-. Upon treatment with sodium hydride, the latter forms the salt [1,2-.

Small, closed carboranes

This family of clusters includes the closo cages . This family is also lightly studied, owing to synthetic difficulties. Also reflecting synthetic challenges, many of these compounds are best known as their alkyl derivatives. 1,5- is the only known isomer of the five-vertex cage. It is prepared from pentaborane(9) in two operations: condensation with acetylene, followed by pyrolysis (cracking) of the product: nido-2,3- closo-2,3-

Intermediate-sized carboranes

Structures

This family of clusters includes the closo cages and their derivatives.
Isomerism is well established in this family: 2,3- and 2,4-; 2,3- and 2,4-; 1,2- and 1,6-; 1,10-, 1,6-, and 1,2-; 1,2- and 1,3-.

Syntheses

Carboranes of intermediate nuclearity are most efficiently generated by degradation of larger clusters. In contrast, smaller carboranes are usually prepared by building-up routes, e.g. from pentaborane + alkyne, etc. For example, ortho-carborane can be degraded to give , which can be manipulated with oxidants, protonation, and thermolysis. Chromate oxidation of 11-vertex clusters results in deboronation, giving . From that species, other clusters result by pyrolysis, sometimes in the presence of diborane: . In general, isomers having non-adjacent cage carbon atoms are more thermally stable than those with adjacent carbons. Thus, heating tends to induce mutual separation of the carbon atoms in the framework.

Icosahedral carboranes

The icosahedral charge-neutral closo-carboranes, 1,2-, 1,7-, and 1,12- (informally ortho-, meta-, and para-carborane), are particularly stable and are commercially available. Ortho-carborane forms first, upon the reaction of decaborane and acetylene. It converts quantitatively to meta-carborane upon heating in an inert atmosphere. Producing para-carborane from meta-carborane requires about 700 °C and proceeds in ca. 25% yield. is also well established.

Reactions

The metalation of carboranes is illustrated by the reactions of closo- with iron carbonyl sources. Two closo Fe- and -containing products are obtained, according to these idealized equations: Base-induced degradation of carboranes gives anionic nido derivatives, which can be employed as ligands for transition metals, generating metallacarboranes: carboranes containing one or more transition metal or main group metal atoms in the cage framework. Most famous are the dicarbollide complexes, with the formula , where M stands for a metal.
Research

Dicarbollide complexes have been investigated for many years, but commercial applications are rare. The bis(dicarbollide) has been used as a precipitant for removal of from radioactive waste. The medical applications of carboranes have been explored. C-functionalized carboranes represent a source of boron for boron neutron capture therapy. The compound is a superacid, forming an isolable salt with the protonated benzene (benzenium) cation; the formula of that salt is . The superacid also protonates fullerene, .
Physical sciences
Hydrogen compounds
Chemistry
1208003
https://en.wikipedia.org/wiki/Type%20genus
Type genus
In biological taxonomy, the type genus is the genus which defines a biological family and the root of the family name.

Zoological nomenclature

According to the International Code of Zoological Nomenclature, "The name-bearing type of a nominal family-group taxon is a nominal genus called the 'type genus'; the family-group name is based upon that of the type genus." Any family-group name must have a type genus (and any genus-group name must have a type species, but any species-group name may, but need not, have one or more type specimens). The type genus for a family-group name is also the genus that provided the stem to which was added the ending -idae (for families). Example: The family name Formicidae has as its type genus the genus Formica Linnaeus, 1758.

Botanical nomenclature

In botanical nomenclature, the phrase "type genus" is used, unofficially, as a term of convenience. In the ICN this phrase has no status. The code uses type specimens for ranks up to family, and types are optional for higher ranks. The Code does not refer to the genus containing that type as a "type genus". Example: "Poa is the type genus of the family Poaceae and of the order Poales" is another way of saying that the names Poaceae and Poales are based on the generic name Poa.

Bacteriological nomenclature

The 2008 Revision of the Bacteriological Code states, "The nomenclatural type […] of a taxon above genus, up to and including order, is the legitimate name of the included genus on whose name the name of the relevant taxon is based. One taxon of each category must include the type genus. The names of the taxa which include the type genus must be formed by the addition of the appropriate suffix to the stem of the name of the type genus[…]." In 2019, it was proposed that all ranks above genus should use the genus category as the nomenclatural type. This proposal was subsequently adopted for the rank of phylum.
Example: Pseudomonas is the type genus of the family Pseudomonadaceae, the order Pseudomonadales, and the phylum Pseudomonadota.
Biology and health sciences
Taxonomic rank
Biology
1208210
https://en.wikipedia.org/wiki/Ladder%20paradox
Ladder paradox
The ladder paradox (or barn-pole paradox) is a thought experiment in special relativity. It involves a ladder, parallel to the ground, travelling horizontally at relativistic speed (near the speed of light) and therefore undergoing a Lorentz length contraction. The ladder is imagined passing through the open front and rear doors of a garage or barn which is shorter than its rest length, so if the ladder were not moving it would not be able to fit inside. To a stationary observer, due to the contraction, the moving ladder is able to fit entirely inside the building as it passes through. On the other hand, from the point of view of an observer moving with the ladder, the ladder will not be contracted, and it is the building which will be Lorentz contracted to an even smaller length. Therefore, the ladder will not be able to fit inside the building as it passes through. This poses an apparent discrepancy between the realities of the two observers. The apparent paradox results from the mistaken assumption of absolute simultaneity. The ladder is said to fit into the garage if both of its ends can be made to be simultaneously inside the garage. The paradox is resolved by noting that in relativity, simultaneity is relative to each observer, making the answer to whether the ladder fits inside the garage also relative to each of them.

Paradox

The simplest version of the problem involves a garage, with a front and back door which are open, and a ladder which, when at rest with respect to the garage, is too long to fit inside. We now move the ladder at a high horizontal velocity through the stationary garage. Because of its high velocity, the ladder undergoes the relativistic effect of length contraction, and becomes significantly shorter. As a result, as the ladder passes through the garage, it is, for a time, completely contained inside it. We could, if we liked, simultaneously close both doors for a brief time, to demonstrate that the ladder fits.
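The setup can be made concrete with a short calculation. The 12 m ladder, 10 m garage, and 0.8c speed are arbitrary illustrative choices, not values from the original thought experiment:

```python
import math

C = 299_792_458.0  # speed of light, m/s
v = 0.8 * C        # ladder's speed (illustrative)
gamma = 1 / math.sqrt(1 - (v / C) ** 2)  # Lorentz factor, here 5/3

ladder_rest = 12.0  # m, rest length of the ladder (assumed)
garage_rest = 10.0  # m, rest length of the garage (assumed)

# Garage frame: the moving ladder is contracted and fits.
ladder_moving = ladder_rest / gamma
print(f"ladder in garage frame: {ladder_moving:.1f} m "
      f"(fits: {ladder_moving < garage_rest})")

# Ladder frame: the garage is contracted; the ladder cannot fit.
garage_moving = garage_rest / gamma
print(f"garage in ladder frame: {garage_moving:.1f} m "
      f"(fits: {ladder_rest < garage_moving})")

# Relativity of simultaneity: the two door closings, simultaneous in the
# garage frame (t = 0, at x = 0 and x = garage_rest), map to different
# times in the ladder frame via t' = gamma * (t - v*x/c^2).
dt_exit_door = gamma * (0 - v * garage_rest / C**2)
print(f"exit-door closing occurs {abs(dt_exit_door) * 1e9:.0f} ns earlier "
      f"in the ladder frame")
```

The contracted ladder (7.2 m) fits the 10 m garage, the contracted garage (6 m) cannot hold the 12 m ladder, and the two "simultaneous" door closings are tens of nanoseconds apart in the ladder frame, which is the whole resolution in miniature.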
So far, this is consistent. The apparent paradox comes when we consider the symmetry of the situation. As an observer moving with the ladder is travelling at constant velocity in the inertial reference frame of the garage, this observer also occupies an inertial frame, where, by the principle of relativity, the same laws of physics apply. From this perspective, it is the ladder which is now stationary, and the garage which is moving with high velocity. It is therefore the garage which is length contracted, and we now conclude that it is far too small to have ever fully contained the ladder as it passed through: the ladder does not fit, and we cannot close both doors on either side of the ladder without hitting it. This apparent contradiction is the paradox.

Resolution

The solution to the apparent paradox lies in the relativity of simultaneity: what one observer (e.g. with the garage) considers to be two simultaneous events may not in fact be simultaneous to another observer (e.g. with the ladder). When we say the ladder "fits" inside the garage, what we mean precisely is that, at some specific time, the position of the back of the ladder and the position of the front of the ladder were both inside the garage; in other words, the front and back of the ladder were inside the garage simultaneously. As simultaneity is relative, then, two observers disagree on whether the ladder fits. To the observer with the garage, the back end of the ladder was in the garage at the same time that the front end of the ladder was, and so the ladder fit; but to the observer with the ladder, these two events were not simultaneous, and the ladder did not fit. A clear way of seeing this is to consider the doors, which, in the frame of the garage, close for the brief period that the ladder is fully inside. We now look at these events in the frame of the ladder. The first event is the front of the ladder approaching the exit door of the garage.
The door closes, and then opens again to let the front of the ladder pass through. At a later time, the back of the ladder passes through the entrance door, which closes and then opens. We see that, as simultaneity is relative, the two doors did not need to be shut at the same time, and the ladder did not need to fit inside the garage. The situation can be further illustrated by the Minkowski diagram below. The diagram is in the rest frame of the garage. The vertical light-blue band shows the garage in spacetime, and the light-red band shows the ladder in spacetime. The x and t axes are the garage space and time axes, respectively, and x′ and t′ are the ladder space and time axes, respectively. In the frame of the garage, the ladder at any specific time is represented by a horizontal set of points, parallel to the x axis, in the red band. One example is the bold blue line segment, which lies inside the blue band representing the garage, and which represents the ladder at a time when it is fully inside the garage. In the frame of the ladder, however, sets of simultaneous events lie on lines parallel to the x′ axis; the ladder at any specific time is therefore represented by a cross section of such a line with the red band. One such example is the bold red line segment. We see that such line segments never lie fully inside the blue band; that is, the ladder never lies fully inside the garage.

Shutting the ladder in the garage

In a more complicated version of the paradox, we can physically trap the ladder once it is fully inside the garage. This could be done, for instance, by not opening the exit door again after we close it. In the frame of the garage, we assume the exit door is immovable, and so when the ladder hits it, we say that it instantaneously stops. By this time, the entrance door has also closed, and so the ladder is stuck inside the garage.
As its relative velocity is now zero, it is not length contracted, and is now longer than the garage; it will have to bend, snap, or explode. Again, the puzzle comes from considering the situation from the frame of the ladder. In the above analysis, in its own frame, the ladder was always longer than the garage. So how did we ever close the doors and trap it inside? It is worth noting here a general feature of relativity: we have deduced, by considering the frame of the garage, that we do indeed trap the ladder inside the garage. This must therefore be true in any frame - it cannot be the case that the ladder snaps in one frame but not in another. From the ladder's frame, then, we know that there must be some explanation for how the ladder came to be trapped; we must simply find the explanation. The explanation is that, although all parts of the ladder simultaneously decelerate to zero in the garage's frame, because simultaneity is relative, the corresponding decelerations in the frame of the ladder are not simultaneous. Instead, each part of the ladder decelerates sequentially, from front to back, until finally the back of the ladder decelerates, by which time it is already within the garage. As length contraction and time dilation are both controlled by the Lorentz transformations, the ladder paradox can be seen as a physical correlate of the twin paradox, in which instance one of a set of twins leaves earth, travels at speed for a period, and returns to earth a bit younger than the earthbound twin. As in the case of the ladder trapped inside the barn, if neither frame of reference is privileged — each is moving only relative to the other — how can it be that it's the traveling twin and not the stationary one who is younger (just as it's the ladder rather than the barn which is shorter)? 
In both instances it is the acceleration-deceleration that differentiates the phenomena: it is the twin, not the Earth (or the ladder, not the barn), that undergoes the force of deceleration in returning to the temporal (or physical, in the case of the ladder-barn) inertial frame.

Ladder paradox and transmission of force

What if the back door (the door the ladder exits out of) is closed permanently and does not open? Suppose that the door is so solid that the ladder will not penetrate it when it collides, so it must stop. Then, as in the scenario described above, in the frame of reference of the garage, there is a moment when the ladder is completely within the garage (i.e., the back of the ladder is inside the front door), before it collides with the back door and stops. However, from the frame of reference of the ladder, the ladder is too big to fit in the garage, so by the time it collides with the back door and stops, the back of the ladder still has not reached the front door. This seems to be a paradox. The question is, does the back of the ladder cross the front door or not? The difficulty arises mostly from the assumption that the ladder is rigid (i.e., maintains the same shape). Ladders seem rigid in everyday life. But being completely rigid requires that the ladder can transfer force at infinite speed (i.e., when you push one end, the other end must react immediately; otherwise the ladder will deform). This contradicts special relativity, which states that information can travel no faster than the speed of light (which is too fast for us to notice in real life, but is significant in the ladder scenario). So objects cannot be perfectly rigid under special relativity. In this case, by the time the front of the ladder collides with the back door, the back of the ladder does not know it yet, so it keeps moving forwards (and the ladder "compresses").
In both the frame of the garage and the inertial frame of the ladder, the back end keeps moving at the time of the collision, until at least the point where the back of the ladder comes into the light cone of the collision (i.e., a point where force moving backwards at the speed of light from the point of the collision will reach it). At this point the ladder is actually shorter than the original contracted length, so the back end is well inside the garage. Calculations in both frames of reference will show this to be the case. What happens after the force reaches the back of the ladder (the "green" zone in the diagram) is not specified. Depending on the physics, the ladder could break; or, if it were sufficiently elastic, it could bend and re-expand to its original length. At sufficiently high speeds, any realistic material would violently explode into a plasma.

Man falling into grate variation

This early version of the paradox was originally proposed and solved by Wolfgang Rindler and involved a fast walking man, represented by a rod, falling into a grate. It is assumed that the rod is entirely over the grate in the grate's frame of reference before a downward acceleration begins, applied simultaneously and equally to each point in the rod. From the perspective of the grate, the rod undergoes a length contraction and fits into the grate. However, from the perspective of the rod, it is the grate undergoing a length contraction, through which it seems the rod is then too long to fall. The downward acceleration of the rod, which is simultaneous in the grate's frame of reference, is not simultaneous in the rod's frame of reference. In the rod's frame of reference, the front of the rod is first accelerated downward (shown in cell 3 of the drawing), and as time goes by, more and more of the rod is subjected to the downward acceleration, until finally the back of the rod is accelerated downward. This results in a bending of the rod in the rod's frame of reference.
Since this bending occurs in the rod's rest frame, it is a true physical distortion of the rod which will cause stresses to occur in the rod. For this non-rigid behaviour of the rod to become apparent, both the rod itself and the grate must be of such a scale that the traversal time is measurable.

Bar and ring paradox

A problem very similar to, but simpler than, the rod and grate paradox, involving only inertial frames, is the "bar and ring" paradox. The rod and grate paradox is complicated: it involves non-inertial frames of reference, since at one moment the man is walking horizontally and a moment later he is falling downward; and it involves a physical deformation of the man (or segmented rod), since the rod is bent in one frame of reference and straight in another. These aspects of the problem introduce complications involving the stiffness of the rod which tend to obscure the real nature of the "paradox". The "bar and ring" paradox is free of these complications: a bar, which is slightly larger in length than the diameter of a ring, is moving upward and to the right with its long axis horizontal, while the ring is stationary and the plane of the ring is also horizontal. If the motion of the bar is such that the center of the bar coincides with the center of the ring at some point in time, then the bar will be Lorentz-contracted due to the forward component of its motion, and it will pass through the ring. The paradox occurs when the problem is considered in the rest frame of the bar. The ring is now moving downward and to the left, and will be Lorentz-contracted along its horizontal length, while the bar will not be contracted at all. How can the bar pass through the ring? The resolution of the paradox again lies in the relativity of simultaneity. The length of a physical object is defined as the distance between two simultaneous events occurring at each end of the body, and since simultaneity is relative, so is this length.
This variability in length is just the Lorentz contraction. Similarly, a physical angle is defined as the angle formed by three simultaneous events, and this angle will also be a relative quantity. In the above paradox, although the rod and the plane of the ring are parallel in the rest frame of the ring, they are not parallel in the rest frame of the rod. The uncontracted rod passes through the Lorentz-contracted ring because the plane of the ring is rotated relative to the rod by an amount sufficient to let the rod pass through. In mathematical terms, a Lorentz transformation can be separated into the product of a spatial rotation and a "proper" Lorentz transformation which involves no spatial rotation. The mathematical resolution of the bar and ring paradox is based on the fact that the product of two proper Lorentz transformations (horizontal and vertical) may produce a Lorentz transformation which is not proper (diagonal) but rather includes a spatial rotation component.
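The last point — that composing a horizontal and a vertical boost yields a transformation containing a spatial (Thomas–Wigner) rotation — can be verified numerically. The 0.6c speeds below are arbitrary illustrative values, and the decomposition M = B(u)·R used to extract the angle is one standard convention:

```python
import numpy as np

def boost_x(beta):
    g = 1 / np.sqrt(1 - beta**2)
    return np.array([[g, -g * beta, 0.0],
                     [-g * beta, g, 0.0],
                     [0.0, 0.0, 1.0]])

def boost_y(beta):
    g = 1 / np.sqrt(1 - beta**2)
    return np.array([[g, 0.0, -g * beta],
                     [0.0, 1.0, 0.0],
                     [-g * beta, 0.0, g]])

# Compose a horizontal boost with a vertical one (c = 1, coordinates t, x, y).
M = boost_y(0.6) @ boost_x(0.6)

# M is still a Lorentz transformation: M^T eta M = eta ...
eta = np.diag([-1.0, 1.0, 1.0])
assert np.allclose(M.T @ eta @ M, eta)

# ... but, unlike a pure boost, M is not symmetric: it hides a rotation.
print(np.allclose(M, M.T))  # False

# Extract the rotation from M = B(u) R: u is read off the first column.
ux, uy = -M[1, 0] / M[0, 0], -M[2, 0] / M[0, 0]
g = M[0, 0]
u2 = ux**2 + uy**2
B = np.array([[g, -g * ux, -g * uy],
              [-g * ux, 1 + (g - 1) * ux**2 / u2, (g - 1) * ux * uy / u2],
              [-g * uy, (g - 1) * ux * uy / u2, 1 + (g - 1) * uy**2 / u2]])
R = np.linalg.inv(B) @ M
angle = np.degrees(np.arctan2(R[2, 1], R[1, 1]))
print(f"rotation angle: {angle:.1f} deg")
```

For two perpendicular 0.6c boosts the residual rotation comes out to roughly 12.7 degrees, a nonzero angle that confirms the product of the two boosts is not itself a pure boost.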
Physical sciences
Theory of relativity
Physics
1208416
https://en.wikipedia.org/wiki/Plating
Plating
Plating is a finishing process in which a metal is deposited on a surface. Plating has been done for hundreds of years; it is also critical for modern technology. Plating is used to decorate objects, for corrosion inhibition, to improve solderability, to harden, to improve wearability, to reduce friction, to improve paint adhesion, to alter conductivity, to improve IR reflectivity, for radiation shielding, and for other purposes. Jewelry typically uses plating to give a silver or gold finish. Thin-film deposition has plated objects as small as an atom, so plating also finds uses in nanotechnology. There are several plating methods, and many variations. In one method, a solid surface is covered with a metal sheet, and then heat and pressure are applied to fuse them (a version of this is Sheffield plate). Other plating techniques include electroplating, vapor deposition under vacuum and sputter deposition. In recent usage, however, plating most often refers to deposition from liquids. Metallizing refers to coating metal on non-metallic objects. Electroplating In electroplating, an ionic metal is supplied with electrons to form a non-ionic coating on a substrate. A common system involves a chemical solution with the ionic form of the metal, an anode (positively charged) which may consist of the metal being plated (a soluble anode) or an insoluble anode (usually carbon, platinum, titanium, lead, or steel), and finally, a cathode (negatively charged) where electrons are supplied to produce a film of non-ionic metal. Electroless deposition Electroless deposition, also known as chemical or auto-catalytic plating, is a non-galvanic plating method that involves several simultaneous reactions in an aqueous solution, which occur without the use of external electrical power.
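For electroplating in particular, the mass of metal deposited can be estimated from Faraday's laws of electrolysis, m = QM/(nF). The sketch below is an illustrative calculation only; the current, time, and efficiency values are assumptions, not figures from the text:

```python
FARADAY = 96485.0  # Faraday constant, C/mol

def plated_mass_grams(current_a, time_s, molar_mass, n_electrons, efficiency=1.0):
    """Mass deposited per Faraday's law: m = Q * M / (n * F),
    scaled by the cathode current efficiency."""
    charge = current_a * time_s                        # total charge, coulombs
    moles = efficiency * charge / (n_electrons * FARADAY)
    return moles * molar_mass

# Example: copper deposition (Cu2+ + 2e- -> Cu, so n = 2, M = 63.55 g/mol),
# plating at 2 A for one hour at an assumed 95% current efficiency.
m = plated_mass_grams(2.0, 3600.0, 63.55, 2, efficiency=0.95)
print(f"{m:.2f} g of copper deposited")   # roughly 2.25 g
```

The same arithmetic, with the appropriate molar mass and electron count, applies to nickel, silver, gold, and the other metals discussed below.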
The reaction is accomplished when hydrogen is released by a reducing agent, normally sodium hypophosphite (note: the hydrogen leaves as a hydride ion) or thiourea, and oxidized, thus producing a negative charge on the surface of the part. The most common electroless deposition method is electroless nickel plating, although silver, gold and copper layers can also be applied in this manner, as in the technique of angel gilding. Specific cases Gold plating Gold plating is a method of depositing a thin layer of gold on the surface of glass or metal, most often copper or silver. Gold plating is often used in electronics, to provide a corrosion-resistant electrically conductive layer on copper, typically in electrical connectors and printed circuit boards. With direct gold-on-copper plating, the copper atoms tend to diffuse through the gold layer, causing tarnishing of its surface and formation of an oxide/sulfide layer. Therefore, a layer of a suitable barrier metal, usually nickel, has to be deposited on the copper substrate, forming a copper-nickel-gold sandwich. Metals and glass may also be coated with gold for ornamental purposes, using a number of different processes usually referred to as gilding. Sapphires, plastics, and carbon fiber are some other materials that can be plated using advanced plating techniques. The substrates that can be used are almost limitless. Silver plating Silver plating has been used since the 18th century to provide cheaper versions of household items that would otherwise be made of solid silver, including cutlery, vessels of various kinds, and candlesticks. In the UK the assay offices, and silver dealers and collectors, use the term "silver plate" for items made from solid silver; the term derives, long before silver plating was invented, from the Spanish word for silver, "plata", seizures of silver from Spanish ships carrying silver from America being a large source of silver at the time.
This can cause confusion when discussing whether silver items are plate or plated. In the UK it is illegal to describe silver-plated items as "silver". It is not illegal to describe silver-plated items as "silver plate", although this usage is ungrammatical. The earliest form of silver plating was Sheffield plate, in which thin sheets of silver are fused to a layer or core of base metal, but in the 19th century new methods of production (including electroplating) were introduced. Britannia metal is an alloy of tin, antimony and copper developed as a base metal for plating with silver. Another method that can be used to apply a thin layer of silver to objects such as glass is to place Tollens' reagent in a glass, add glucose/dextrose, and shake the bottle to promote the reaction. For applications in electronics, silver is sometimes used for plating copper, as its electrical resistance is lower (see Resistivity of various materials), and more so at higher frequencies due to the skin effect. Variable capacitors are considered of the highest quality when they have silver-plated plates. Similarly, silver-plated, or even solid silver, cables are prized in audiophile applications; however, some experts consider that in practice the plating is often poorly implemented, making the result inferior to similarly priced copper cables. Care should be used for parts exposed to high-humidity environments, because when the silver layer is porous or contains cracks, the underlying copper undergoes rapid galvanic corrosion, flaking off the plating and exposing the copper itself, a process known as red plague. Silver-plated copper maintained in a moisture-free environment will not undergo this type of corrosion. Copper plating Copper plating is the process of electrolytically forming a layer of copper on the surface of an item. It is commonly used as a cheaper alternative to silver plating, since copper costs much less than silver.
Rhodium plating Rhodium plating is occasionally used on white gold, silver or copper and its alloys. A barrier layer of nickel is usually deposited on silver first, though in this case it is not to prevent migration of silver through rhodium, but to prevent contamination of the rhodium bath with silver and copper, which slightly dissolve in the sulfuric acid usually present in the bath composition. Chrome plating Chrome plating is a finishing treatment using the electrolytic deposition of chromium. The most common form of chrome plating is the thin, decorative bright chrome, which is typically a 10-μm layer over an underlying nickel plate. When plating on iron or steel, an underlying plating of copper allows the nickel to adhere. The pores (tiny holes) in the nickel and chromium layers work to alleviate stress caused by thermal expansion mismatch but also hurt the corrosion resistance of the coating. Corrosion resistance relies on what is called the passivation layer, which is determined by the chemical composition and processing, and is damaged by cracks and pores. In a special case, micropores can help distribute the electrochemical potential that accelerates galvanic corrosion between the layers of nickel and chromium. Depending on the application, coatings of different thicknesses will require different balances of the aforementioned properties. Thin, bright chrome imparts a mirror-like finish to items such as metal furniture frames and automotive trim. Thicker deposits, up to 1000 μm, are called hard chrome and are used in industrial equipment to reduce friction and wear. The traditional solution used for industrial hard chrome plating is made up of about 250 g/L of CrO3 and about 2.5 g/L of sulfate (SO42−). In solution, the chrome exists as chromic acid, known as hexavalent chromium. A high current is used, in part to stabilize a thin layer of chromium(+2) at the surface of the plated work.
Acid chrome baths have poor throwing power: fine details or holes that are further away receive less current, resulting in poor plating. Zinc plating Zinc coatings prevent oxidation of the protected metal by forming a barrier and by acting as a sacrificial anode if this barrier is damaged. Zinc oxide is a fine white dust that (in contrast to iron oxide) does not cause a breakdown of the substrate's surface integrity as it is formed. Indeed, the zinc oxide, if undisturbed, can act as a barrier to further oxidation, in a way similar to the protection afforded to aluminum and stainless steels by their oxide layers. The majority of hardware parts are zinc-plated, rather than cadmium-plated. Zinc-nickel plating Zinc-nickel plating is one of the best corrosion-resistant finishes available, offering over 5 times the protection of conventional zinc plating and up to 1,500 hours of neutral salt spray test performance. This plating is a combination of a high-nickel zinc-nickel alloy (10–15% nickel) and some variation of chromate. The most common mixed chromates include hexavalent iridescent, trivalent or black trivalent chromate. Used to protect steel, cast iron, brass, copper, and other materials, this acidic plating is an environmentally safe option. Hexavalent chromate has been classified as a human carcinogen by the EPA and OSHA. Tin plating The tin-plating process is used extensively to protect both ferrous and nonferrous surfaces. Tin is a useful metal for the food processing industry since it is non-toxic, ductile and corrosion-resistant. The excellent ductility of tin allows a tin-coated base metal sheet to be formed into a variety of shapes without damage to the surface tin layer. It provides sacrificial protection for copper, nickel and other non-ferrous metals, but not for steel. Tin is also widely used in the electronics industry because of its ability to protect the base metal from oxidation, thus preserving its solderability.
In electronic applications, 3% to 7% lead may be added to improve solderability and to prevent the growth of metallic "whiskers" in compression-stressed deposits, which would otherwise cause electrical shorting. However, RoHS (Restriction of Hazardous Substances) regulations enacted beginning in 2006 require that no lead be added intentionally and that the maximum percentage not exceed 1%. Some exemptions have been issued to RoHS requirements in critical electronics applications due to failures which are known to have occurred as a result of tin whisker formation. Alloy plating In some cases, it is desirable to co-deposit two or more metals, resulting in an electroplated alloy deposit. Depending on the alloy system, an electroplated alloy may be solid-solution strengthened or precipitation hardened by heat treatment to improve the plating's physical and chemical properties. Nickel-cobalt is a common electroplated alloy. Composite plating Metal matrix composite plating can be manufactured when a substrate is plated in a bath containing a suspension of ceramic particles. Careful selection of the size and composition of the particles can fine-tune the deposit for wear resistance, high temperature performance, or mechanical strength. Tungsten carbide, silicon carbide, chromium carbide, and aluminum oxide (alumina) are commonly used in composite electroplating. Cadmium plating Cadmium plating is under scrutiny because of the environmental toxicity of the cadmium metal. Cadmium plating is widely used in some applications in the aerospace, military, and aviation fields. However, it is being phased out due to its toxicity. Military and aerospace component manufacturers, such as Amphenol Aerospace, have recently been exploring drop-in electroplating replacements for use with currently fielded equipment in order to support the phaseout of the dangerous finish. Cadmium plating (or cad.
plating) offers a long list of technical advantages, such as excellent corrosion resistance even at relatively low thickness and in salt atmospheres, softness and malleability, freedom from sticky and/or bulky corrosion products, galvanic compatibility with aluminum, and freedom from stick-slip, thus allowing reliable torquing of plated threads; it can be dyed to many colors and clear, has good lubricity and solderability, and works well either as a final finish or as a paint base. If environmental concerns matter, in most aspects cadmium plating can be directly replaced with gold plating, as it shares most of the material properties, but gold is more expensive and cannot serve as a paint base. Nickel plating Nickel is electroplated by using a Watts bath, an electrolytic cell having a nickel anode and an electrolyte containing nickel sulfate, nickel chloride, and boric acid. Other nickel salts such as nickel ammonium sulfate are sometimes used instead of nickel sulfate. Electroless nickel plating Electroless nickel plating, also known as enickel and NiP, offers many advantages: uniform layer thickness over most complicated surfaces, direct plating of ferrous metals (steel), and superior wear and corrosion resistance compared to electroplated nickel or chrome. Much of the chrome plating done in the aerospace industry can be replaced with electroless nickel plating; environmental costs, the cost of hexavalent chromium waste disposal, and the notorious tendency of electroplating toward uneven current distribution all favor electroless nickel plating. Electroless nickel plating is a self-catalyzing process; the resultant nickel layer is a NiP compound with 7–11% phosphorus content. Properties of the resultant layer, such as hardness and wear resistance, are greatly altered by bath composition and deposition temperature, which should be regulated with 1 °C precision, typically at 91 °C.
During bath circulation, any particles in it will also become nickel-plated; this effect is used to advantage in processes which deposit plating with particles like silicon carbide (SiC) or polytetrafluoroethylene (PTFE). While superior compared to many other plating processes, it is expensive because the process is complex. Moreover, the process is lengthy even for thin layers. When only corrosion resistance or surface treatment is of concern, very strict bath composition and temperature control is not required and the process is used for plating many tons in one bath at once. Electroless nickel plating layers are known to provide extreme surface adhesion when plated properly. Electroless nickel plating is non-magnetic and amorphous. Electroless nickel plating layers are not easily solderable, nor do they seize with other metals or another electroless nickel-plated workpiece under pressure. This effect benefits electroless nickel-plated screws made out of malleable materials like titanium. Electrical resistance is higher compared to pure metal plating. Aluminum plating "Aluminum plating" can refer to either plating on aluminum or the plating of aluminum on other materials.
Technology
Metallurgy
null
1208420
https://en.wikipedia.org/wiki/Three-body%20problem
Three-body problem
In physics, specifically classical mechanics, the three-body problem is to take the initial positions and velocities (or momenta) of three point masses that orbit each other in space and calculate their subsequent trajectories using Newton's laws of motion and Newton's law of universal gravitation. Unlike the two-body problem, the three-body problem has no general closed-form solution, meaning there is no equation that always solves it. When three bodies orbit each other, the resulting dynamical system is chaotic for most initial conditions. Because there are no solvable equations for most three-body systems, the only way to predict the motions of the bodies is to estimate them using numerical methods. The three-body problem is a special case of the n-body problem. Historically, the first specific three-body problem to receive extended study was the one involving the Earth, the Moon, and the Sun. In an extended modern sense, a three-body problem is any problem in classical mechanics or quantum mechanics that models the motion of three particles. Mathematical description The mathematical statement of the three-body problem can be given in terms of the Newtonian equations of motion for the vector positions $\mathbf{r}_i$ of three gravitationally interacting bodies with masses $m_i$:

$$\ddot{\mathbf{r}}_1 = -G m_2 \frac{\mathbf{r}_1 - \mathbf{r}_2}{|\mathbf{r}_1 - \mathbf{r}_2|^3} - G m_3 \frac{\mathbf{r}_1 - \mathbf{r}_3}{|\mathbf{r}_1 - \mathbf{r}_3|^3},$$
$$\ddot{\mathbf{r}}_2 = -G m_3 \frac{\mathbf{r}_2 - \mathbf{r}_3}{|\mathbf{r}_2 - \mathbf{r}_3|^3} - G m_1 \frac{\mathbf{r}_2 - \mathbf{r}_1}{|\mathbf{r}_2 - \mathbf{r}_1|^3},$$
$$\ddot{\mathbf{r}}_3 = -G m_1 \frac{\mathbf{r}_3 - \mathbf{r}_1}{|\mathbf{r}_3 - \mathbf{r}_1|^3} - G m_2 \frac{\mathbf{r}_3 - \mathbf{r}_2}{|\mathbf{r}_3 - \mathbf{r}_2|^3},$$

where $G$ is the gravitational constant. As astronomer Juhan Frank describes, "These three second-order vector differential equations are equivalent to 18 first order scalar differential equations." As June Barrow-Green notes with regard to an alternative presentation, if $P_i$ represent three particles with masses $m_i$, mutual distances $P_i P_j = r_{ij}$, and coordinates $q_{ij}$ $(i, j = 1, 2, 3)$ in an inertial coordinate system ... the problem is described by nine second-order differential equations.
The problem can also be stated equivalently in the Hamiltonian formalism, in which case it is described by a set of 18 first-order differential equations, one for each component of the positions $\mathbf{r}_i$ and momenta $\mathbf{p}_i$:

$$\frac{d\mathbf{r}_i}{dt} = \frac{\partial \mathcal{H}}{\partial \mathbf{p}_i}, \qquad \frac{d\mathbf{p}_i}{dt} = -\frac{\partial \mathcal{H}}{\partial \mathbf{r}_i},$$

where $\mathcal{H}$ is the Hamiltonian:

$$\mathcal{H} = \frac{|\mathbf{p}_1|^2}{2m_1} + \frac{|\mathbf{p}_2|^2}{2m_2} + \frac{|\mathbf{p}_3|^2}{2m_3} - \frac{G m_1 m_2}{|\mathbf{r}_1 - \mathbf{r}_2|} - \frac{G m_2 m_3}{|\mathbf{r}_2 - \mathbf{r}_3|} - \frac{G m_3 m_1}{|\mathbf{r}_3 - \mathbf{r}_1|}.$$

In this case, $\mathcal{H}$ is simply the total energy of the system, gravitational plus kinetic. Restricted three-body problem In the restricted three-body problem formulation, in the description of Barrow-Green, two... bodies revolve around their centre of mass in circular orbits under the influence of their mutual gravitational attraction, and... form a two body system... [whose] motion is known. A third body (generally known as a planetoid), assumed massless with respect to the other two, moves in the plane defined by the two revolving bodies and, while being gravitationally influenced by them, exerts no influence of its own. Per Barrow-Green, "[t]he problem is then to ascertain the motion of the third body." That is to say, this two-body motion is taken to consist of circular orbits around the center of mass, and the planetoid is assumed to move in the plane defined by the circular orbits. (That is, it is useful to consider the effective potential.) With respect to a rotating reference frame, the two co-orbiting bodies are stationary, and the third can be stationary as well at the Lagrangian points, or move around them, for instance on a horseshoe orbit. The restricted three-body problem is easier to analyze theoretically than the full problem. It is of practical interest as well since it accurately describes many real-world problems, the most important example being the Earth–Moon–Sun system. For these reasons, it has occupied an important role in the historical development of the three-body problem. Mathematically, the problem is stated as follows. Let $m_1$ and $m_2$ be the masses of the two massive bodies, with (planar) coordinates $(x_1, y_1)$ and $(x_2, y_2)$, and let $(x, y)$ be the coordinates of the planetoid.
For simplicity, choose units such that the distance between the two massive bodies, as well as the gravitational constant, are both equal to 1. Then, the motion of the planetoid is given by:

$$\ddot{x} = -m_1 \frac{x - x_1}{r_1^3} - m_2 \frac{x - x_2}{r_2^3}, \qquad \ddot{y} = -m_1 \frac{y - y_1}{r_1^3} - m_2 \frac{y - y_2}{r_2^3},$$

where $r_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$. In this form the equations of motion carry an explicit time dependence through the coordinates $x_i(t)$, $y_i(t)$; however, this time dependence can be removed through a transformation to a rotating reference frame, which simplifies any subsequent analysis. Solutions General solution There is no general closed-form solution to the three-body problem. In other words, it does not have a general solution that can be expressed in terms of a finite number of standard mathematical operations. Moreover, the motion of three bodies is generally non-repeating, except in special cases. However, in 1912 the Finnish mathematician Karl Fritiof Sundman proved that there exists an analytic solution to the three-body problem in the form of a Puiseux series, specifically a power series in terms of powers of $t^{1/3}$. This series converges for all real $t$, except for initial conditions corresponding to zero angular momentum. In practice, the latter restriction is insignificant since initial conditions with zero angular momentum are rare, having Lebesgue measure zero. An important issue in proving this result is the fact that the radius of convergence for this series is determined by the distance to the nearest singularity. Therefore, it is necessary to study the possible singularities of the three-body problem. As is briefly discussed below, the only singularities in the three-body problem are binary collisions (collisions between two particles at an instant) and triple collisions (collisions between three particles at an instant). Collisions of either kind are somewhat improbable, since it has been shown that they correspond to a set of initial conditions of measure zero. However, no criterion is known that can be placed on the initial state in order to avoid collisions for the corresponding solution.
So Sundman's strategy consisted of the following steps: Using an appropriate change of variables to continue analyzing the solution beyond the binary collision, in a process known as regularization. Proving that triple collisions only occur when the angular momentum $\mathbf{L}$ vanishes. By restricting the initial data to $\mathbf{L} \neq 0$, he removed all real singularities from the transformed equations for the three-body problem. Showing that if $\mathbf{L} \neq 0$, then not only can there be no triple collision, but the system is strictly bounded away from a triple collision. This implies, by Cauchy's existence theorem for differential equations, that there are no complex singularities in a strip (depending on the value of $\mathbf{L}$) in the complex plane centered around the real axis (related to the Cauchy–Kovalevskaya theorem). Finding a conformal transformation that maps this strip into the unit disc. For example, if $s$ is the new variable obtained after the regularization and if $|\ln s| \leq \beta$, then this map is given by $\sigma = \frac{e^{\pi s/(2\beta)} - 1}{e^{\pi s/(2\beta)} + 1}$. This finishes the proof of Sundman's theorem. The corresponding series converges extremely slowly. That is, obtaining a value of meaningful precision requires so many terms that this solution is of little practical use. Indeed, in 1930, David Beloriszky calculated that if Sundman's series were to be used for astronomical observations, then the computations would involve at least $10^{8\,000\,000}$ terms. Special-case solutions In 1767, Leonhard Euler found three families of periodic solutions in which the three masses are collinear at each instant. In 1772, Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with Euler's collinear solutions, these solutions form the central configurations for the three-body problem. These solutions are valid for any mass ratios, and the masses move on Keplerian ellipses. These four families are the only known solutions for which there are explicit analytic formulae.
In the special case of the circular restricted three-body problem, these solutions, viewed in a frame rotating with the primaries, become points called Lagrangian points and labeled L1, L2, L3, L4, and L5, with L4 and L5 being symmetric instances of Lagrange's solution. In work summarized in 1892–1899, Henri Poincaré established the existence of an infinite number of periodic solutions to the restricted three-body problem, together with techniques for continuing these solutions into the general three-body problem. In 1893, Meissel stated what is now called the Pythagorean three-body problem: three masses in the ratio 3:4:5 are placed at rest at the vertices of a 3:4:5 right triangle, with the heaviest body at the right angle and the lightest at the smaller acute angle. Burrau further investigated this problem in 1913. In 1967 Victor Szebehely and C. Frederick Peters established eventual escape of the lightest body for this problem using numerical integration, while at the same time finding a nearby periodic solution. In the 1970s, Michel Hénon and Roger A. Broucke each found a set of solutions that form part of the same family of solutions: the Broucke–Hénon–Hadjidemetriou family. In this family, the three objects all have the same mass and can exhibit both retrograde and direct forms. In some of Broucke's solutions, two of the bodies follow the same path. In 1993, physicist Cris Moore at the Santa Fe Institute found a zero angular momentum solution with three equal masses moving around a figure-eight shape. In 2000, mathematicians Alain Chenciner and Richard Montgomery proved its formal existence. The solution has been shown numerically to be stable for small perturbations of the mass and orbital parameters, which makes it possible for such orbits to be observed in the physical universe. But it has been argued that this is unlikely since the domain of stability is small. 
For instance, the probability of a binary–binary scattering event resulting in a figure-8 orbit has been estimated to be a small fraction of a percent. In 2013, physicists Milovan Šuvakov and Veljko Dmitrašinović at the Institute of Physics in Belgrade discovered 13 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2015, physicist Ana Hudomal discovered 14 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2017, researchers Xiaoming Li and Shijun Liao found 669 new periodic orbits of the equal-mass zero-angular-momentum three-body problem. This was followed in 2018 by an additional 1,223 new solutions for a zero-angular-momentum system of unequal masses. In 2018, Li and Liao reported 234 solutions to the unequal-mass "free-fall" three-body problem. The free-fall formulation starts with all three bodies at rest. Because of this, the masses in a free-fall configuration do not orbit in a closed "loop", but travel forward and backward along an open "track". In 2023, Ivan Hristov, Radoslava Hristova, Dmitrašinović and Kiyotaka Tanikawa published a search for periodic free-fall orbits of the three-body problem, limited to the equal-mass case, and found 12,409 distinct solutions. Numerical approaches Using a computer, the problem may be solved to arbitrarily high precision using numerical integration. There have been attempts at creating computer programs that numerically solve the three-body problem (and by extension, the n-body problem) involving both electromagnetic and gravitational interactions, and incorporating modern theories of physics such as special relativity. In addition, using the theory of random walks, an approximate probability of different outcomes may be computed.
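The numerical approach can be sketched in a few lines. The following illustrative example (not one of the published solution families' code; units with G = 1 are assumed, and the initial conditions are the commonly quoted figure-eight values) integrates the Newtonian equations of motion with a leapfrog scheme and uses energy conservation as a sanity check:

```python
import numpy as np

G = 1.0                        # gravitational constant in chosen units
m = np.array([1.0, 1.0, 1.0])  # three equal masses (an assumption)

def accelerations(r):
    """Pairwise Newtonian gravitational accelerations, r has shape (3, 2)."""
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d)**3
    return a

def energy(r, v):
    """Total energy: kinetic plus gravitational potential."""
    kin = 0.5 * sum(m[i] * v[i] @ v[i] for i in range(3))
    pot = -sum(G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
               for i in range(3) for j in range(i + 1, 3))
    return kin + pot

# Commonly quoted figure-eight initial conditions (planar, zero momentum)
r = np.array([[-0.97000436,  0.24308753],
              [ 0.97000436, -0.24308753],
              [ 0.0,         0.0       ]])
v = np.array([[ 0.46620368,  0.43236573],
              [ 0.46620368,  0.43236573],
              [-0.93240737, -0.86473146]])

E0, dt = energy(r, v), 1e-3
for _ in range(5000):          # leapfrog (velocity Verlet) integration
    a = accelerations(r)
    v_half = v + 0.5 * dt * a
    r = r + dt * v_half
    v = v_half + 0.5 * dt * accelerations(r)

print(abs(energy(r, v) - E0))  # small energy drift indicates a sound integration
```

A symplectic integrator such as leapfrog is the usual choice here because it bounds the long-term energy drift, which a naive Euler step does not.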
History The gravitational problem of three bodies in its traditional sense dates in substance from 1687, when Isaac Newton published his Philosophiæ Naturalis Principia Mathematica, in which, after having solved the two-body problem, Newton attempted to figure out whether any long-term stability is possible, especially for a system like that of the Earth, the Moon, and the Sun. Guided by the major Renaissance astronomers Nicolaus Copernicus, Tycho Brahe and Johannes Kepler, Newton introduced later generations to the beginning of the gravitational three-body problem. In Proposition 66 of Book 1 of the Principia, and its 22 Corollaries, Newton took the first steps in the definition and study of the problem of the movements of three massive bodies subject to their mutually perturbing gravitational attractions. In Propositions 25 to 35 of Book 3, Newton also took the first steps in applying his results of Proposition 66 to the lunar theory, the motion of the Moon under the gravitational influence of Earth and the Sun. Later, this problem was also applied to other planets' interactions with the Earth and the Sun. The physical problem was first addressed by Amerigo Vespucci and subsequently by Galileo Galilei, as well as Simon Stevin, though they did not realize what they had contributed. Though Galileo determined that the speed of fall of all bodies changes uniformly and in the same way, he did not apply it to planetary motions. In 1499, Vespucci had used knowledge of the position of the Moon to determine his position in Brazil. The problem became of technical importance in the 1720s, as an accurate solution would be applicable to navigation, specifically for the determination of longitude at sea; this was solved in practice by John Harrison's invention of the marine chronometer. However, the accuracy of the lunar theory was low, due to the perturbing effect of the Sun and planets on the motion of the Moon around Earth.
Jean le Rond d'Alembert and Alexis Clairaut, who developed a longstanding rivalry, both attempted to analyze the problem in some degree of generality; they submitted their competing first analyses to the Académie Royale des Sciences in 1747. It was in connection with their research, in Paris during the 1740s, that the name "three-body problem" began to be commonly used. An account published in 1761 by Jean le Rond d'Alembert indicates that the name was first used in 1747. From the end of the 19th century to the early 20th century, scientists developed an approach to the three-body problem using short-range attractive two-body forces; this gave P. F. Bedaque, H.-W. Hammer and U. van Kolck the idea of renormalizing the short-range three-body problem, providing a rare example of a renormalization group limit cycle at the beginning of the 21st century. George William Hill worked on the restricted problem in the late 19th century with an application to the motion of Venus and Mercury. At the beginning of the 20th century, Karl Sundman approached the problem mathematically and systematically by providing a functional theoretical proof of the problem valid for all values of time. It was the first time the three-body problem had been solved theoretically. However, because the solution was not qualitative enough, and converged too slowly for scientists to apply it practically, it still left some issues unresolved. In the 1970s, V. Efimov discovered an implication of two-body forces for the three-body problem, now named the Efimov effect. In 2017, Shijun Liao and Xiaoming Li applied a new strategy of numerical simulation for chaotic systems called the clean numerical simulation (CNS), with the use of a national supercomputer, to successfully obtain 695 families of periodic solutions of the three-body system with equal mass. In 2019, Breen et al.
announced a fast neural network solver for the three-body problem, trained using a numerical integrator. In September 2023, several possible solutions to the problem were reportedly found. Other problems involving three bodies The term "three-body problem" is sometimes used in the more general sense to refer to any physical problem involving the interaction of three bodies. A quantum-mechanical analogue of the gravitational three-body problem in classical mechanics is the helium atom, in which a helium nucleus and two electrons interact according to the inverse-square Coulomb interaction. Like the gravitational three-body problem, the helium atom cannot be solved exactly. In both classical and quantum mechanics, however, there exist nontrivial interaction laws besides the inverse-square force that do lead to exact analytic three-body solutions. One such model consists of a combination of harmonic attraction and a repulsive inverse-cube force. This model is considered nontrivial since it is associated with a set of nonlinear differential equations containing singularities (compared with, e.g., harmonic interactions alone, which lead to an easily solved system of linear differential equations). In these two respects it is analogous to (insoluble) models having Coulomb interactions, and as a result has been suggested as a tool for intuitively understanding physical systems like the helium atom. Within the point vortex model, the motion of vortices in a two-dimensional ideal fluid is described by equations of motion that contain only first-order time derivatives. That is, in contrast to Newtonian mechanics, it is the velocity, and not the acceleration, that is determined by the vortices' relative positions. As a consequence, the three-vortex problem is still integrable, while at least four vortices are required to obtain chaotic behavior.
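The first-order character of point-vortex motion is easy to see in code. The sketch below (an illustrative aside with assumed equal circulations, using the standard complex-variable form of the point-vortex equations) integrates three identical vortices placed on an equilateral triangle; that configuration rotates rigidly, so the pairwise distances are preserved:

```python
import numpy as np

gamma = np.array([1.0, 1.0, 1.0])   # vortex circulations (assumed equal)

def vortex_velocities(z):
    """First-order point-vortex dynamics: each vortex moves with the
    velocity induced by the others (no accelerations, unlike gravity)."""
    v = np.zeros(3, dtype=complex)
    for k in range(3):
        for j in range(3):
            if j != k:
                # conjugate velocity: conj(dz_k/dt) = gamma_j / (2*pi*i*(z_k - z_j))
                v[k] += gamma[j] / (2j * np.pi * (z[k] - z[j]))
    return np.conj(v)

# Three equal vortices at the vertices of an equilateral triangle
z = np.exp(2j * np.pi * np.arange(3) / 3)

dt = 1e-3
for _ in range(2000):               # midpoint (RK2) integration
    k1 = vortex_velocities(z)
    k2 = vortex_velocities(z + 0.5 * dt * k1)
    z = z + dt * k2

# The equilateral configuration rotates rigidly: pairwise distances persist.
print(abs(abs(z[0] - z[1]) - np.sqrt(3)))   # close to zero
```

Note that the state is positions only; contrast this with the gravitational case, where positions and velocities together are needed to specify the state.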
One can draw parallels between the motion of a passive tracer particle in the velocity field of three vortices and the restricted three-body problem of Newtonian mechanics. The gravitational three-body problem has also been studied using general relativity. Physically, a relativistic treatment becomes necessary in systems with very strong gravitational fields, such as near the event horizon of a black hole. However, the relativistic problem is considerably more difficult than in Newtonian mechanics, and sophisticated numerical techniques are required. Even the full two-body problem (i.e. for an arbitrary ratio of masses) does not have a rigorous analytic solution in general relativity. n-body problem The three-body problem is a special case of the n-body problem, which describes how n objects move under one of the physical forces, such as gravity. These problems have a global analytical solution in the form of a convergent power series, as was proven by Karl F. Sundman for n = 3 and by Qiudong Wang for n > 3 (see n-body problem for details). However, the Sundman and Wang series converge so slowly that they are useless for practical purposes; therefore, it is currently necessary to approximate solutions by numerical analysis in the form of numerical integration or, for some cases, classical trigonometric series approximations (see n-body simulation). Atomic systems, e.g. atoms, ions, and molecules, can be treated in terms of the quantum n-body problem. Among classical physical systems, the n-body problem usually refers to a galaxy or to a cluster of galaxies; planetary systems, such as stars, planets, and their satellites, can also be treated as n-body systems. Some applications are conveniently treated by perturbation theory, in which the system is considered as a two-body problem plus additional forces causing deviations from a hypothetical unperturbed two-body trajectory.
Mathematics
Dynamical systems
null
1209000
https://en.wikipedia.org/wiki/Electric%20flux
Electric flux
In electromagnetism, electric flux is the total electric field that crosses a given surface. The electric flux through a closed surface is directly proportional to the total charge contained within that surface. The electric field E can exert a force on an electric charge at any point in space. The electric field is the negative gradient of the electric potential. Overview An electric charge, such as a single electron in space, has an electric field surrounding it. In pictorial form, this electric field is shown as "lines of flux" being radiated from a dot (the charge). These are called Gauss lines. Note that field lines are a graphic illustration of field strength and direction and have no physical meaning as isolated lines. The density of these lines corresponds to the electric field strength, which could also be called the electric flux density: the number of "lines" per unit area. Electric flux is directly proportional to the total number of electric field lines going through a surface. For simplicity in calculations, it is often convenient to consider a surface perpendicular to the flux lines. If the electric field is uniform, the electric flux passing through a surface of vector area S is Φ_E = E · S = ES cos θ, where E is the electric field (having the unit V/m), E is its magnitude, S is the area of the surface, and θ is the angle between the electric field lines and the normal (perpendicular) to S. For a non-uniform electric field, the electric flux dΦ_E through a small surface area dS is given by dΦ_E = E · dS (the electric field, E, multiplied by the component of area perpendicular to the field). The electric flux over a surface S is therefore given by the surface integral: Φ_E = ∬_S E · dS, where E is the electric field and dS is an infinitesimal area on the surface with an outward-facing surface normal defining its direction.
For a closed Gaussian surface, electric flux is given by: Φ_E = ∮_S E · dA = Q/ε₀, where E is the electric field, dA is an infinitesimal area on the closed surface, Q is the total electric charge inside the surface, and ε₀ is the electric constant (a universal constant, also called the permittivity of free space), ε₀ ≈ 8.854 × 10⁻¹² F/m. This relation is known as Gauss's law for electric fields in its integral form and it is one of Maxwell's equations. While the electric flux is not affected by charges that are not within the closed surface, the net electric field can be affected by charges that lie outside the closed surface. While Gauss's law holds for all situations, it is most useful for "by hand" calculations when high degrees of symmetry exist in the electric field. Examples include spherical and cylindrical symmetry. The SI unit of electric flux is the volt-meter (V·m), or, equivalently, the newton-meter squared per coulomb (N·m²·C⁻¹). Thus, the unit of electric flux expressed in terms of SI base units is kg·m³·s⁻³·A⁻¹. Its dimensional formula is M L³ T⁻³ I⁻¹.
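Gauss's law can be illustrated numerically by approximating the closed-surface integral of E · dA over a sphere: the flux comes out close to Q/ε₀ when the charge is enclosed, and close to zero when it lies outside. A rough midpoint-quadrature sketch (the sphere radius, charge value and positions, and grid resolution are arbitrary choices for illustration):

```python
import numpy as np

EPS0 = 8.854187817e-12  # vacuum permittivity (electric constant), F/m

def point_charge_field(q, r_source, r):
    """Electric field of a point charge q located at r_source, evaluated at r."""
    d = r - r_source
    return q * d / (4 * np.pi * EPS0 * np.linalg.norm(d) ** 3)

def flux_through_sphere(q, r_source, radius=1.0, n=100):
    """Midpoint-rule approximation of the closed-surface integral of E . dA
    over a sphere of the given radius centred at the origin."""
    flux = 0.0
    dtheta = np.pi / n
    dphi = np.pi / n  # 2n azimuthal cells of width pi/n cover 0..2pi
    for theta in (np.arange(n) + 0.5) * dtheta:
        for phi in (np.arange(2 * n) + 0.5) * dphi:
            normal = np.array([np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)])
            dA = radius ** 2 * np.sin(theta) * dtheta * dphi
            e = point_charge_field(q, r_source, radius * normal)
            flux += (e @ normal) * dA
    return flux

q = 1e-9  # a 1 nC charge (arbitrary)
inside = flux_through_sphere(q, np.array([0.3, 0.0, 0.0]))   # charge enclosed
outside = flux_through_sphere(q, np.array([0.0, 0.0, 2.0]))  # charge outside
print(abs(inside - q / EPS0) / (q / EPS0) < 1e-2)  # prints True: flux ~ Q/eps0
print(abs(outside) < 1e-2 * q / EPS0)              # prints True: net flux ~ 0
```

Note that the enclosed-charge result does not depend on where inside the sphere the charge sits, which is exactly what the integral form of Gauss's law asserts.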
Physical sciences
Electrostatics
Physics
1209183
https://en.wikipedia.org/wiki/Colobinae
Colobinae
The Colobinae or leaf-eating monkeys are a subfamily of the Old World monkey family that includes 61 species in 11 genera, including the black-and-white colobus, the large-nosed proboscis monkey, and the gray langurs. Some classifications split the colobine monkeys into two tribes, while others split them into three groups. Both classifications put the three African genera Colobus, Piliocolobus, and Procolobus in one group; these genera are distinct in that they have stub thumbs (Greek κολοβός kolobós = "docked"). The various Asian genera are placed into another one or two groups. Analysis of mtDNA confirms the Asian species form two distinct groups, one of langurs and the other of the "odd-nosed" species, but is inconsistent as to the relationships of the gray langurs; some studies suggest that the gray langurs are not closely related to either of these groups, while others place them firmly within the langur group. Characteristics Colobines are medium-sized primates with long tails (except for the pig-tailed langur) and diverse colorations. The coloring of nearly all young animals differs remarkably from that of the adults. Most species are arboreal, although some live a more terrestrial life. They are found in many different habitats of different climate zones (rainforests, mangroves, mountain forests, and savannah), but not in deserts and other dry areas. They live in groups, but their social forms vary. Colobines are folivorous, though their diet may be supplemented with flowers, fruits and the occasional insect. To aid in digestion, particularly of hard-to-digest leaves, they have multichambered, complex stomachs, making them the only primates with foregut fermentation. Foregut fermenters use bacteria to detoxify plant compounds before they reach the intestine, where the toxins could otherwise be absorbed.
Foregut fermentation is also associated with higher protein extraction and efficient digestion of fiber; it is the dominant form of digestion in diverse herbivore taxa, including most Artiodactyla (e.g., deer, cattle, antelope), sloths, and kangaroos. In contrast, the less diverse howler monkeys of the New World rely on hindgut fermentation – occurring lower in the gut, in the colon or cecum – much like horses and elephants. Unlike the other subfamily of Old World monkeys, the Cercopithecinae, they do not possess cheek pouches. Gestation averages six to seven months. Young are weaned at about one year and are mature at three to six years. Their life expectancy is approximately 20 years. Classification and evolution Colobinae is split into two tribes: Colobini, found in Africa, and Presbytini, found in Asia. Based on fossil records, the tribes split between 10 and 13 million years ago. The Colobini tribe contains three genera, the black-and-white colobuses, red colobuses, and the olive colobus, all of which are found in Africa. The Asian Presbytini comprises seven genera split into two clades, the odd-nosed group and the langur group. The discordant gene tree topologies and divergence age estimates suggest that hybridization, particularly involving female introgression from Piliocolobus/Procolobus into Colobus and male introgression from Semnopithecus into Trachypithecus, played a prominent role in shaping the phylogenetic relationships of African and Asian colobine monkeys during their evolutionary history. The earliest remains of Colobinae are known from the Tugen Hills of Kenya, dating to 12.5 million years ago. The earliest fossils of the subfamily in Eurasia are those of Mesopithecus found in Greece, dating to around 8.2 million years ago.
Family Cercopithecidae
  Subfamily Cercopithecinae
  Subfamily Colobinae
    Tribe Colobini
      Genus Colobus - black-and-white colobus monkeys
      Genus Piliocolobus - red colobus monkeys
      Genus Procolobus - olive colobus
      Genus Cercopithecoides
    Tribe Presbytini
      Langur (leaf monkey) group
        Genus Trachypithecus - lutungs
        Genus Presbytis - surilis
        Genus Semnopithecus - gray langurs
      Odd-nosed group
        Genus Pygathrix - doucs
        Genus Rhinopithecus - snub-nosed monkeys
        Genus Nasalis - proboscis monkey
        Genus Simias - pig-tailed langur
      Genus Mesopithecus
Hybrids Intergeneric hybrids are known to occur within the subfamily Colobinae. In India, gray langurs (Semnopithecus spp.) are known to hybridize with Nilgiri langurs (Trachypithecus johnii).
Biology and health sciences
Old World monkeys
Animals
1209545
https://en.wikipedia.org/wiki/Head
Head
A head is the part of an organism which usually includes the ears, brain, forehead, cheeks, chin, eyes, nose, and mouth, each of which aids in various sensory functions such as sight, hearing, smell, and taste. Some very simple animals may not have a head, but many bilaterally symmetric forms do, regardless of size. Heads develop in animals by an evolutionary trend known as cephalization. In bilaterally symmetrical animals, nervous tissue concentrates at the anterior region, forming structures responsible for information processing. Through biological evolution, sense organs and feeding structures also concentrate in the anterior region; these collectively form the head. Human head The human head is an anatomical unit that consists of the skull, hyoid bone and cervical vertebrae. The term "skull" collectively denotes the mandible (lower jaw bone) and the cranium (upper portion of the skull that houses the brain). Sculptures of human heads are generally based on a skeletal structure that consists of a cranium, jawbone, and cheekbone. Though the number of muscles making up the face is generally consistent between sculptures, the shape of the muscles varies widely based on the function, development, and expressions reflected on the faces of the subjects. Proponents of identism believe that the mind is identical to the brain. The philosopher John Searle asserts his identist beliefs, stating "the brain is the only thing in the human head". Similarly, Dr. Henry Bennet-Clark has stated that the head encloses billions of "miniagents and microagents (with no single Boss)". Other animals The evolution of a head is associated with the cephalization that occurred in Bilateria some 555 million years ago. Arthropods In some arthropods, especially trilobites, the cephalon, or cephalic region, is the portion of the head that is made up of "fused segments". Insects A typical insect head is composed of the eyes, the antennae, and the components of the mouth.
As these components differ substantially from insect to insect, they form important identification links. In several types of insects, the eyes take the form of a pair of compound eyes with multiple facets. In many other types of insects, the compound eyes have a "single facet or group of single facets". In some cases, two or three ocelli (single-faceted organs) may be seen as marks on the dorsal side of the head. The antennae on the insect's head are segmented attachments, found in pairs, that are usually located between the eyes. They vary in shape and size, taking the form of filaments or of enlarged or clubbed structures. Insects have mouthparts in various shapes depending on their feeding habits. The labrum is the "upper lip", which lies at the front of the head and is its most exterior part. A pair of mandibles is found behind the labrum, flanking the sides of the mouth, succeeded by a pair of maxillae, each of which bears a maxillary palp. At the back of the mouth is the labium, or lower lip. Some insects also have an extra mouthpart termed the hypopharynx, which is usually located between the maxillae. Vertebrates and the "new head hypothesis" Though invertebrate chordates – such as the tunicate larvae or the lancelets – have heads, there has been a question of how the vertebrate head, characterized by a bony skull clearly separated from the main body, might have evolved from the head structures of these animals. According to Hyman (1979), the evolution of the head in the vertebrates has occurred by the fusion of a fixed number of anterior segments, in the same manner as in other "heteronomously segmented animals". In some cases, segments or a portion of the segments disappear. The head segments also lose most of their systems, except for the nervous system.
With the progressive development of cephalization, "the head incorporates more and more of the adjacent segments into its structure, so that in general it may be said that the higher the degree of cephalization the greater is the number of segments composing the head". In the 1980s, the "new head hypothesis" was proposed, suggesting that the vertebrate head is an evolutionary novelty resulting from the emergence of neural crest and cranial placodes. In 2014, a transient larval tissue of the lancelet was found to be virtually indistinguishable from the neural crest-derived cartilage which forms the vertebrate skull, suggesting that persistence of this tissue and expansion into the entire headspace could be a viable evolutionary route to formation of the vertebrate head. In society and culture Heraldry The heads of humans and other animals are commonly recurring charges in heraldry. Heads of humans are sometimes blazoned simply as a "man's head", but are far more frequently described in greater detail, either characteristic of a particular race or nationality (such as Moors' heads, Saxons' heads, Egyptians' heads or Turks' heads), or specifically identified (such as the head of Moses in the crest of Hilton, or the head of St. John the Baptist in the crest of the London Company of Tallowchandlers). Several varieties of women's heads also occur, including maidens' heads (often couped under the bust, with hair disheveled), ladies' heads, nuns' heads (often veiled), and occasionally queens' heads. The arms of Devaney of Norfolk include "three nun's heads veiled couped at the shoulders proper," and the bust of a queen occurs in the arms of Queenborough, Kent. Infants' or children's heads are often couped at the shoulders with a snake wrapped around the neck (e.g. "Argent, a boy's head proper, crined or, couped below the shoulders, vested gules, tarnished gold," in the arms of Boyman).
Art One of the ways of drawing sketches of heads, as Jack Hamm advises, is to develop them in six well-defined steps, starting with an egg-shaped outline of the head. The female head, in particular, is sketched in a double-circle design procedure, with proportions considered an ideal of the female head. In the first circle, the diameter is divided into five sections, each one eye in width. The sketch is then developed over a series of ten defined steps, with the smaller circle imposed partially over the larger circle at the lower end at the fourth stage. Eyes and chins are fitted in various shapes to form the head. Leonardo da Vinci, considered one of the world's greatest artists, drew sketches of human anatomy using grid structures. His image of the face drawn on the grid structure principle is in perfect proportion. In this genre, using the technique of pen and ink, Leonardo created a sketch which is a "Study on the proportions of head and eyes" (pictured). Idiomatic expressions An idiom is a phrase or a fixed expression that has a figurative, or sometimes literal, meaning.
"To be big-headed" – to be overly full of oneself
"To come to a head" – to reach a critical stage and require immediate action
"To bite someone's head off" – to criticize someone strongly
"Can't make head or tail of something" – cannot understand something
"A head start" – an early start that provides an advantage over others
"Head and shoulders above someone or something" – better than someone or something in some way
"To want someone's head on a platter" – to want someone severely punished
"To bang your head against a brick wall" – to continually try to achieve something without success
"To have one's head in the clouds" – to not pay attention to what is happening around one because one is so absorbed by one's own thoughts
Engineering and scientific fields The head's function and appearance play an analogous role in the etymology of many technical terms.
Cylinder head, pothead, and weatherhead are three such examples.
Biology and health sciences
Animal: General
null
1209585
https://en.wikipedia.org/wiki/Dwarf%20elephant
Dwarf elephant
Dwarf elephants are prehistoric members of the order Proboscidea which, through the process of allopatric speciation on islands, evolved much smaller body sizes (around shoulder height) in comparison with their immediate ancestors. Dwarf elephants are an example of insular dwarfism, the phenomenon whereby large terrestrial vertebrates (usually mammals) that colonize islands evolve dwarf forms, a phenomenon attributed to adaptation to resource-poor environments and lack of predation and competition. Fossil remains of dwarf elephants have been found on the Mediterranean islands of Cyprus, Malta, Crete, Sicily, Sardinia, the Cyclades Islands and the Dodecanese Islands; these are mostly members of the genus Palaeoloxodon, descending from the large tall straight-tusked elephant (Palaeoloxodon antiquus) of mainland Europe, though two species represent dwarf mammoths. Dwarf species of elephants and Stegodon have been found on the islands of Indonesia and the Philippines, with dwarfed species of Stegodon also having been found in Japan. The Channel Islands of California once supported the pygmy mammoth, a dwarf species descended from Columbian mammoths, while the woolly mammoths that existed on Wrangel Island north of Siberia were once considered dwarfs but are no longer classified as such. The Mediterranean islands Dwarf elephants first inhabited the Mediterranean islands during the Pleistocene, including all the major islands with the apparent exception of Corsica and the Balearics. Mediterranean dwarf elephants have generally been considered members of the genus Palaeoloxodon, derived from the continental straight-tusked elephant, Palaeoloxodon antiquus (Falconer & Cautley, 1847), Syn.: Elephas antiquus. An exception is the dwarf Middle-Late Pleistocene Sardinian mammoth, Mammuthus lamarmorai (Major, 1883), the first endemic elephant of the Mediterranean islands recognized as belonging to the mammoth line.
Mammuthus creticus from the Early Pleistocene of Crete, formerly considered a member of Palaeoloxodon, is now also considered to be a mammoth, and approaches the size of the smallest dwarf elephants. During low sea levels, the Mediterranean islands were colonised again and again, giving rise, sometimes on the same island, to several species (or subspecies) of different body sizes. As the Ice Age came to an end, sea levels rose, stranding elephants on the islands. The island of Sicily appears to have been colonised by proboscideans in at least three separate waves of colonisation. These endemic dwarf elephants were taxonomically different on each island or group of very close islands, like the Cyclades archipelago. There are many uncertainties about the time of colonisation, the phylogenetic relationships and the taxonomic status of dwarf elephants on the Mediterranean islands. Extinction of the insular dwarf elephants has not been correlated with the arrival of humans to the islands. Furthermore, the palaeontologist Othenio Abel suggested in 1914 that the finding of skeletons of such elephants sparked the idea that they belonged to giant one-eyed monsters, because the central nasal opening was thought to be the socket of a single eye, and thus they may have been, for example, the origin of the one-eyed Cyclopes of Greek mythology. Italy and Malta Sicily and Malta were inhabited by two successive waves of dwarf elephants derived from P. antiquus, which first arrived on the islands at least 500,000 years ago. The first of these species is P. falconeri, which is one of the smallest dwarf elephant species at around tall; it was strongly modified from its ancestor in numerous respects and lived in a depauperate fauna with no other large mammal species. Later, around 200,000 years ago, this species was replaced by a second colonisation by P. antiquus, which gave rise to the larger (though still considerably dwarfed) tall species P.
mnaidriensis, which on Sicily lived alongside a number of other large mammal species, including herbivores and carnivores. The youngest records of this species on Sicily date to around 20,000 years ago, close to the time of arrival of modern humans on Sicily. The dwarf mammoth species Mammuthus lamarmorai descended from steppe mammoths (Mammuthus trogontherii) that colonised Sardinia sometime after 450,000 years ago. It is suggested to have survived into the Last Glacial Period, until at least 60-30,000 years ago. Greece Crete Mammuthus creticus is known from remains probably dating to the Early Pleistocene. It likely descends from Mammuthus meridionalis. It is the smallest mammoth and is among the smallest dwarf elephants known, with a shoulder height of about and a weight of about . Palaeoloxodon creutzburgi from the Middle Pleistocene and Late Pleistocene is significantly larger, with an estimated body mass comparable to the living Asian elephant, around 40% the size of its mainland ancestor. Cyclades Remains of dwarf elephants have been briefly reported from Paros, Milos and Serifos in historical publications, but these lack any detailed information. On Kýthnos, the remains of a dwarf elephant were reported in a 1975 publication to have been found associated with lithic artefacts. The age of the find was considered to be uncertain, likely older than 9,000 years, but could not be dated precisely due to a lack of collagen. Additionally, an isolated tusk was reported from the northwest of the island. On Delos, an indeterminate dwarf elephant known from a third molar was reported in 1908. This specimen clearly belongs to a dwarf species, but it is difficult to quantify its size precisely. On Naxos, the species Palaeoloxodon lomolinoi has been described based on a partial skull including the maxilla bones and third molar teeth found near the Trypiti river, of probable Late Pleistocene age. It is estimated to be around 8% the size of P.
antiquus, and had a smaller body size than that represented by the dwarf elephant from Delos. The Eastern Cyclades islands of Delos, Naxos, and Paros were connected during the Last Glacial Period, which suggests that the Delos species and P. lomolinoi were not contemporaneous, with the former possibly being the ancestor of the latter, though nothing can be said for certain. Dodecanese On Rhodes, bones of an unnamed endemic dwarf elephant have been discovered in cave deposits on the east coast. This elephant was similar in size to Palaeoloxodon mnaidriensis, around 20% the size of its mainland ancestor. The remains, though temporally poorly constrained, are suggested to be of Late Pleistocene age. Possible tracks produced by these dwarf elephants have been reported from the southwest of the island. On Tilos, the species Palaeoloxodon tiliensis has been described from remains found in Charkadio cave. This species was medium-sized, around 10% the size of P. antiquus, with a shoulder height of up to and a body mass of . Remains of the species are suggested to date to the Late Pleistocene. Radiocarbon dating done in the 1970s suggested that the species survived until around 3,500 years ago, which would make it the latest surviving Palaeoloxodon species and the youngest elephant in Europe, but these dates are tentative and await corroboration by other research. On Astypalaia, a single tusk of a dwarf elephant of unknown age was excavated in the late 1990s. Due to the isolated status of the island, it very likely represents an endemic species. Though the size of the animal is difficult to constrain precisely, it was probably similar in size to P. tiliensis. On Kasos, which during the Pleistocene was connected with the islands of Karpathos and Saria, a single dwarf Palaeoloxodon molar has been found. Due to the tooth closely resembling those of the species P. creutzburgi from Crete (which is adjacent to Kasos) in size and shape, it has been referred to as P. aff. creutzburgi.
Cyprus The Cyprus dwarf elephant (Palaeoloxodon cypriotes) survived at least until 12,000 years ago, around the time of arrival of modern humans to Cyprus (who may have hunted it), making it one of the latest surviving dwarf elephants. It is also one of the smallest dwarf elephant species, comparable in size to P. falconeri, with an estimated shoulder height of . The species likely evolved from the earlier larger (though still strongly dwarfed) Palaeoloxodon xylophagou known from fossils dating to around 200,000 years ago. Remains of the species were first discovered and recorded by Dorothea Bate in a cave in the Kyrenia hills of northern Cyprus in 1902 and reported in 1903. The Channel Islands of California A population of the Columbian mammoth (Mammuthus columbi) arrived on the northern Channel Islands of California during the late Middle Pleistocene, around 250-150,000 years ago, giving rise to a dwarfed species, the pygmy mammoth (Mammuthus exilis). Channel Islands mammoths ranged from in shoulder height. These mammoths became extinct around 13,000 years ago, around the time of arrival of modern humans to the islands. Indonesia and the Philippines In Indonesia and the Philippines, evidence of a succession of distinct endemic island faunas has been found, including dwarfed elephants and species of Stegodon. Flores During the late Early Pleistocene, Flores was inhabited by the dwarf species Stegodon sondaarii, around 15% of the size of mainland Stegodon species, which was around tall at the shoulder and weighed about . This species became extinct around 1 million years ago, being replaced by Stegodon florensis. 
Stegodon florensis shows a progressive size reduction with time, with the earlier Middle Pleistocene subspecies Stegodon florensis florensis estimated to be around 50% the size of mainland Stegodon species with a shoulder height of around and a body mass of around 1.7 tons, while the later Stegodon florensis insularis from the Late Pleistocene is estimated to be around 17% the size of mainland Stegodon species, with a shoulder height of around and a body mass of about . Stegodon florensis became extinct about 50,000 years ago, around the time of the arrival of modern humans to Flores. Sulawesi During the Late Pliocene-Early Pleistocene on Sulawesi, two species of dwarf proboscideans coinhabited the island, the elephant Stegoloxodon celebensis and Stegodon sompoensis. The former was about tall, while the latter was around 32% the size of mainland Stegodon species, with an estimated body mass of about a ton. Later in the Pleistocene, these animals were replaced by larger-sized species of Stegodon and elephants, with an indeterminate Stegodon species from the Middle Pleistocene of Sulawesi being around 57% the size of mainland species, with an estimated body mass of about 2 tons. Java The species Stegodon trigonocephalus is known from the Early-Middle Pleistocene of Java. A population from the Trinil H.K locality, which likely dates to the Middle Pleistocene, is around 65% the size of mainland Stegodon species. Large individuals are estimated to have reached around at the shoulders, with a body mass of around 5 tons. Other smaller unnamed Stegodon species are also known from the Early Pleistocene on the island. The extinct dwarf elephant species Stegoloxodon indonesicus is also known from the Early Pleistocene of Java, which is probably closely related to S. celebensis from Sulawesi, but whose relationships to other elephants are obscure.
Sumba The species Stegodon sumbaensis of an uncertain Middle-Late Pleistocene age from Sumba is one of the smallest known species, at around 8% of the size of its mainland ancestor, with an estimated body mass of around . Timor The species Stegodon timorensis is known from the Middle Pleistocene of Timor. It is a small-sized species, only slightly larger than S. sondaarii, and around 23% the size of mainland species, with an estimated body mass of around . Luzon On Luzon, the dwarf Stegodon luzonensis is known from remains found in the Manila Basin of an uncertain Pleistocene age, as well as remains found near the early Middle Pleistocene Nesorhinus butchery site dating to around 700,000 years ago. It is around 40% the size of mainland Stegodon species, with a body mass of around 1.3 tons. Though the temporal span of Stegodon on Luzon is not well constrained due to the limited number of finds, remains are suggested to span from at least around 1-0.8 million years ago to around 400,000 years ago. The extinct dwarf elephant Elephas beyeri is also known from the island, of an unknown (probably Pleistocene) age, and is estimated to have been about in shoulder height. Mindanao On the island of Mindanao, the dwarf Stegodon species Stegodon mindanensis was present at some point in the Pleistocene. It has an estimated body mass of around . Japan Some species of the stegodontid Stegolophodon from the Middle Miocene of Japan around 16 million years ago have been suggested to exhibit insular dwarfism, appearing to show size reduction over time, which would make them the oldest known proboscideans to do so. During the Pliocene-Early Pleistocene (from around 4-1 million years ago), a succession of endemic dwarf species of Stegodon, probably representing a single lineage derived from the mainland Chinese S. zdanskyi, lived in the Japanese archipelago.
In chronological succession these species are Stegodon miensis (4-3 million years ago), Stegodon protoaurorae (3-2 million years ago), and Stegodon aurorae (2-1 million years ago), which show a progressive size reduction through time, possibly as a result of the reducing land area of the Japanese archipelago. The latest and smallest species, S. aurorae, is estimated to be 25% the size of its mainland ancestor with a body mass of around . During the late Middle Pleistocene to Late Pleistocene, around 330,000-24,000 years ago, the Japanese archipelago was inhabited by the elephant species Palaeoloxodon naumanni. This species was only modestly dwarfed compared to its large continental ancestor, having a reconstructed shoulder height of for males and around for females. Wrangel Island During the Holocene, woolly mammoths (Mammuthus primigenius) lived on Wrangel Island in the Arctic Ocean, surviving thousands of years after the extinction of mainland woolly mammoths until around 2000 BCE, the most recent survival of any known mammoth population. Wrangel Island is thought to have become separated from the mainland by 12000 BCE. It was assumed that Wrangel Island mammoths ranged from in shoulder height, and they were for a time considered "dwarf mammoths". However, this classification has been re-evaluated, and since the Second International Mammoth Conference in 1999, these mammoths are no longer considered to be true "dwarf mammoths", as their size falls within the range of that of mainland Siberian woolly mammoths.
Biology and health sciences
Proboscidea
Animals
1209760
https://en.wikipedia.org/wiki/Natural%20product
Natural product
A natural product is a natural compound or substance produced by a living organism—that is, found in nature. In the broadest sense, natural products include any substance produced by life. Natural products can also be prepared by chemical synthesis (both semisynthesis and total synthesis) and have played a central role in the development of the field of organic chemistry by providing challenging synthetic targets. The term natural product has also been extended for commercial purposes to refer to cosmetics, dietary supplements, and foods produced from natural sources without added artificial ingredients. Within the field of organic chemistry, the definition of natural products is usually restricted to organic compounds isolated from natural sources that are produced by the pathways of primary or secondary metabolism. Within the field of medicinal chemistry, the definition is often further restricted to secondary metabolites. Secondary metabolites (or specialized metabolites) are not essential for survival, but nevertheless provide organisms that produce them an evolutionary advantage. Many secondary metabolites are cytotoxic and have been selected and optimized through evolution for use as "chemical warfare" agents against prey, predators, and competing organisms. Secondary or specialized metabolites are often unique to specific species, whereas primary metabolites are commonly found across multiple kingdoms. Secondary metabolites are marked by chemical complexity, which is why they are of such interest to chemists. Natural sources may lead to basic research on potential bioactive components for commercial development as lead compounds in drug discovery.
Although natural products have inspired numerous drugs, drug development from natural sources has received declining attention from pharmaceutical companies in the 21st century, partly due to unreliable access and supply, intellectual property, cost, and profit concerns, seasonal or environmental variability of composition, and loss of sources due to rising extinction rates. Despite this, natural products and their derivatives still accounted for about 10% of new drug approvals between 2017 and 2019. Classes The broadest definition of natural product is anything that is produced by life, and includes the likes of biotic materials (e.g. wood, silk), bio-based materials (e.g. bioplastics, cornstarch), bodily fluids (e.g. milk, plant exudates), and other natural materials (e.g. soil, coal). Natural products may be classified according to their biological function, biosynthetic pathway, or source. Depending on the sources, the number of known natural product molecules ranges between 300,000 and 400,000. Function Following Albrecht Kossel's original proposal in 1891, natural products are often divided into two major classes, the primary and secondary metabolites. Primary metabolites have an intrinsic function that is essential to the survival of the organism that produces them. Secondary metabolites in contrast have an extrinsic function that mainly affects other organisms. Secondary metabolites are not essential to survival but do increase the competitiveness of the organism within its environment. For instance, alkaloids like morphine and nicotine act as defense chemicals against herbivores, while flavonoids attract pollinators, and terpenes such as menthol serve to repel insects. Because of their ability to modulate biochemical and signal transduction pathways, some secondary metabolites have useful medicinal properties. Natural products, especially within the field of organic chemistry, are often defined as primary and secondary metabolites.
A more restrictive definition limiting natural products to secondary metabolites is commonly used within the fields of medicinal chemistry and pharmacognosy. Primary metabolites Primary metabolites, as defined by Kossel, are essential components of basic metabolic pathways required for life. They are associated with fundamental cellular functions such as nutrient assimilation, energy production, and growth and development. These metabolites have a wide distribution across many phyla and often span more than one kingdom. Primary metabolites include the basic building blocks of life: carbohydrates, lipids, amino acids, and nucleic acids. Primary metabolites involved in energy production include enzymes essential for respiratory and photosynthetic processes. These enzymes are composed of amino acids and often require non-peptidic cofactors for proper function. The basic structures of cells and organisms are also built from primary metabolites, including components such as cell membranes (e.g., phospholipids), cell walls (e.g., peptidoglycan, chitin), and cytoskeletons (proteins). Enzymatic cofactors that are primary metabolites include several members of the vitamin B family. For instance, Vitamin B1 (thiamine diphosphate), synthesized from 1-deoxy-D-xylulose 5-phosphate, serves as a coenzyme for enzymes such as pyruvate dehydrogenase, 2-oxoglutarate dehydrogenase, and transketolase—all involved in carbohydrate metabolism. Vitamin B2 (riboflavin), derived from ribulose 5-phosphate and guanosine triphosphate, is a precursor to FMN and FAD, which are crucial for various redox reactions. Vitamin B3 (nicotinic acid or niacin), synthesized from tryptophan, is an essential part of the coenzymes NAD and NADP, necessary for electron transport in the Krebs cycle, oxidative phosphorylation, and other redox processes. 
Vitamin B5 (pantothenic acid), derived from α,β-dihydroxyisovalerate (a precursor to valine) and aspartic acid, is a component of coenzyme A, which plays a vital role in carbohydrate and amino acid metabolism, as well as fatty acid biosynthesis. Vitamin B6 (pyridoxol, pyridoxal, and pyridoxamine, originating from erythrose 4-phosphate) functions as pyridoxal 5′-phosphate and acts as a cofactor for enzymes, particularly transaminases, involved in amino acid metabolism. Vitamin B12 (cobalamins) contains a corrin ring structure, similar to porphyrin, and serves as a coenzyme in fatty acid catabolism and methionine synthesis. Other primary metabolite vitamins include retinol (vitamin A), synthesized in animals from plant-derived carotenoids via the mevalonate pathway, and ascorbic acid (vitamin C), which is synthesized from glucose in the liver of animals, though not in humans. DNA and RNA, which store and transmit genetic information, are synthesized from primary metabolites, specifically nucleotides. First messengers are signaling molecules that regulate metabolism and cellular differentiation. These include hormones and growth factors composed of peptides, biogenic amines, steroid hormones, auxins, and gibberellins. These first messengers interact with cellular receptors, which are protein-based, and trigger the activation of second messengers to relay the extracellular signal to intracellular targets. Second messengers often include primary metabolites such as cyclic nucleotides and diacylglycerol. Secondary metabolites Secondary metabolites, in contrast to primary metabolites, are dispensable and not absolutely required for survival. Furthermore, secondary metabolites typically have a narrow species distribution. Secondary metabolites have a broad range of functions. 
These include pheromones that act as social signaling molecules with other individuals of the same species, communication molecules that attract and activate symbiotic organisms, agents that solubilize and transport nutrients (siderophores etc.), and competitive weapons (repellents, venoms, toxins etc.) that are used against competitors, prey, and predators. For many other secondary metabolites, the function is unknown. One hypothesis is that they confer a competitive advantage to the organism that produces them. An alternative view is that, in analogy to the immune system, these secondary metabolites have no specific function, but having the machinery in place to produce these diverse chemical structures is important, and a few secondary metabolites are therefore produced and selected for. General structural classes of secondary metabolites include alkaloids, phenylpropanoids, polyketides, and terpenoids. Biosynthesis The biosynthetic pathways leading to the major classes of natural products are described below. Carbohydrates Carbohydrates are organic molecules essential for energy storage, structural support, and various biological processes in living organisms. They are produced through photosynthesis in plants or gluconeogenesis in animals and can be converted into larger polysaccharides: Photosynthesis or gluconeogenesis → monosaccharides → polysaccharides (cellulose, chitin, glycogen, etc.) Carbohydrates serve as a primary energy source for most life forms. Additionally, polysaccharides derived from simpler sugars are vital structural components, forming the cell walls of bacteria and plants. During photosynthesis, plants initially produce glyceraldehyde 3-phosphate, a three-carbon sugar (triose). This can be converted into glucose (a six-carbon sugar) or various pentoses (five-carbon sugars) through the Calvin cycle. In animals, three-carbon precursors like lactate or glycerol are converted into pyruvate, which can then be synthesized into carbohydrates in the liver. 
Fatty acids and polyketides Fatty acids and polyketides are synthesized via the acetate pathway, which starts from basic building blocks derived from sugars: Sugars → acetate pathway → fatty acids and polyketides During glycolysis, sugars are broken down into acetyl-CoA. In an ATP-dependent enzymatic reaction, acetyl-CoA is carboxylated to form malonyl-CoA. Acetyl-CoA and malonyl-CoA then undergo a Claisen condensation, releasing carbon dioxide to form acetoacetyl-CoA, which is used by the mevalonate pathway to produce steroids. In fatty acid synthesis, one molecule of acetyl-CoA (the "starter unit") and several molecules of malonyl-CoA (the "extender units") are condensed by fatty acid synthase. After each round of elongation, the keto group is reduced, the intermediate alcohol is dehydrated, and the resulting enoyl-CoAs are reduced to acyl-CoAs. Fatty acids are essential components of lipid bilayers that form cell membranes and serve as energy storage in the form of fat in animals. The plant-derived fatty acid linoleic acid is converted in animals through elongation and desaturation into arachidonic acid, which is then transformed into various eicosanoids, including leukotrienes, prostaglandins, and thromboxanes. These eicosanoids act as signaling molecules, playing key roles in inflammation and immune responses. Alternatively, the intermediates from additional condensation reactions are left unreduced to generate poly-β-keto chains, which are subsequently converted into various polyketides. The polyketide class of natural products has diverse structures and functions and includes important compounds such as macrolide antibiotics. 
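The chain-extension arithmetic described above can be sketched in a short illustrative snippet: one two-carbon acetyl-CoA starter unit is extended by malonyl-CoA units, each of which contributes two carbons after the third is lost as CO2 in the Claisen condensation. The function name is ours, chosen for illustration; it is not a standard API.

```python
def fatty_acid_carbons(malonyl_extensions: int) -> int:
    """Carbon count of a saturated fatty acid assembled from one
    acetyl-CoA starter (2 carbons) plus n malonyl-CoA extender
    units (each adds 2 carbons; the third is released as CO2)."""
    starter_carbons = 2
    carbons_per_extension = 2
    return starter_carbons + carbons_per_extension * malonyl_extensions

# Palmitic acid (C16) requires 7 rounds of malonyl-CoA extension:
print(fatty_acid_carbons(7))  # 16
# Stearic acid (C18) requires 8:
print(fatty_acid_carbons(8))  # 18
```

This two-carbons-per-round logic is also why the common fatty acids have even-numbered carbon chains.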
Aromatic amino acids and phenylpropanoids The shikimate pathway is a key metabolic route responsible for the production of aromatic amino acids and their derivatives in plants, fungi, bacteria, and some protozoans: Shikimate pathway → aromatic amino acids and phenylpropanoids The shikimate pathway leads to the biosynthesis of aromatic amino acids (AAAs) — phenylalanine, tyrosine, and tryptophan. This pathway is vital as it connects primary metabolism to specialized metabolic processes, directing an estimated 20-50% of all fixed carbon through its reactions. It begins with the condensation of phosphoenolpyruvate (PEP) and erythrose-4-phosphate (E4P), leading through several enzymatic steps to form chorismate, the precursor for all three AAAs. From chorismate, biosynthesis branches out to produce the individual AAAs. In plants, unlike in bacteria, the production of phenylalanine and tyrosine typically occurs via the intermediate arogenate. Phenylalanine serves as the starting point for the phenylpropanoid pathway, which leads to a diverse array of secondary metabolites. Beyond protein synthesis, AAAs and their derivatives have crucial roles in plant physiology, including pigment production, hormone synthesis, cell wall formation, and defense against various stresses. Because animals lack this pathway and must obtain aromatic amino acids from their diet, the shikimate pathway has also become a target for herbicides, most notably glyphosate, which inhibits one of its key enzymes. Terpenoids and steroids The biosynthesis of terpenoids and steroids involves two primary pathways, which produce essential building blocks for these compounds: Mevalonate pathway and methylerythritol phosphate pathway → terpenoids and steroids The mevalonate (MVA) and methylerythritol phosphate (MEP) pathways produce the five-carbon units isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP), which are the building blocks for all terpenoids. 
The MVA pathway, discovered in the 1950s, functions in eukaryotes, some bacteria, and plants. It converts acetyl-CoA to IPP via HMG-CoA and mevalonate, and is essential for steroid biosynthesis. Statins, which lower cholesterol, work by inhibiting HMG-CoA reductase in this pathway. The MEP pathway, found in bacteria, some parasites, and plant chloroplasts, starts with pyruvate and glyceraldehyde 3-phosphate to produce IPP and DMAPP. This pathway is crucial for the synthesis of plastid terpenoids like carotenoids and chlorophylls. Both pathways converge at IPP and DMAPP, which combine to form longer prenyl diphosphates like geranyl (C10), farnesyl (C15), and geranylgeranyl (C20). These compounds serve as precursors for a wide range of terpenoids, including monoterpenes, sesquiterpenes, and triterpenes. The diversity of terpenoids arises from modifications such as cyclization, oxidation, and glycosylation, enabling them to play roles in plant defense, pollinator attraction, and signaling. Steroids, primarily synthesized via the MVA pathway, are derived from farnesyl diphosphate through intermediates like squalene and lanosterol, which are precursors to cholesterol and other steroid molecules. Alkaloids Alkaloids are nitrogen-containing organic compounds produced by plants through complex biosynthetic pathways, starting from amino acids. The biosynthesis of alkaloids from amino acids is essential for producing many biologically active compounds in plants. These compounds range from simple cycloaliphatic amines to complex polycyclic nitrogen heterocycles. Alkaloid biosynthesis generally follows four key steps: (i) synthesis of an amine precursor, (ii) synthesis of an aldehyde precursor, (iii) formation of an iminium cation, and (iv) a Mannich-like reaction. These steps form the core structure of many alkaloids and represent the initial committed steps in their production. 
Amino acids such as tryptophan, tyrosine, lysine, arginine, and ornithine serve as essential precursors. Their accumulation is facilitated by mechanisms like increased gene expression, gene duplication, or the evolution of enzymes with broader substrate specificities. The biosynthesis of the tropane alkaloid cocaine follows this general pathway. A key reaction in alkaloid biosynthesis is the Pictet-Spengler reaction, which is crucial for forming the β-carboline structure found in many alkaloids. This reaction involves the condensation of an aldehyde with an amine, as seen in the biosynthesis of strictosidine, a precursor to numerous monoterpene indole alkaloids. Oxidoreductases, including cytochrome P450s and flavin-containing monooxygenases, play a vital role in modifying the core alkaloid structures through oxidation, contributing to their structural diversity and bioactivity. For instance, in the biosynthesis of morphine, oxidative coupling is essential for forming the complex polycyclic structures typical of these alkaloids. The biosynthetic pathways of alkaloids involve numerous enzymatic steps. For example, tropane alkaloids, derived from ornithine, undergo processes such as decarboxylation, oxidation, and cyclization. Similarly, the biosynthesis of isoquinoline alkaloids from tyrosine involves complex transformations, including the formation of (S)-reticuline, a key intermediate in the pathway. Peptides, proteins, and other amino acid derivatives Biosynthesis of peptides, proteins, and other amino acid derivatives assembles amino acids into biologically active molecules, producing compounds like peptide hormones, modified peptides, and plant-derived substances. Peptides and proteins are synthesized through protein synthesis or translation, a process involving transcription of DNA into messenger RNA (mRNA). The mRNA serves as a template for protein assembly on ribosomes. 
During translation, transfer RNA (tRNA) carries specific amino acids to match with mRNA codons, forming peptide bonds to create the protein chain. Peptide hormones, such as oxytocin and vasopressin, are short amino acid chains that regulate physiological processes, including social bonding and water retention. Modified peptides include antibiotics like penicillins and cephalosporins, characterized by their β-lactam ring structure, which is essential for their antibacterial activity. These compounds undergo complex enzymatic modifications during biosynthesis. Cyanogenic glycosides are amino acid derivatives in plants that can release hydrogen cyanide when tissues are damaged, serving as a defense mechanism. Their biosynthesis involves converting amino acids into cyanohydrins, which are then glycosylated. Glucosinolates are sulfur-containing compounds in cruciferous vegetables like broccoli and mustard. Their biosynthesis starts with amino acids such as methionine or tryptophan and involves adding sulfur and glucose groups. When tissues are damaged, glucosinolates break down into isothiocyanates, which contribute to the pungent flavors of these vegetables and offer potential health benefits. Sources Natural products may be extracted from the cells, tissues, and secretions of microorganisms, plants and animals. A crude (unfractionated) extract from any one of these sources will contain a range of structurally diverse and often novel chemical compounds. Chemical diversity in nature is based on biological diversity, so researchers collect samples from around the world to analyze and evaluate in drug discovery screens or bioassays. This effort to search for biologically active natural products is known as bioprospecting. Pharmacognosy provides the tools to detect, isolate and identify bioactive natural products that could be developed for medicinal use. 
When an "active principle" is isolated from a traditional medicine or other biological material, this is known as a "hit". Scientific and legal work is then performed to validate the hit (e.g. elucidation of mechanism of action, confirmation that there is no intellectual property conflict). This is followed by the hit-to-lead stage of drug discovery, in which derivatives of the active compound are produced in an attempt to improve its potency and safety. In this and related ways, modern medicines can be developed directly from natural sources. Although traditional medicines and other biological material are considered an excellent source of novel compounds, the extraction and isolation of these compounds can be a slow, expensive, and inefficient process. For large-scale manufacture, therefore, attempts may be made to produce the new compound by total synthesis or semisynthesis. Because natural products are generally secondary metabolites with complex chemical structures, their total/semisynthesis is not always commercially viable. In these cases, efforts can be made to design simpler analogues with comparable potency and safety that are amenable to total/semisynthesis. Prokaryotic Bacteria The serendipitous discovery and subsequent clinical success of penicillin prompted a large-scale search for other environmental microorganisms that might produce anti-infective natural products. Soil and water samples were collected from all over the world, leading to the discovery of streptomycin (derived from Streptomyces griseus), and the realization that bacteria, not just fungi, represent an important source of pharmacologically active natural products. This, in turn, led to the development of an impressive arsenal of antibacterial and antifungal agents including amphotericin B, chloramphenicol, daptomycin and tetracycline (from Streptomyces spp.), the polymyxins (from Paenibacillus polymyxa), and the rifamycins (from Amycolatopsis rifamycinica). 
Antiparasitic and antiviral drugs have similarly been derived from bacterial metabolites. Although most of the drugs derived from bacteria are employed as anti-infectives, some have found use in other fields of medicine. Botulinum toxin (from Clostridium botulinum) and bleomycin (from Streptomyces verticillus) are two examples. Botulinum toxin, the neurotoxin responsible for botulism, can be injected into specific muscles (such as those controlling the eyelid) to prevent muscle spasm. Also, the glycopeptide bleomycin is used for the treatment of several cancers including Hodgkin's lymphoma, head and neck cancer, and testicular cancer. Newer trends in the field include the metabolic profiling and isolation of natural products from novel bacterial species present in underexplored environments. Examples include symbionts or endophytes from tropical environments, subterranean bacteria found deep underground via mining/drilling, and marine bacteria. Archaea Because many Archaea have adapted to life in extreme environments such as polar regions, hot springs, acidic springs, alkaline springs, salt lakes, and the high pressure of deep ocean water, they possess enzymes that are functional under quite unusual conditions. These enzymes are of potential use in the food, chemical, and pharmaceutical industries, where biotechnological processes frequently involve high temperatures, extremes of pH, high salt concentrations, and/or high pressure. Examples of enzymes identified to date include amylases, pullulanases, cyclodextrin glycosyltransferases, cellulases, xylanases, chitinases, proteases, alcohol dehydrogenases, and esterases. Archaea also represent a source of novel chemical compounds, for example isoprenyl glycerol ethers 1 and 2 from Thermococcus S557 and Methanocaldococcus jannaschii, respectively. 
Eukaryotic Fungi Several anti-infective medications have been derived from fungi, including penicillin and the cephalosporins (antibacterial drugs from Penicillium rubens and Cephalosporium acremonium, respectively) and griseofulvin (an antifungal drug from Penicillium griseofulvum). Other medicinally useful fungal metabolites include lovastatin (from Pleurotus ostreatus), which became a lead for a series of drugs that lower cholesterol levels; cyclosporin (from Tolypocladium inflatum), which is used to suppress the immune response after organ transplant operations; and ergometrine (from Claviceps spp.), which acts as a vasoconstrictor and is used to prevent bleeding after childbirth. Asperlicin (from Aspergillus alliaceus) is another example. Asperlicin is a novel antagonist of cholecystokinin, a neurotransmitter thought to be involved in panic attacks, and could potentially be used to treat anxiety. Plants Plants are a major source of complex and highly structurally diverse chemical compounds (phytochemicals), with this structural diversity attributed in part to the natural selection of organisms producing potent compounds to deter herbivory (feeding deterrents). Major classes of phytochemicals include phenols, polyphenols, tannins, terpenes, and alkaloids. Though the number of plants that have been extensively studied is relatively small, many pharmacologically active natural products have already been identified. Clinically useful examples include the anticancer agents paclitaxel and omacetaxine mepesuccinate (from Taxus brevifolia and Cephalotaxus harringtonii, respectively), the antimalarial agent artemisinin (from Artemisia annua), and the acetylcholinesterase inhibitor galantamine (from Galanthus spp.), used to treat Alzheimer's disease. Other plant-derived drugs, used medicinally and/or recreationally, include morphine, cocaine, quinine, tubocurarine, muscarine, and nicotine. Animals Animals also represent a source of bioactive natural products. 
In particular, venomous animals such as snakes, spiders, scorpions, caterpillars, bees, wasps, centipedes, ants, toads, and frogs have attracted much attention. This is because venom constituents (peptides, enzymes, nucleotides, lipids, biogenic amines etc.) often have very specific interactions with a macromolecular target in the body (e.g. α-bungarotoxin from kraits). As with plant feeding deterrents, this biological activity is attributed to natural selection, with organisms capable of killing or paralyzing their prey and/or defending themselves against predators being more likely to survive and reproduce. Because of these specific chemical-target interactions, venom constituents have proved important tools for studying receptors, ion channels, and enzymes. In some cases, they have also served as leads in the development of novel drugs. For example, teprotide, a peptide isolated from the venom of the Brazilian pit viper Bothrops jararaca, was a lead in the development of the antihypertensive agents cilazapril and captopril. Also, echistatin, a disintegrin from the venom of the saw-scaled viper Echis carinatus, was a lead in the development of the antiplatelet drug tirofiban. In addition to the terrestrial animals and amphibians described above, many marine animals have been examined for pharmacologically active natural products, with corals, sponges, tunicates, sea snails, and bryozoans yielding chemicals with interesting analgesic, antiviral, and anticancer activities. Two examples developed for clinical use include ω-conotoxin (from the marine snail Conus magus) and ecteinascidin 743 (from the tunicate Ecteinascidia turbinata). The former, ω-conotoxin, is used to relieve severe and chronic pain, while the latter, ecteinascidin 743, is used to treat metastatic soft tissue sarcoma. 
Other natural products derived from marine animals and under investigation as possible therapies include the antitumour agents discodermolide (from the sponge Discodermia dissoluta), eleutherobin (from the coral Erythropodium caribaeorum), and the bryostatins (from the bryozoan Bugula neritina). Medical uses Natural products sometimes have pharmacological activity that can be of therapeutic benefit in treating diseases. Moreover, synthetic analogs of natural products with improved potency and safety can be prepared, and therefore, natural products are often used as starting points for drug discovery. Natural product constituents have inspired numerous drug discovery efforts that eventually yielded approved new drugs. Modern natural product-derived drugs Many prescribed drugs have been either directly derived from or inspired by natural products. Approximately 35% of the annual global market of medicine is either from natural products or related drugs. This breaks down as 25% from plants, 13% from microorganisms, and 3% from animal sources. Between 1981 and 2019, the FDA approved 1,881 new chemical entities, of which 65 (3.5%) were unaltered natural products, 99 (5.3%) were defined mixture botanical drugs, 178 (9.5%) were natural product derivatives, and 164 (8.7%) were synthetic compounds containing natural product pharmacophores. Altogether, this accounts for 506 (26.9%) of all new approved drugs. Additionally, natural products and their derivatives often show higher success rates in later clinical trial phases and may have lower toxicity profiles compared to synthetic compounds. Some of the oldest natural product-based drugs are analgesics. The bark of the willow tree has been known since antiquity to have pain-relieving properties due to the natural product salicin, which in turn may be hydrolyzed into salicylic acid. A synthetic derivative, acetylsalicylic acid, better known as aspirin, is a widely used pain reliever. 
Its mechanism of action is inhibition of the cyclooxygenase (COX) enzyme. Another notable example is opium, extracted from the latex of Papaver somniferum (a flowering poppy plant). The most potent narcotic component of opium is the alkaloid morphine, which acts as an opioid receptor agonist. The N-type calcium channel blocker ziconotide is an analgesic based on a cyclic peptide cone snail toxin (ω-conotoxin MVIIA) from the species Conus magus. Numerous anti-infectives are based on natural products. The first antibiotic to be discovered, penicillin, was isolated from the mold Penicillium. Penicillin and related beta-lactams work by inhibiting the DD-transpeptidase enzyme that is required by bacteria to cross-link peptidoglycan to form the cell wall. Several natural product drugs target tubulin, which is a component of the cytoskeleton. These include the tubulin polymerization inhibitor colchicine, isolated from Colchicum autumnale (the autumn crocus), which is used to treat gout. Colchicine is biosynthesized from the amino acids phenylalanine and tyrosine. Paclitaxel, in contrast, is a tubulin polymerization stabilizer and is used as a chemotherapeutic drug. Paclitaxel is the terpenoid natural product taxol, which is isolated from Taxus brevifolia (the Pacific yew tree). A class of drugs widely used to lower cholesterol are the HMG-CoA reductase inhibitors, for example atorvastatin. These were developed from mevastatin, a polyketide produced by the fungus Penicillium citrinum. Finally, a number of natural product drugs are used to treat hypertension and congestive heart failure. These include the angiotensin-converting enzyme inhibitor captopril. Captopril is based on the peptidic bradykinin-potentiating factor isolated from the venom of the Brazilian pit viper (Bothrops jararaca). 
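The 1981-2019 FDA approval breakdown quoted earlier in this section is internally consistent; the short script below simply re-derives the percentages from the counts given in the text, as a quick arithmetic check.

```python
# Natural product-related categories among the 1,881 new chemical
# entities approved by the FDA between 1981 and 2019 (counts as
# quoted in the text above).
total_nces = 1881
categories = {
    "unaltered natural products": 65,
    "defined-mixture botanical drugs": 99,
    "natural product derivatives": 178,
    "synthetics with natural product pharmacophores": 164,
}

subtotal = sum(categories.values())             # 506
share = round(100 * subtotal / total_nces, 1)   # 26.9 (%)

for name, count in categories.items():
    print(f"{name}: {count} ({100 * count / total_nces:.1f}%)")
print(f"natural product-related total: {subtotal} ({share}%)")
```

Each per-category percentage (3.5%, 5.3%, 9.5%, 8.7%) and the combined 506 (26.9%) figure match the values stated in the text.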
Limiting and enabling factors Numerous challenges limit the use of natural products for drug discovery, resulting in a preference among pharmaceutical companies in the 21st century for dedicating discovery efforts to high-throughput screening of pure synthetic compounds, which offer shorter timelines to refinement. Natural product sources are often unreliable to access and supply, have a high probability of duplication, inherently create intellectual property concerns about patent protection, vary in composition due to sourcing season or environment, and are susceptible to rising extinction rates. The biological resource for drug discovery from natural products remains abundant, with only small percentages of microorganisms, plant species, and insects assessed for bioactivity. Enormous numbers of bacteria and marine microorganisms remain unexamined. Around 2008, metagenomics was proposed as a means to examine genes and their function in soil microbes, but most pharmaceutical firms have not exploited this resource fully, choosing instead to develop "diversity-oriented synthesis" from libraries of known drugs or natural sources for lead compounds with higher potential for bioactivity. Isolation and purification All natural products begin as mixtures with other compounds from the natural source, often very complex mixtures, from which the product of interest must be isolated and purified. The isolation of a natural product refers to obtaining sufficient quantities of pure chemical matter for chemical structure elucidation, derivatization/degradation chemistry, biological testing, and other research needs. Structure determination refers to methods applied to determine the chemical structure of an isolated, pure natural product. For instance, the chemical structure of penicillin was determined by Dorothy Crowfoot Hodgkin in 1945, work for which she later received a Nobel Prize in Chemistry (1964). 
Modern structure determination often involves a combination of advanced analytical techniques. Nuclear magnetic resonance (NMR) spectroscopy and X-ray crystallography are commonly used as primary tools for structure elucidation. High-resolution tandem mass spectrometry (MS/MS) also plays a crucial role, providing information on molecular formula and fragmentation patterns. For complex structures, computational methods are increasingly employed to assist in structure determination. This may include computer-assisted structure elucidation (CASE) platforms and in silico fragmentation prediction tools. Determination of the absolute configuration often relies on a combination of NMR data (coupling constants and the nuclear Overhauser effect (NOE)), chemical derivatization methods (e.g., Mosher's ester analysis), and spectroscopic techniques such as vibrational circular dichroism (VCD) and optical rotatory dispersion (ORD). In cases where traditional methods are insufficient, especially for novel compounds with unprecedented molecular skeletons, advanced computational chemistry approaches are used to predict and compare spectral data, helping to elucidate the complete structure including stereochemistry. Synthesis Many natural products have complex structures. The complexity is determined by factors like molecular mass, arrangement of substructures (e.g., functional groups, rings), number and density of these groups, their stability, stereochemical elements, and physical properties, as well as the novelty of the structure and prior synthetic efforts. Less complex natural products can often be cost-effectively synthesized from simpler chemical ingredients through total synthesis. However, not all natural products are suitable for total synthesis. The most complex ones are often impractical to synthesize on a large scale due to high costs. 
In these cases, isolation from natural sources may be sufficient if it provides adequate quantities, as seen with drugs like penicillin, morphine, and paclitaxel, which were obtained at commercial scales without significant synthetic chemistry. Semisynthesis Isolating a natural product from its source can be costly in terms of time and materials, and may impact the availability of the natural resource or have ecological consequences. For example, it is estimated that harvesting enough paclitaxel for a single dose of therapy would require the bark of an entire yew tree (Taxus brevifolia). Additionally, the number of structural analogues available for structure–activity relationship (SAR) analysis is limited by the biology of the organism, and thus beyond experimental control. When the desired product is difficult to obtain or modify to create analogs, a middle-to-late stage biosynthetic precursor or analog can sometimes be used to produce the final target. This approach, called semisynthesis or partial synthesis, involves extracting a biosynthetic intermediate and converting it into the final product using conventional chemical synthesis techniques. This strategy offers two advantages. First, the intermediate may be easier to extract and yield higher amounts than the final product. For instance, paclitaxel can be produced by extracting 10-deacetylbaccatin III from T. brevifolia needles, followed by a four-step synthesis. Second, the semisynthetic process allows for the creation of analogues of the final product, as seen in the development of newer generation semisynthetic penicillins. Total synthesis In general, the total synthesis of natural products is a non-commercial research activity, aimed at deeper understanding of the synthesis of particular natural product frameworks, and the development of fundamental new synthetic methods. Even so, it is of tremendous commercial and societal importance. 
By providing challenging synthetic targets, for example, it has played a central role in the development of the field of organic chemistry. Prior to the development of analytical chemistry methods in the twentieth century, the structures of natural products were affirmed by total synthesis (so-called "structure proof by synthesis"). Early efforts in natural products synthesis targeted complex substances such as cobalamin (vitamin B12), an essential cofactor in cellular metabolism. Biomimetic synthesis Biomimetic synthesis is a branch of organic chemistry which aims at designing and preparing natural product compounds in the laboratory using the biosynthetic pathways as a blueprint. This method is based on the mechanisms used by living organisms for the synthesis of various compounds, which is usually done in a stereoselective and regioselective manner. Biomimetic synthetic strategies have emerged due to their ability to simplify the synthesis of complex structures, especially those containing unusual moieties like spiro-ring systems or quaternary carbon atoms. These approaches mainly involve reactions such as Diels–Alder dimerizations, photocycloadditions, cyclizations, and oxidative and radical reactions, which can be used to efficiently construct complex molecular frameworks. Thus, by mimicking biosynthetic processes, chemists have been able to design more effective and economical processes for the synthesis of natural products that are of interest in drug discovery and chemical biology. Symmetry Examination of dimerized and trimerized natural products has shown that an element of bilateral symmetry is often present. Bilateral symmetry refers to a molecule or system that contains a C2, Cs, or C2v point group identity. C2 symmetry tends to be much more abundant than other types of bilateral symmetry. 
This finding sheds light on how these compounds might be mechanistically created, as well as providing insight into the thermodynamic properties that make these compounds more favorable. Density functional theory (DFT), the Hartree–Fock method, and semiempirical calculations also show some favorability for dimerization in natural products due to the release of more energy per bond than in the equivalent trimer or tetramer. This is proposed to be due to steric hindrance at the core of the molecule, as most natural products dimerize and trimerize in a head-to-head fashion rather than head-to-tail. Research and teaching Research and teaching activities related to natural products fall into a number of diverse academic areas, including organic chemistry, medicinal chemistry, pharmacognosy, ethnobotany, traditional medicine, and ethnopharmacology. Other biological areas include chemical biology, chemical ecology, chemogenomics, systems biology, molecular modeling, chemometrics, and chemoinformatics. Chemistry Natural products chemistry is a distinct area of chemical research which was important in the development and history of chemistry. Isolating and identifying natural products has been important to source substances for early preclinical drug discovery research, to understand traditional medicine and ethnopharmacology, and to find pharmacologically useful areas of chemical space. To achieve this, many technological advances have been made, such as the evolution of technology associated with chemical separations, and the development of modern methods in chemical structure determination such as NMR. Early attempts to understand the biosynthesis of natural products saw chemists employ first radiolabelling and, more recently, stable isotope labeling combined with NMR experiments. In addition, natural products are prepared by organic synthesis to provide confirmation of their structure, or to give access to larger quantities of natural products of interest. 
In this process, the structures of some natural products have been revised, and the challenge of synthesising natural products has led to the development of new synthetic methodology, synthetic strategy, and tactics. In this regard, natural products play a central role in the training of new synthetic organic chemists, and are a principal motivation in the development of new variants of old chemical reactions (e.g., the Evans aldol reaction), as well as the discovery of completely new chemical reactions (e.g., the Woodward cis-hydroxylation, Sharpless epoxidation, and Suzuki–Miyaura cross-coupling reactions). History Foundations of organic and natural product chemistry The concept of natural products dates back to the early 19th century, when the foundations of organic chemistry were laid. Organic chemistry was regarded at that time as the chemistry of substances that plants and animals are composed of. It was a relatively complex form of chemistry and stood in stark contrast to inorganic chemistry, the principles of which had been established in 1789 by the Frenchman Antoine Lavoisier in his work Traité Élémentaire de Chimie. Isolation Lavoisier showed at the end of the 18th century that organic substances consisted of a limited number of elements: primarily carbon and hydrogen, supplemented by oxygen and nitrogen. Chemists quickly focused on the isolation of these substances, often because they had an interesting pharmacological activity. Plants were the main source of such compounds, especially alkaloids and glycosides. It had long been known that opium, a sticky mixture of alkaloids (including codeine, morphine, noscapine, thebaine, and papaverine) from the opium poppy (Papaver somniferum), possessed narcotic and at the same time mind-altering properties. 
By 1805, morphine had already been isolated by the German chemist Friedrich Sertürner, and in the 1870s it was discovered that boiling morphine with acetic anhydride produced a substance with a strong pain-suppressive effect: heroin. In 1815, Eugène Chevreul isolated cholesterol, a crystalline substance belonging to the class of steroids, from animal tissue, and in 1819 strychnine, an alkaloid, was isolated. Synthesis A second important step was the synthesis of organic compounds. While the synthesis of inorganic substances had been known for a long time, creating organic substances was a major challenge. In 1827, the Swedish chemist Jöns Jacob Berzelius argued that a vital force or life force was essential for synthesizing organic compounds. This idea, known as vitalism, had many supporters well into the 19th century, even after the introduction of atomic theory. Vitalism also aligned with traditional medicine, which often viewed disease as a result of imbalances in vital energies that distinguish life from nonlife. The first significant challenge to vitalism came in 1828 when German chemist Friedrich Wöhler synthesized urea, a natural product found in urine, by heating ammonium cyanate, an inorganic substance: NH4OCN → CO(NH2)2. This reaction demonstrated that a life force was not needed to create organic substances. Initially, this idea faced skepticism, but it gained acceptance 20 years later when Adolph Wilhelm Hermann Kolbe synthesized acetic acid from carbon disulfide. Since then, organic chemistry has developed into a distinct field focused on studying carbon-containing compounds, which were found to be prevalent in nature. Structural theories The third key development was the structure elucidation of organic substances. While the elemental composition of pure organic compounds could be determined accurately, their molecular structures remained unclear. 
This issue became evident in a dispute between Friedrich Wöhler and Justus von Liebig, who studied silver salts with identical compositions but different properties. Wöhler examined silver cyanate, a harmless compound, while von Liebig investigated the explosive silver fulminate. Elemental analysis showed both salts had the same amounts of silver, carbon, oxygen, and nitrogen, yet their properties differed, contradicting the prevailing view that composition alone determined properties. This discrepancy was explained by Berzelius's theory of isomers, which proposed that not only the number and type of elements but also the arrangement of atoms affects a compound's properties. This insight led to the development of structural theories, such as the radical theory of Jean-Baptiste Dumas and the substitution theory of Auguste Laurent. A definitive structure theory was proposed in 1858 by August Kekulé, who suggested that carbon is tetravalent and can bond to itself, forming chains found in natural products. Expanding the concept The concept of natural product, which was initially based on organic compounds that could be isolated from plants, was extended to include animal material in the middle of the 19th century by the German Justus von Liebig. In 1884, Hermann Emil Fischer turned his attention to the study of carbohydrates and purines, work for which he was awarded the Nobel Prize in 1902. He also succeeded in synthesizing a variety of carbohydrates in the laboratory, including glucose and mannose. After the discovery of penicillin by Alexander Fleming in 1928, fungi and other micro-organisms were added to the arsenal of sources of natural products. Milestones By the 1930s, several major classes of natural products had been identified and studied extensively. Key milestones in the field of natural product research include: Terpenes: First systematically studied by Otto Wallach (Nobel Prize 1910) and later by Leopold Ružička (Nobel Prize 1939). 
Porphyrin-based dyes: Including chlorophyll and heme, investigated by Richard Willstätter (Nobel Prize 1915) and Hans Fischer (Nobel Prize 1930). These tetrapyrrole compounds play essential roles in various biological processes (including photosynthesis, respiration, electron transfer, and catalysis) and have been the subject of extensive research.
Steroids: Researched by Heinrich Otto Wieland (Nobel Prize 1927) and Adolf Windaus (Nobel Prize 1928). Their work contributed significantly to our understanding of sterol biosynthesis and structure.
Carotenoids: Studied by Paul Karrer (Nobel Prize 1937). These pigments are important for their antioxidant properties and roles in photosynthesis and vision.
Vitamins: Investigated by numerous scientists, including Paul Karrer, Robert R. Williams, Adolf Windaus (Nobel Prize 1928), Norman Haworth (Nobel Prize 1937), Richard Kuhn (Nobel Prize 1938), and Albert Szent-Györgyi (Nobel Prize 1937). The discovery and characterization of vitamins revolutionized our understanding of nutrition and health.
Steroid hormones: Studied by Adolf Butenandt (Nobel Prize 1939) and Edward Calvin Kendall (Nobel Prize 1950). Their work on steroid hormones paved the way for modern endocrinology.
Alkaloids and anthocyanins: Researched by Robert Robinson (Nobel Prize 1947) and others. These compounds, particularly alkaloids, have been crucial in the development of many pharmaceuticals.
Polypeptide hormones: Investigated by Vincent du Vigneaud (Nobel Prize 1955), who completed the first total syntheses of the natural polypeptides oxytocin and vasopressin.
Total synthesis of natural products: Robert Burns Woodward was awarded a Nobel Prize in 1965 for synthesizing compounds including quinine, cholesterol, cortisone, strychnine, reserpine, chlorophyll, and vitamin B12. Elias James Corey received a Nobel Prize in 1990 for similar achievements, such as the synthesis of gibberellic acid, ginkgolide, and prostaglandins. 
These pioneering studies laid the foundation for our understanding of natural product chemistry and biochemistry, leading to numerous Nobel Prizes in Chemistry and Physiology or Medicine. The field of natural products has continued to evolve, with recent research focusing on the evolutionary and ecological roles of these compounds.
https://en.wikipedia.org/wiki/Security%20token
Security token
A security token is a peripheral device used to gain access to an electronically restricted resource. The token is used in addition to, or in place of, a password. Examples of security tokens include wireless key cards used to open locked doors and banking tokens used as digital authenticators for signing in to online banking or signing transactions such as wire transfers. Security tokens can be used to store information such as passwords, cryptographic keys used to generate digital signatures, or biometric data (such as fingerprints). Some designs incorporate tamper-resistant packaging, while others may include small keypads to allow entry of a PIN or a simple button to start a generation routine, with some display capability to show a generated key number. Connected tokens utilize a variety of interfaces including USB, near-field communication (NFC), radio-frequency identification (RFID), and Bluetooth. Some tokens have audio capabilities designed for those who are vision-impaired. Password types All tokens contain some secret information used to prove identity. There are four different ways in which this information can be used:
Static password token: The device contains a password that is physically hidden (not visible to the possessor), but is transmitted for each authentication. This type is vulnerable to replay attacks.
Synchronous dynamic password token: A timer is used to rotate through various combinations produced by a cryptographic algorithm. The token and the authentication server must have synchronized clocks.
Asynchronous password token: A one-time password is generated without the use of a clock, either from a one-time pad or a cryptographic algorithm.
Challenge–response token: Using public key cryptography, it is possible to prove possession of a private key without revealing that key. 
The authentication server encrypts a challenge (typically a random number, or at least data with some random parts) with a public key; the device proves it possesses a copy of the matching private key by providing the decrypted challenge. Time-synchronized one-time passwords change constantly at a set time interval; e.g., once per minute. To do this, some sort of synchronization must exist between the client's token and the authentication server. For disconnected tokens, this time-synchronization is done before the token is distributed to the client. Other token types do the synchronization when the token is inserted into an input device. The main problem with time-synchronized tokens is that they can, over time, become unsynchronized. However, some such systems, such as RSA's SecurID, allow the user to re-synchronize the server with the token, sometimes by entering several consecutive passcodes. Most also cannot have replaceable batteries and only last up to 5 years before having to be replaced, so there is an additional cost. Another type of one-time password uses a complex mathematical algorithm, such as a hash chain, to generate a series of one-time passwords from a secret shared key. Each password is unique, even when previous passwords are known. The open-source OATH algorithm is standardized; other algorithms are covered by US patents. Each password is observably unpredictable and independent of previous ones, such that an adversary would be unable to guess what the next password may be, even with knowledge of all previous passwords. Physical types Tokens can contain chips with functions varying from very simple to very complex, including multiple authentication methods. The simplest security tokens do not need any connection to a computer. The tokens have a physical display; the authenticating user simply enters the displayed number to log in. Other tokens connect to the computer using wireless techniques, such as Bluetooth. 
These tokens transfer a key sequence to the local client or to a nearby access point. Alternatively, another form of token that has been widely available for many years is a mobile device which communicates using an out-of-band channel (like voice, SMS, or USSD). Still other tokens plug into the computer and may require a PIN. Depending on the type of the token, the computer OS will then either read the key from the token and perform a cryptographic operation on it, or ask the token's firmware to perform this operation. A related application is the hardware dongle required by some computer programs to prove ownership of the software. The dongle is placed in an input device and the software accesses the I/O device to authorize the use of the software in question. Commercial solutions are provided by a variety of vendors, each with their own proprietary (and often patented) implementation of variously used security features. Token designs meeting certain security standards are certified in the United States as compliant with FIPS 140, a federal security standard. Tokens without any kind of certification are sometimes viewed as suspect, as they often do not meet accepted government or industry security standards, have not been put through rigorous testing, and likely cannot provide the same level of cryptographic security as token solutions which have had their designs independently audited by third-party agencies. Disconnected tokens Disconnected tokens have neither a physical nor logical connection to the client computer. They typically do not require a special input device, and instead use a built-in screen to display the generated authentication data, which the user enters manually via a keyboard or keypad. Disconnected tokens are the most common type of security token used (usually in combination with a password) in two-factor authentication for online identification. 
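The number such a disconnected token displays is typically a time-based one-time password. The following is a minimal sketch in the spirit of RFC 6238 (illustrative only, not any vendor's implementation; the `totp` helper and its parameters are assumptions):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password from a shared secret.

    Token and server each compute this independently; the codes match
    as long as their clocks agree to within one time step.
    """
    counter = int(time.time() if for_time is None else for_time) // step
    msg = struct.pack(">Q", counter)            # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret b"12345678901234567890" at t=59 s
# yields "94287082" with 8 digits.
```

In practice a verifying server also accepts codes from one or two adjacent time steps, which is what makes the re-synchronization described above possible.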
Connected tokens Connected tokens are tokens that must be physically connected to the computer with which the user is authenticating. Tokens in this category automatically transmit the authentication information to the client computer once a physical connection is made, eliminating the need for the user to manually enter the authentication information. However, in order to use a connected token, the appropriate input device must be installed. The most common types of physical tokens are smart cards and USB tokens (also called security keys), which require a smart card reader and a USB port, respectively. Increasingly, FIDO2 tokens, supported by the open specification group FIDO Alliance, have become popular for consumers, with mainstream browser support beginning in 2015 and support from popular websites and social media sites. Older PC card tokens are made to work primarily with laptops. Type II PC Cards are preferred as a token as they are half as thick as Type III. The audio jack port is a relatively practical method to establish a connection between mobile devices, such as iPhone, iPad, and Android devices, and other accessories. The best-known such device is Square, a credit card reader for iOS and Android devices. Some use a special-purpose interface (e.g. the crypto ignition key deployed by the United States National Security Agency). Tokens can also be used as a photo ID card. Cell phones and PDAs can also serve as security tokens with proper programming. Smart cards Many connected tokens use smart card technology. Smart cards can be very cheap (around ten cents) and contain proven security mechanisms (as used by financial institutions, like cash cards). However, the computational performance of smart cards is often rather limited because of extreme low-power consumption and ultra-thin form-factor requirements. Smart-card-based USB tokens, which contain a smart card chip inside, provide the functionality of both USB tokens and smart cards. 
They enable a broad range of security solutions and provide the abilities and security of a traditional smart card without requiring a unique input device. From the computer operating system's point of view such a token is a USB-connected smart card reader with one non-removable smart card present. Contactless tokens Unlike connected tokens, contactless tokens form a logical connection to the client computer but do not require a physical connection. The absence of the need for physical contact makes them more convenient than both connected and disconnected tokens. As a result, contactless tokens are a popular choice for keyless entry systems and electronic payment solutions such as Mobil Speedpass, which uses RFID to transmit authentication info from a keychain token. However, there have been various security concerns raised about RFID tokens after researchers at Johns Hopkins University and RSA Laboratories discovered that RFID tags could be easily cracked and cloned. Another downside is that contactless tokens have relatively short battery lives, usually only 5–6 years, which is low compared to USB tokens, which may last more than 10 years. Some tokens, however, do allow the batteries to be changed, thus reducing costs. Bluetooth tokens The Bluetooth Low Energy protocol provides long-lasting battery life for wireless transmission. Transmission of inherent Bluetooth identity data is the lowest-quality means of supporting authentication, while a bidirectional connection for transactional data interchange serves the most sophisticated authentication procedures. The automatic transmission power control attempts to estimate radial distance; beyond the standardised Bluetooth power control algorithm, a calibration of the minimally required transmission power is also available. Bluetooth tokens are often combined with a USB token, thus working in both a connected and a disconnected state. Bluetooth authentication works only when the token is in close proximity. 
When the Bluetooth link is not properly operable, the token may be inserted into a USB input device to function. Another combination is with a smart card, to store larger amounts of identity data locally and to process information as well. Another is a contactless BLE token that combines secure storage and tokenized release of fingerprint credentials. In the USB mode of operation, sign-off requires care while the token is mechanically coupled to the USB plug. The advantage of the Bluetooth mode of operation is the option of combining sign-off with distance metrics. Respective products are in preparation, following the concepts of electronic leash. NFC tokens Near-field communication (NFC) tokens combined with a Bluetooth token may operate in several modes, thus working in both a connected and a disconnected state. NFC authentication works only at very close range. The NFC protocol bridges short distances to the reader while the Bluetooth connection serves for data provision with the token to enable authentication. Even when the Bluetooth link is not connected, the token may serve the locally stored authentication information to the NFC reader with only coarse positioning, relieving the user from exact positioning to a connector. Single sign-on software tokens Some types of single sign-on (SSO) solutions, like enterprise single sign-on, use the token to store software that allows for seamless authentication and password filling. As the passwords are stored on the token, users need not remember their passwords and therefore can select more secure passwords, or have more secure passwords assigned. Most tokens store a cryptographic hash of the password so that if the token is compromised, the password is still protected. Programmable tokens Programmable tokens are marketed as "drop-in" replacements of mobile applications such as Google Authenticator (miniOTP). They can be used as a mobile app replacement, as well as in parallel as a backup. 
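The point above that SSO tokens store only a cryptographic hash of the password can be sketched with salted, iterated hashing. This is a generic illustration using PBKDF2 from the Python standard library, not any particular token's scheme; the `enroll`/`verify` names are hypothetical:

```python
import hashlib
import hmac
import os

def enroll(password, iterations=200_000):
    """Store a salted, stretched hash; the password itself is never kept."""
    salt = os.urandom(16)                       # fresh random salt per record
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return {"salt": salt, "iterations": iterations, "digest": digest}

def verify(password, record):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), record["salt"], record["iterations"])
    return hmac.compare_digest(candidate, record["digest"])
```

Because each record carries its own random salt, the same password enrolled twice produces different digests, so an attacker who extracts the stored records must attack each one separately.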
Vulnerabilities Loss and theft The simplest vulnerability with any password container is theft or loss of the device. The chances of this happening, or of it happening unnoticed, can be reduced with physical security measures such as locks, electronic leash, or body sensor and alarm. Stolen tokens can be made useless by using two-factor authentication. Commonly, in order to authenticate, a personal identification number (PIN) must be entered along with the information provided by the token at the same time as the token's output. Attacking Any system which allows users to authenticate via an untrusted network (such as the Internet) is vulnerable to man-in-the-middle attacks. In this type of attack, an attacker acts as the "go-between" of the user and the legitimate system, soliciting the token output from the legitimate user and then supplying it to the authentication system themselves. Since the token value is mathematically correct, the authentication succeeds and the fraudster is granted access. In 2006, Citibank was the victim of an attack when its hardware-token-equipped business users became the victims of a large Ukrainian-based man-in-the-middle phishing operation. Breach of codes In 2012, the Prosecco research team at INRIA Paris-Rocquencourt developed an efficient method of extracting the secret key from several PKCS #11 cryptographic devices. These findings were documented in INRIA Technical Report RR-7944, ID hal-00691958, and published at CRYPTO 2012. Digital signature Trusted like a regular hand-written signature, the digital signature must be made with a private key known only to the person authorized to make the signature. Tokens that allow secure on-board generation and storage of private keys enable secure digital signatures, and can also be used for user authentication, as the private key also serves as a proof of the user's identity. For tokens to identify the user, all tokens must have some kind of number that is unique. 
Not all approaches fully qualify as digital signatures according to some national laws. Tokens with no on-board keyboard or other user interface cannot be used in some signing scenarios, such as confirming a bank transaction based on the bank account number that the funds are to be transferred to.
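The challenge–response flow described under Password types can be illustrated end to end. The sketch below simplifies it to a shared-secret HMAC rather than the public-key decryption described above, so both sides hold the same secret; it shows only the round-trip structure, and all names are hypothetical:

```python
import hashlib
import hmac
import os

def token_respond(token_secret, challenge):
    """Token side: prove possession of the secret without revealing it."""
    return hmac.new(token_secret, challenge, hashlib.sha256).digest()

def server_authenticate(token_secret, server_secret):
    """Server side: issue a fresh challenge and check the token's response."""
    challenge = os.urandom(32)                  # fresh randomness defeats replay
    response = token_respond(token_secret, challenge)
    expected = hmac.new(server_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)
```

The fresh random challenge per attempt is what distinguishes this from a static password: an eavesdropped response is useless against any later challenge.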
https://en.wikipedia.org/wiki/Ruffed%20lemur
Ruffed lemur
The ruffed lemurs of the genus Varecia are strepsirrhine primates and are the largest extant lemurs within the family Lemuridae. Like all living lemurs, they are found only on the island of Madagascar. Formerly considered to be a monotypic genus, two species are now recognized: the black-and-white ruffed lemur, with its three subspecies, and the red ruffed lemur. Ruffed lemurs are diurnal and arboreal quadrupeds, often observed leaping through the upper canopy of the seasonal tropical rainforests in eastern Madagascar. They are also the most frugivorous of the Malagasy lemurs, and they are very sensitive to habitat disturbance. Ruffed lemurs live in multi-male/multi-female groups and have a complex and flexible social structure, described as fission-fusion. They are highly vocal and have loud, raucous calls. Ruffed lemurs are seasonal breeders and highly unusual in their reproductive strategy. They are considered an "evolutionary enigma" in that they are the largest of the extant species in Lemuridae, yet exhibit reproductive traits more common in small, nocturnal lemurs, such as short gestation periods (~102 days) and relatively large average litter sizes (~2–3). Ruffed lemurs also build nests for their newborns (the only primates that do so), carry them by mouth, and exhibit an absentee parental system by stashing them while they forage. Infants are altricial, although they develop relatively quickly, traveling independently in the wild after 70 days and attaining full adult size by six months. Threatened by habitat loss and hunting, ruffed lemurs are facing extinction in the wild. However, they reproduce readily in captivity and have been gradually re-introduced into the wild since 1997. Organizations that are involved in ruffed lemur conservation include the Durrell Wildlife Conservation Trust, the Lemur Conservation Foundation (LCF), the Madagascar Fauna Group (MFG), Monkeyland Primate Sanctuary in South Africa, Wildlife Trust, and the Duke Lemur Center (DLC). 
Evolutionary history Lemurs are not known in the fossil record on Madagascar until the Pleistocene and Holocene epochs. Consequently, little is known about the evolution of ruffed lemurs, let alone the entire lemur clade, which comprises the endemic primate population of the island. Although there is still much debate about the origins of lemurs on Madagascar, it is generally accepted that a single rafting event, similar to the one that brought New World monkeys to South America, occurred around 50–80 million years ago and allowed ancestral lemurs to cross the Mozambique Channel and colonize the island, which had already split from Africa (while still joined to the Indian subcontinent) approximately 160 million years ago. The resulting founder effect and either non-existent or inferior competition resulted in speciation as the lemur ancestors radiated out to fill open or insufficiently guarded niches. Today, the endemic primate fauna of Madagascar contains over three-quarters of the extant species of the suborder Strepsirrhini, which had been abundant throughout Laurasia and Africa during the Paleocene and Eocene epochs. Taxonomic classification The ruffed lemur genus, Varecia, is a member of the family Lemuridae. The extinct genus Pachylemur most closely resembled the ruffed lemurs but died out after the arrival of humans. The genus Varecia contains two species, red ruffed lemurs and black-and-white ruffed lemurs, the latter having three subspecies. 
Family Lemuridae
 Genus Eulemur: true lemurs
 Genus Hapalemur: lesser bamboo lemurs
 Genus Lemur: the ring-tailed lemur
 Genus †Pachylemur
 Genus Varecia: ruffed lemurs
  Black-and-white ruffed lemur, Varecia variegata
   Variegated black-and-white ruffed lemur, Varecia variegata variegata
   Southern black-and-white ruffed lemur, Varecia variegata editorum
   Northern black-and-white ruffed lemur, Varecia variegata subcincta
  Red ruffed lemur, Varecia rubra
Changes in taxonomy Ruffed lemurs, along with several species of brown lemur, were once included in the genus Lemur. In 1962, the ruffed lemurs were reassigned to the genus Varecia. The red ruffed lemur and the black-and-white ruffed lemur were formerly recognized as subspecies, Varecia variegata rubra and Varecia variegata variegata respectively. In 2001 both were elevated to species status, a decision that was later supported by genetic research. Three subspecies of black-and-white ruffed lemur, which had been published decades earlier, were also recognized as variegata, editorum, and subcincta, although studies have not been entirely conclusive. Subfossil remains of two extinct lemur species were previously classified under the genus Varecia. Found at sites in central and southwestern Madagascar, Varecia insignis and V. jullyi were very similar to modern ruffed lemurs, but more robust and assumed to be more terrestrial, and thus more prone to predation by early human settlers. More recent studies have shown that these extinct species had a diet similar to that of modern ruffed lemurs and that they were also arboreal in nature. Enough differences were demonstrated to merit a separate genus, Pachylemur. These close relatives of ruffed lemurs are now named Pachylemur insignis and P. jullyi. Anatomy and physiology Ruffed lemurs are the largest extant members of the family Lemuridae. 
The thick, furry tail is longer than the body and is used primarily for balance while moving through the trees. Ruffed lemurs exhibit neither sexual dimorphism nor sexual dichromatism, and females have three pairs of mammary glands. Ruffed lemurs are characterized by their long, canine-like muzzle, which includes a significant overbite. The face is mostly black, with furry "ruffs" running from the ears to the neck. Depending on the species, these ruffs are either white (V. variegata) or deep reddish (V. rubra). Likewise, the coloration of the fluffy fur also varies by species, while the coloration pattern varies by subspecies in the black-and-white ruffed lemur. There are also intermediates in color variation between the two species. As with all lemurs, the ruffed lemur has special adaptations for grooming, including a toilet-claw on its second toe, and a toothcomb. Locomotion Ruffed lemurs are considered arboreal quadrupeds, with the most common type of movement being above-branch quadrupedalism. In the canopy, leaping, vertical clinging, and suspensory behavior are also common, whereas bridging, bimanual movement, and bipedalism are infrequently seen. When moving from tree to tree, ruffed lemurs will look over the shoulder while clinging, launch themselves into the air, and twist mid-air so that their ventral surface lands on the new tree or limb. Suspensory behavior is more common in ruffed lemurs than in other lemur species. When ruffed lemurs come down to the ground, they continue to move quadrupedally, running with bounding hops and the tail held high. Ecology Being highly arboreal and the most frugivorous of the lemurs, they thrive only in primary forest with large fruiting trees, where they spend most of their time in the upper canopy. By spending the majority of their time in the crown of tall forest trees, they are relatively safe from predators such as the fossa. 
Ruffed lemurs are active primarily during the day (diurnal), during which time they feed primarily on fruits and nectar, often adopting suspensory postures while feeding. The seeds of the fruit they eat pass through their digestive tract and are propagated throughout the rainforests in their feces, helping to ensure new plant growth and a healthy forest ecosystem. These lemurs are also significant pollinators of the traveler's tree (Ravenala madagascariensis). Without destroying the inflorescence, they lick the nectar from deep inside the flower using their long muzzles and tongues, collecting and transferring pollen on their snouts and fur from plant to plant. This relationship is thought to be a result of co-evolution.

Geographic range and habitat

Like all lemurs, this genus is found only on the island of Madagascar off the southeastern coast of Africa. Confined to the island's seasonal eastern tropical rainforests, it is uncommon to rare throughout its range, which historically ran from the Masoala Peninsula in the northeast to the Mananara River in the south. Today, the black-and-white ruffed lemur has a much larger range than the red ruffed lemur, although it is very patchy, extending from slightly northwest of Maroantsetra, on Antongil Bay, in the north down the coast to the Mananara River near Vangaindrano in the south. Additionally, a concentrated population of black-and-white ruffed lemurs, of the subspecies Varecia variegata subcincta, can also be found on the island reserve of Nosy Mangabe in Antongil Bay. It is suspected that this population was introduced to the island in the 1930s. The red ruffed lemur, on the other hand, has a very restricted range on the Masoala Peninsula. Historically, the confluence of the Vohimara and Antainambalana Rivers may have been a zone of hybridization between these two species, although no conclusive results have indicated current interbreeding.
In general, the Antainambalana River appears to isolate the red ruffed lemurs from the neighboring subspecies of black-and-white ruffed lemur, V. v. subcincta. The subspecies V. v. variegata can be found further south, and V. v. editorum is the southernmost subspecies. The ranges of these two southern subspecies overlap, and intermediate forms are reported to exist, although this has not been confirmed. The rainforests in which these animals live are seasonal, with two primary seasons: the hot, wet season (November through April) and the cool, dry season (May through October). The primary habitat for both species, in any season, is in the crowns of trees, where they spend the majority of their time above ground. With the seasonal availability of resources being similar regardless of location, there is little to no difference in tree usage between species. From September through April, more fruit is available, so females prefer the lianas in the crowns of trees. Both sexes prefer the lower, major branches during the hot, rainy season. The tree crowns are predominantly used from May through August, when young leaves and flowers are in abundance.

Sympatric relations

The following lemur species can be found within the same geographic range as ruffed lemurs:

Greater dwarf lemur (Cheirogaleus major)
Eastern lesser bamboo lemur (Hapalemur griseus griseus)
Weasel sportive lemur (Lepilemur mustelinus)
Diademed sifaka (Propithecus diadema)
Common brown lemur (Eulemur fulvus)
Red-bellied lemur (Eulemur rubriventer)
Eastern woolly lemur (Avahi laniger)
Indri (Indri indri)
Brown mouse lemur (Microcebus rufus)
Aye-aye (Daubentonia madagascariensis)
White-headed lemur (Eulemur albifrons)

Ruffed lemurs either demonstrate feeding dominance or divide resources by using different forest strata. They are dominant over red-bellied lemurs, while eastern lesser bamboo lemurs avoid encountering them altogether.
White-headed lemurs, on the other hand, prefer the understory and lower canopy, below , while the ruffed lemurs mainly keep to the upper canopy, above . Play has even been observed between infant ruffed lemurs and white-headed lemurs.

Behavior

Ruffed lemurs, on average, spend 28% of the day feeding, 53% resting, and 19% traveling, although differences in resting and feeding durations have been observed between males and females, with females resting less and feeding more. They are diurnal; peak activity occurs during the early morning and late afternoon or evening, while resting usually occurs around midday. When resting, ruffed lemurs often sit hunched or upright. They are also frequently seen lying prone over a branch or sunbathing in a supine position with the limbs outstretched. When feeding, they will often hang upside-down by their hind feet, a type of suspensory behavior, which allows them to reach fruits and flowers. Being highly arboreal, they spend the majority of their time in the high canopy throughout the day. Ruffed lemurs spend the majority of their time between above the forest floor, followed by up, and are least frequently seen at . During the hot season, they will relocate to the lower canopy to help regulate their body temperature. In the cold season, ruffed lemurs are least active and may dedicate 2% of their resting time to sunbathing in order to warm up. Long-term field research has shown that range size, group size, social systems, and territorial behavior vary widely, and may be greatly affected by food distribution and quality. It is generally agreed that the ruffed lemur social system is multi-male/multi-female with a fission-fusion society, although some populations of black-and-white ruffed lemur have been reported as monogamous. This social flexibility is suspected to improve survivability despite an inflexible feeding ecology.
Diet

Being the most frugivorous members of the family Lemuridae, consuming an average of 74–90% fruit, ruffed lemurs also consume nectar (4–21%) and supplement the rest of their diet with young leaves (3–6%), mature leaves (1%), flowers (3–6%), and some seeds. Ruffed lemurs have also been reported to come to the ground to eat fungi and to exhibit geophagy. The majority of their diet is made up of relatively few common plant species, with a few species providing more than 50% of the diet. Fig species of the genus Ficus, for example, account for 78% of the fruit consumed by red ruffed lemurs on the Masoala Peninsula. Although plant species and diets vary by location, the most common food plants reported from the field include the following:

Canarium
Cryptocarya
Ocotea
Ravensara (family Lauraceae)
Ficus
Eugenia/Syzygium
Grewia

Fruit trees do not appear to be selected by species but by availability and accessibility of edible fruit. Despite the predominance of a few plant species in the ruffed lemur diet, the remainder of their diet consists of between 80 and 132 other species from 36 plant families. The availability of food reflects the seasonal nature of the forests in which they live. During the hot season, fruit, flowers, and young leaves are more abundant, whereas the cold, wet season offers more young leaves and flowers. Despite this, the diet changes little between seasons, except that females consume more high-protein, low-fiber items, such as young leaves and flowers, during pregnancy and lactation in order to offset the energy costs of reproduction. Nectar is only available sporadically, yet constitutes a major food source when the flowers bloom. The nectar of the traveler's palm (Ravenala madagascariensis) is a favorite among ruffed lemurs.

Social systems

The social organization of ruffed lemurs is widely variable in both group organization and group composition, although no notable difference can be seen between the two species.
Ruffed lemurs are typically described as multi-male groups with a fission-fusion social structure, although this can vary by season and locality. In a study done on red ruffed lemurs at the Masoala Peninsula, three levels of organization were identified and defined: communities, core groups, and subgroups. Communities are individuals that affiliate regularly with each other, but rarely with conspecifics outside of the community. Although the entire multi-male/multi-female community lives within a discrete home range, all individuals are never seen in the same location at the same time. Instead, individuals form dispersed social networks, known as core groups, within the community. Core groups are individuals that share the same core area within a community territory throughout the year. Core groups typically consist of two reproductive females, as well as reproductive males and subadults, ranging in size from two to nine individuals. Females within the groups are cooperative, but male encounters are often agonistic. Subgroups, on the other hand, vary daily in size, composition, and duration, and consist of associated individuals from either the same core group or different core groups, depending on the season. The consistent daily changes in these subgroups throughout the year, together with the seasonal formation of core groups in core areas, demonstrate the fission-fusion nature of ruffed lemur social structure. In another study, done on black-and-white ruffed lemurs at Nosy Mangabe, a fourth level of organization was defined: affiliates. Affiliates are individuals with more persistent social bonds and more frequent interactions, usually within a core group, but sometimes also between core groups within a subgroup. Adult females typically have many affiliates, whereas adult males rarely interact with conspecifics, living a more solitary existence.
Past studies have reported other social organizations in ruffed lemurs, including monogamous pair bonding. This may have been due to the use of short-term, seasonal field studies instead of yearlong studies that take into consideration the effects that changing seasons have on ruffed lemur communities. For instance, during the cold, rainy season, which corresponds with the breeding season, interactions between core groups within a community are significantly reduced. During this time, small subgroups form consisting of a mature female, a mature male, and sometimes offspring, which can be misinterpreted as monogamous pair bonding. Ranging behavior can also exhibit seasonal variability. During the hot, wet season, females range widely, either alone or in groups of up to six individuals. In the cool, dry season, smaller core groups stabilize in order to occupy concentrated areas. Therefore, during seasons when fruit is abundant, subgroups are larger, while scarcity is met with more solitary behavior. This suggests that although their feeding ecology is inflexible, being tied to widely distributed, patchy, and sometimes scarce fruit, ruffed lemurs instead adapt their social system in order to survive. In terms of dominance, the ruffed lemur's social structure is not as clear-cut as in other lemur societies, where female dominance is the norm. Although it has historically been reported that "males were subordinate to females", a pattern demonstrated especially by captive and free-ranging populations, wild populations cannot be definitively labeled as matriarchal due to inter-group variation. There are also social differences between males and females. Females typically have many affiliates and bond strongly with other females both within and outside their core areas, but do not affiliate with individuals outside the community range, except during mating season.
Males, on the other hand, are more solitary, interact with only a few conspecifics, have weak social bonds with other males, and rarely associate with others outside their core group. Furthermore, field studies suggest that only females play a role in communal home range defense. Males may scent-mark and remain relatively silent, but otherwise show little involvement during disputes. Community range or territory size can vary widely, from , while group size can range from a single pair to 31 individuals. Population density is also noticeably variable. These wide ranges can be attributed to differing levels of protection and degrees of environmental degradation, with better protection and a less degraded environment resulting in higher population density and more moderately sized community ranges. (The duration and seasonality of the studies involved may also have contributed to low estimates of group size and community range. A study at the Betampona Reserve, for instance, observed monogamous pairs with two to five infants maintaining ranges of .) Core areas at Ambatonikonilahy constituted approximately 10% of the overall community range and showed a close relationship with the location of the largest fruiting trees. The average daily traveling distance for ruffed lemurs varies between , averaging per day. Activity patterns within the community range vary by gender and season. Males generally stay within a core area all year, whereas females only confine themselves to a core area during the cold, wet season, then expand their range throughout the community range during the hot, rainy season. Females expand their traveling range slightly after giving birth, still staying within the core area, but gradually range further in December, when they begin stashing their infants with other community members while they look for food. Females range the furthest later during the hot, rainy season.
Both activity level and reproductive activity can be summarized in the following table. Although males demonstrate little involvement in territorial disputes between neighboring communities, and ruffed lemur communities lack cohesiveness, females communally defend the community range against females of other communities. These disputes occur mostly during the hot, rainy season, when resources are more abundant, and take place near the boundaries of community ranges. Spacing is maintained by scent marking and vocal communication. Ruffed lemurs are known for their loud, raucous calls, which are answered by neighboring communities and by subgroups within the same community. During agonistic encounters between communities, chasing, scent-marking, calling, and occasional physical contact can be seen. Other social behaviors appear to vary between wild and captive ruffed lemurs, as illustrated by the following table. Some affiliative behaviors are seasonal or gender-specific, such as the male squeal approach and the anogenital inspections performed during the mating season. Another example is the female greeting behavior, in which two females use their anogenital scent glands to mark each other's backs, jump over one another, writhe together, and emit squealing vocalizations. This behavior is not seen during the end of the cool, dry season or around gestation. The frequency of other affiliative behaviors can be affected by age. All ruffed lemurs over five months of age allogroom, and, in captivity, subadults participate in play more frequently than adults.

Cognitive abilities

Historically, relatively few studies of learning and cognition have been performed on strepsirrhine primates, including ruffed lemurs. However, a study at the Myakka City Lemur Reserve demonstrated that ruffed lemurs, along with several other members of the family Lemuridae, could understand the outcome of simple arithmetic operations.
Communication

Olfactory communication

As with all strepsirrhine primates, ruffed lemurs make extensive use of olfactory communication, scent marking in territorial defense and disputes as well as in female greeting displays. The scents communicate the sex, location, and identity of their owner. Females predominantly scent mark with their anogenital scent glands, squatting to rub the anogenital region along horizontal surfaces, such as tree limbs. Males, on the other hand, favor using the glands on their neck, muzzle, and chest, embracing horizontal and vertical surfaces and rubbing themselves over them. Both sexes will occasionally scent mark in ways characteristic of the opposite sex. In greeting displays, female ruffed lemurs will leap over one another, scent marking the other individual's back in the process.

Auditory communication

Ruffed lemurs are highly vocal, with an extensive vocal repertoire whose calls are used in multiple contexts. Calls can also vary seasonally. During the hot, rainy season, the loud, raucous calls that are a hallmark of ruffed lemurs allow groups to remain in contact and maintain spacing. These loud calls can be heard up to away. Ruffed lemurs use alarm calls that differentiate between ground and aerial predators. For instance, an abrupt roar or huff alerts the group to an avian predator, while a pulsed squawk or growl-snort communicates the presence of a mammalian ground predator. When sounding these calls, such as the pulsed squawk, adults direct them at the predator after moving to a safe position. Once the alarm call is sounded by one individual, the resulting chorus can reach even the furthest-ranging community members. In captivity, ruffed lemur vocalizations have been studied and divided into three general groups: high-, medium-, and low-amplitude calls.
The well-known roar/shriek chorus is spontaneous, occurring most often during periods of high activity, and contagious, involving communal participation that includes infants three to four months old. Abrupt roars are also more common during high activity; aside from alerting group members to the presence of an avian predator, they probably also help maintain contact with individuals outside of visual range or indicate an aggressive/defensive response to a disturbance. In the wild, both of these calls are emitted more during the hot, rainy season due to heightened activity. All high-amplitude calls are delivered from a "taut" body posture. Medium-amplitude calls operate over a shorter range or often involve moderately arousing situations, such as frustration or submission. Low-amplitude calls also generally operate over a short range, yet cover a wider range of aggravation levels. Whines are highly variable between individual ruffed lemurs. The cough, grumble, squeak, and squeal have only been observed and studied in the wild. The calls of ruffed lemurs vary only slightly between the two species. In fact, in captivity, it has been documented that red ruffed lemurs understand and even join in the alarm calls of black-and-white ruffed lemurs. One minor difference between the vocal repertoires of these two species is in the pulse rate and frequency of the pulsed squawk, which is much faster and higher in red ruffed lemurs than in black-and-white ruffed lemurs. The difference in this vocalization is only interspecific, showing no signs of significant sexual dimorphism within either species. In black-and-white ruffed lemurs, pulsed squawks sometimes slow down as the group calms down and integrate with the wail, creating pulsed squawk-wail intermediates.
Breeding and reproduction

Contrary to initial reports of monogamy, ruffed lemurs in the wild exhibit seasonal polygamous breeding behavior, with both males and females mating with more than one partner within a single season. Mating is not restricted to community members, but also involves members of neighboring communities. Females mate primarily with males with whom they had affiliative relations prior to the mating season, although some matings occur with roaming males from other communities. Shortly before the mating season begins, females exhibit swelling of the sex skin, which reaches its peak around the middle of their 14.8-day estrous cycle. Male sexual physiology also undergoes its own change, with testicular volume increasing during the mating season and peaking around the time of breeding. Aggression also increases during the mating season, both between members of the same sex and by the female towards the male attempting to mate with her. Females have been observed grappling, cuffing, and biting males during copulation. Either sex may approach the other when the female is in estrus. Initially, they may roar-shriek with each other. When a male approaches a female, he often lowers his head and squeals, inspecting the female's genitalia by licking or sniffing, scent-marking, and offering a submissive chattering vocalization. When a female approaches a male, she may posture herself for mounting. Mating pairs often copulate many times during the course of a mating bout. The mating season lasts from May through July, during the cold, rainy season, so that birth and peak lactation coincide with the time when fruit is most plentiful. The gestation period of ruffed lemurs is the shortest in the family Lemuridae, averaging 102 days (with a range of 90 to 106 days). Gestation in the wild lasts slightly longer than in captivity, averaging 106 days.
Just like the mating season, parturition is seasonal, synchronized to the end of the cold, dry season and the start of the productive hot, rainy season. In addition to an unusually short gestation period, ruffed lemurs share another feature with the small, nocturnal lemurs: they produce the largest litters in the family Lemuridae. Litters typically include two or three infants, although up to five have been reported. Birth weights in captivity average between and range from . Ruffed lemur infants are altricial, yet are born with their eyes open and a full coat of fur. Ruffed lemurs are the only known primates to build arboreal nests, used exclusively for birth and for the first week or two of life. Starting three weeks prior to birth, females begin constructing the nest from twigs, branches, leaves, and vines, locating it within her core area and above ground. The nests are shallow and dish-shaped, with only one apparent entry point. During the first couple of weeks, the mother is mostly solitary and does not travel far from the nest, spending as much as 70–90% of her time with the newborns (in captivity). In order to find food, she will leave the infants alone in the nest or, after the first couple of weeks, carry them in her mouth and stash them in concealed locations in the canopy while she forages. Since this early developmental period corresponds with the end of the cold, dry season, which offers the least fruit, energy is conserved for lactation while travel is limited. As the hot, rainy season begins, fruit availability rises, lactation demands rise as well, and females increase their travel distance in search of food. Unlike other diurnal primates, which usually carry their infants with them, ruffed lemur mothers stash their young by concealing them in the canopy foliage, leaving them to rest and sit quietly for several hours while the mother forages and performs other activities.
Mothers continue to transport their offspring by mouth, moving them one at a time by grasping the infant's belly crosswise. This form of transport usually stops around 2.5 months of age, when the infants become too heavy to carry. Ruffed lemurs are cooperative breeders, with parental care shared by all community members. For example, mothers will stash their offspring with other mothers or leave them to be guarded by other community members, including non-breeding individuals of both sexes. While the mother is away, community members will not only care for and guard the infants, but also sound alarm calls if danger is detected or when leaving an infant alone. They will also respond to alarm calls by others. These coordinated vigilance displays further involve communal transmission of the alarm call, with nearby community members repeating the call, potentially summoning the mother back to her offspring. Infant transport by other members of the community has also been recorded. Females have been observed nursing the infants of close relatives, and close kin have adopted rejected infants, acting as foster parents. Male care for infants has also been documented in ruffed lemur societies. During early development, adult males may guard the nests of multiple core group females, as well as help care for infants that were likely fathered by other males. During the season when females practice infant stashing, males effectively lighten the reproductive burden of up to several mothers by guarding, huddling with, grooming, travelling with, playing with, and feeding the young. Female ruffed lemurs produce relatively rich milk compared to other lemurs, and consequently their young develop faster. Infants develop rapidly, attaining approximately 70–75% of adult weight by the age of four months.
They begin climbing and clinging at one month of age, advancing to the point of independently following their mother and group members through the canopy at heights of by two to three months. Full adult mobility is attained at three to four months of age. Socially, they begin exchanging contact calls with their mother at three weeks, and select their mother as their play partner 75–80% of the time during the first three months. Participation in greeting displays and more extensive vocalizations commences around four months, while scent marking does not start until six months of age. Infants begin sampling solid food at around 40 days to two months of age, with weaning occurring between four and six months in the wild, although some individuals have continued to nurse until seven to eight months. Infant mortality is often high among ruffed lemurs, but can also be highly variable. In some seasons, as many as 65% of infants fail to reach three months of age, possibly due to falls and related injuries, although in other seasons infant mortality is as low as 0%. For those that do survive to adulthood, sexual maturity is attained at 18 to 20 months in females and 32 to 48 months in males, and may take longer to reach in the wild than in captivity. For females, the inter-birth interval, or time between successive offspring, is typically one year, and in captivity females can remain reproductively active until the age of 23. The life expectancy for both species of ruffed lemur is estimated at 36 years in captivity.

Conservation status

In a land where approximately 90% of the original island forest has been destroyed, ruffed lemurs cling to only a small fraction of their original range. Completely dependent upon large fruiting trees, neither species appears to be flexible in its habitat choice, with selective logging resulting in significantly lower population densities.
Although they can survive in very disturbed habitats at lower population densities, they remain especially vulnerable to habitat disturbance. Decreased genetic diversity, in tandem with hunting, natural disasters, predation, and disease, can easily wipe out small populations. The black-and-white ruffed lemur was elevated by the IUCN from endangered to critically endangered (A2cd) status in 2008, citing that "the species is believed to have undergone a decline of 80% over a period of 27 years, due primarily to a decline in area and quality of habitat within the known range of the species and due to levels of exploitation." The total area of all known localities in which black-and-white ruffed lemurs exist is estimated at less than , while the total wild population is estimated at between 1,000 and 10,000. The red ruffed lemur was downgraded by the IUCN from critically endangered to endangered status in 2008. The justification given includes its limited range, its restriction to the Masoala Peninsula, and its risk from ongoing habitat loss and hunting. This species occupies a range of no more than , while the total wild population is estimated at between 29,000 and 52,000 individuals. Red ruffed lemurs are only protected within the boundaries of Masoala National Park. Historically, this species has been considered more threatened due to its highly restricted range, compared to the widely distributed black-and-white ruffed lemur. However, its protection within the island's largest national park has slightly improved its chances of survival. Despite this, an assessment done in 2012 and published in 2014 reinstated the critically endangered status for the red ruffed lemur, largely due to the surge in illegal logging in Masoala National Park following the 2009 Malagasy political crisis.
There are several organizations involved in ruffed lemur conservation, including the Durrell Wildlife Conservation Trust, the Lemur Conservation Foundation (LCF), the Madagascar Fauna Group (MFG), Monkeyland Primate Sanctuary in South Africa, Wildlife Trust, and the Duke Lemur Center (DLC). To conservation organizations, the ruffed lemurs are considered indicator, umbrella, and flagship species.

Threats in the wild

As with other primates, one of the principal threats to both ruffed lemur species is habitat loss due to slash-and-burn agriculture, logging, and mining. Both species appear to be very sensitive to logging and are thought to be the most vulnerable of the rainforest lemurs. The hardwoods that are favored for construction materials and selectively logged are also preferred by ruffed lemurs for their fruits, and their removal potentially affects the lemurs' travel routes through the canopy. Deforestation, on the other hand, is a result of the need to provide firewood and to support subsistence agriculture and cash crops. In the range of the red ruffed lemur, slash-and-burn agriculture, known locally as tavy, is practiced seasonally on the Masoala Peninsula between October and December, and its practice is expanding. Additionally, cattle are sometimes allowed to free-range over these former agricultural clearings, preventing forest regrowth. Another principal threat to the survival of ruffed lemurs is hunting. Local human populations still hunt and trap ruffed lemurs with traditional weapons, using them as a source of subsistence. Studies from villages in the Makira Forest have revealed that ruffed lemur meat is not only a desired food but is also being hunted unsustainably. On the Masoala Peninsula, the calls of red ruffed lemurs help hunters locate them. On this peninsula, firearms are used in addition to traditional traps, known as laly, which consist of a strip of cleared forest with snares set on the few remaining branches that allow the lemurs to cross.
Although hunting is illegal, the laws are generally not enforced, and the local inhabitants show little concern about their hunting practices, which take place mostly from May to September. Hunting is the biggest concern on the Masoala Peninsula because it is likely to continue, whereas logging and slash-and-burn agriculture could be curtailed. In other regions, hunters can scare ruffed lemurs away from their favorite food sources, even when hunting other prey. Lastly, these animals are taken from their natural habitats to be displayed for tourists or sold as exotic pets. Frequent cyclones also pose a threat, particularly to concentrated or small populations. In late January 1997, Cyclone Gretelle destroyed 80% of the Manombo forest canopy. With their habitat, including most of their food resources, effectively destroyed, the ruffed lemurs of the forest broadened their diet while remaining surprisingly frugivorous. Their body weights dropped and no births were reported for four years, but they managed to stave off starvation. This event demonstrated not only their flexibility in the face of natural disasters, which may highlight the evolutionary reasons behind their reproductive capacity and litter size, but also the threat faced by already stressed populations. Predation appears to be very rare for wild ruffed lemurs, probably because living in the high canopy makes them challenging to catch. Evidence of predation by raptors, such as Henst's goshawk (Accipiter henstii), suggests it occurs at a low rate. The fossa (Cryptoprocta ferox) could present a risk if it found an individual lower in the forest canopy, but no confirmation has been presented that fossas prey upon wild ruffed lemurs. Instead, only re-introduced, captive-bred ruffed lemurs have been killed by fossas, likely due to their inexperience with predators.
Nesting behavior poses the biggest risk of predation, making them susceptible to carnivorous mammals, such as the ring-tailed mongoose (Galidia elegans) and brown-tailed mongoose (Salanoia concolor).

Captive breeding and reintroductions

Captive populations of both ruffed lemur species exist in American and European zoos, representing a safeguard against extinction. In the United States, captive breeding is managed by the Species Survival Plan (SSP), a program developed by the Association of Zoos and Aquariums (AZA). Although the populations are very limited in their genetic diversity, these species thrive in captivity, making them ideal candidates for reintroduction into protected habitat, if it is available. Although reintroduction is seen as a last resort among conservationists, a combination of in situ conservation efforts, such as legal protection, public education, the spread of sustainable livelihoods, and reforestation, offers hope for ruffed lemurs. In the meantime, reintroductions offer conservation research opportunities and allow the limited genetic diversity maintained by the SSP to improve the genetic diversity of dwindling Malagasy ruffed lemur populations. A captive release first occurred in November 1997, when five black-and-white ruffed lemurs (Varecia variegata variegata) born in the United States were returned to Madagascar for release in the Betampona Strict Nature Reserve in eastern Madagascar. Popularly known as the Carolina Five, these individuals had lived their entire lives in the Natural Habitat Enclosures at the Duke Lemur Center (DLC). Since then, two more groups totaling 13 captive-born ruffed lemurs have been reintroduced into the same reserve, once in November 1998 and again in January 2001. These latter two groups also received "boot camp training" in the DLC forested free-range enclosures prior to release.
So far, the results have shown some success: 10 individuals survived longer than one year, 3 integrated into wild groups, and 4 offspring, all parent-raised, were born to or sired by released lemurs. Saraph, a male released with the first group, was reported to be doing well seven years post-release, living in a social group with a wild female and their offspring. Research has been ongoing since the initial release, as illustrated in the 1998 BBC documentary In the Wild: Operation Lemur with John Cleese. The research has provided useful information about their adaptation to life in the wild.
Biology and health sciences
Strepsirrhini
Animals
1210638
https://en.wikipedia.org/wiki/Guri%20Dam
Guri Dam
The Simón Bolívar Hydroelectric Plant, also Guri Dam ( or Represa de Guri), previously known as the Raúl Leoni Hydroelectric Plant, is a concrete gravity and embankment dam in Bolívar State, Venezuela, on the Caroni River, built from 1963 to 1969. It is 7,426 metres long and 162 m high. It impounds the large Guri Reservoir (Embalse de Guri) with a surface area of . The Guri Reservoir that supplies the dam is one of the largest on earth. The hydroelectric power station was once the largest worldwide in terms of installed capacity, replacing Grand Coulee HPP, but was surpassed by Brazil and Paraguay's Itaipu.

History and design

Technical and economic feasibility studies were begun in 1961, conducted by the Harza Engineering Company. An international consortium of six firms was awarded the contract for the construction of the plant, including four United States companies participating under the Alliance for Progress. In 1963, construction began on the Guri hydroelectric power station in the Necuima Canyon, about 100 kilometers upstream from the mouth of the Caroní River at the Orinoco. By 1969, a 106 m high and 690 m long dam with the official name of Central Hidroeléctrica Simón Bolívar (named Central Hidroeléctrica Raúl Leoni from 1978 to 2000) had been built. It created a reservoir which is the largest freshwater body in Venezuela and one of the largest man-made blackwater lakes ever created, with its water level at 215 metres above sea level. The power station had a combined installed capacity of 1750 megawatts (MW). By 1978, the capacity had been upgraded to 2065 MW, generated by ten turbines. Because electricity demand grew so fast, a second building stage began in 1976: a 1300 m long gravity dam was built, along with another spillway channel and a second powerhouse containing 10 turbines of 725 MW each. This increased the dam's dimensions to 162 m in height and 7426 m (according to other sources, 11,409 m) in crest length.
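The capacity figures quoted above lend themselves to a quick sanity check. The sketch below is a rough illustration using only the ratings stated in this section (2,065 MW for the upgraded first powerhouse and ten 725 MW turbines in the second); it ignores later unit upgrades, so the result is the capacity implied by these figures rather than the plant's current nameplate rating.

```python
# Installed-capacity arithmetic implied by the figures in this section.
# Assumption: the stated ratings only; actual capacity has changed
# with later unit upgrades.
powerhouse_1_mw = 2065        # first powerhouse after the 1978 upgrade (ten turbines)
powerhouse_2_mw = 10 * 725    # second powerhouse: ten turbines of 725 MW each

total_mw = powerhouse_1_mw + powerhouse_2_mw
print(total_mw)  # 9315
```

That is, the second construction stage more than quadrupled the station's generating capacity.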
The water level rose to 272 m and the reservoir grew in size and volume to a capacity of 138 billion cubic metres for flood storage or floodwater evacuation. The structure was inaugurated on 8 November 1986. Since 2000, an ongoing refurbishment project has aimed to extend the operation of the Guri Power Plant by 30 years. The project includes five new runners and main components for Powerhouse II, and rehabilitation of four units in Powerhouse I began near the end of 2007.

Generating failures and blackouts

2010

Due to government policy in effect from the 1960s to minimize power production from fossil fuels in order to export as much oil as possible, 74% of Venezuela's electricity comes from renewable energy like hydroelectric power; the Guri Dam alone supplied more than a third of Venezuela's electricity. Part of the power generated at Guri is exported to Colombia and Brazil. The risks of this strategy became apparent in 2010, when, due to a prolonged drought, water levels were too low to produce enough electricity to meet demand. In January 2010, the Venezuelan government imposed rolling blackouts to combat low water levels behind the dam due to drought.

2016

In April 2016, water levels again became low, and the government announced blackouts of 4 hours per day, for 40 days or until water levels stabilized. Government employees were told not to come to work on Fridays, President Maduro urged women not to use hair dryers, and the electricity supplied to fifteen shopping malls was rationed. Three days were added to the 2016 Easter national holiday, allowing for a one-week shutdown of public services and private businesses.

2019

On 7 March 2019, shortly before 17:00 local time, the Simón Bolívar Hydroelectric Plant failed, leaving most of Venezuela's 32 million citizens in darkness.
In the days following the onset of the blackout, at least four attempts were made to restart the key San Gerónimo B substation, which distributes 80% of the country's electricity, but all failed, and no date was set for the plant's reactivation. Government officials claimed the blackout was "an act of sabotage", while experts attributed the failure to aging infrastructure and insufficient maintenance.
Technology
Dams
null
442638
https://en.wikipedia.org/wiki/Homo%20heidelbergensis
Homo heidelbergensis
Homo heidelbergensis (also H. erectus heidelbergensis, H. sapiens heidelbergensis) is an extinct species or subspecies of archaic human which existed from around 600,000 to 300,000 years ago, during the Middle Pleistocene. Homo heidelbergensis was widely considered the most recent common ancestor of modern humans and Neanderthals, but this view has been increasingly disputed since the late 2010s. In the Middle Pleistocene, brain size and height were comparable to modern humans. Like Neanderthals, H. heidelbergensis had a wide chest and robust frame. Fire likely became an integral part of daily life after 400,000 years ago, and this roughly coincides with more permanent and widespread occupation of Europe (above 45°N), and the appearance of hafting technology to create spears. H. heidelbergensis may have been able to carry out coordinated hunting strategies, and consequently they seem to have had a higher dependence on meat. It is debated whether or not to constrain H. heidelbergensis to only Europe or to also include African and Asian specimens, and this is further confounded by the type specimen (Mauer 1) being a jawbone, because jawbones feature few diagnostic traits and are generally missing among Middle Pleistocene specimens. H. heidelbergensis was subsumed in 1950 as a subspecies of H. erectus, but today it is more widely classified as its own species. H. heidelbergensis is regarded as a chronospecies, evolving from an African form of H. erectus (sometimes called H. ergaster).

Taxonomy

Research history

The first fossil, Mauer 1 (a jawbone), was discovered by a worker in Mauer, southeast of Heidelberg, Germany, in 1907. It was formally described the next year by German anthropologist Otto Schoetensack, who made it the type specimen of a new species, Homo heidelbergensis.
He split this off as a new species primarily because of the mandible's archaicness—in particular its enormous size—and because it was the then-oldest human jaw in the European fossil record, at 640,000 years old. The mandible is well preserved, missing only the left premolars, part of the first left molar, the tip of the left coronoid process (at the jaw hinge), and fragments of the mid-section; the jaw was found in two pieces and had to be glued together. It may have belonged to a young adult based on slight wearing on the 3rd molar. In 1921, the skull Kabwe 1 was discovered by Swiss miner Tom Zwiglaar in Kabwe, Zambia (at the time Broken Hill, Northern Rhodesia); it was assigned to a new species, "H. rhodesiensis", by English palaeontologist Arthur Smith Woodward. H. rhodesiensis and H. heidelbergensis were two of the many putative species of Middle Pleistocene Homo described throughout the first half of the 20th century. In the 1950s, Ernst Mayr entered the field of anthropology and, surveying a "bewildering diversity of names", decided to define only three species of Homo: "H. transvaalensis" (the australopithecines), H. erectus (including the Mauer mandible, and various putative African and Asian taxa) and Homo sapiens (including anything younger than H. erectus, such as modern humans and Neanderthals). Mayr defined them as a sequential lineage, with each species evolving into the next (chronospecies). Though Mayr later changed his opinion on the australopithecines (recognizing Australopithecus), his view of archaic human diversity became widely adopted in the subsequent decades. Though H. erectus is still maintained as a highly variable, widespread, and long-lasting species, it is much debated whether or not sinking all Middle Pleistocene remains into it is justifiable. Mayr's lumping of H. heidelbergensis into H. erectus was first opposed by American anthropologist Francis Clark Howell in 1960.
In 1974, British physical anthropologist Chris Stringer pointed out similarities of the Kabwe 1 and the Greek Petralona skulls to the skulls of modern humans (H. sapiens or H. s. sapiens) and Neanderthals (H. neanderthalensis or H. s. neanderthalensis). Stringer therefore assigned them to Homo sapiens sensu lato ("in the broad sense"), as ancestral to modern humans and Neanderthals. In 1979, Stringer and Finnish anthropologist Björn Kurtén found that the Kabwe and Petralona skulls are associated with the Cromerian industry, like the Mauer mandible, and thus postulated that these three populations might be allied with each other. Though these fossils are poorly preserved and do not provide many comparable possible diagnostic traits (and likewise it was difficult at the time to properly define a unique species), they argued that at least these Middle Pleistocene specimens should be allocated to H. (s.?) heidelbergensis or "H. (s.?) rhodesiensis" (depending on, respectively, the inclusion or exclusion of the Mauer mandible) to formally recognize their similarity. Further work, most influentially by Stringer, palaeoanthropologist Ian Tattersall, and human evolutionary biologist Phillip Rightmire, reported further differences between Middle Pleistocene Afro-European specimens and H. erectus sensu stricto ("in the strict sense", in this case, specimens from East Asia). Consequently, Afro-European remains from 600 to 300 thousand years ago—most notably from Kabwe, Petralona, Bodo and Arago—are often classified as H. heidelbergensis. In 2010, American physical anthropologist Jeffrey H. Schwartz and Tattersall suggested classifying all Middle Pleistocene European as well as Asian specimens—namely from Dali and Jinniushan in China—as H. heidelbergensis. This model is not universally accepted.
After the 2010 identification of the genetic code of a distinct archaic human lineage in Siberia, termed "Denisovans" pending diagnostic fossil finds, it has been postulated that the Asian remains could represent that same species. Thus, Middle Pleistocene Asian specimens, such as Dali Man or the Indian Narmada Man, remain enigmatic. The paleontology institute at Heidelberg University, where the Mauer mandible has been kept since 1908, changed the label from H. e. heidelbergensis to H. heidelbergensis in 2015. In 1976 at Sima de los Huesos (SH) in the Sierra de Atapuerca, Spain, Spanish paleontologists Emiliano Aguirre, José María Basabe and Trinidad Torres began to excavate archaic human remains. Their investigation of the site was prompted by the finding of several bear remains (Ursus deningeri) since the early 20th century by amateur cavers (who consequently destroyed some of the human remains in that section). By 1990, about 600 human remains were reported, and by 2004 the number had increased to roughly 4,000. These represent at least 28 individuals, of which possibly only one is a child, the rest being teenagers and young adults. The fossil assemblage is exceptionally complete, with whole corpses buried rapidly and all bodily elements represented. In 1997, Spanish palaeoanthropologist Juan Luis Arsuaga assigned these to H. heidelbergensis, but in 2014 he retracted this, stating that Neanderthal-like features present in the Mauer mandible are missing in the SH humans.

Classification

In palaeoanthropology, the Middle Pleistocene is often termed the "muddle in the middle" because the species-level classification of archaic human remains from this time period has been heavily debated. The ancestors of modern humans (Homo sapiens or H. s. sapiens) and Neanderthals (H. neanderthalensis or H. s. neanderthalensis) diverged during this time period, and until the late 2010s H.
heidelbergensis was considered the most likely last common ancestor (LCA), but this view is no longer generally accepted. It is much debated whether the name H. heidelbergensis can be extended to Middle Pleistocene humans across the Old World, or whether it is better to restrict it to just Europe. In the latter case, Middle Pleistocene African remains can be split off into "H. rhodesiensis". Under that scheme, "H. rhodesiensis" can be seen either as the direct ancestor of modern humans, or as the ancestor of "H. helmei", which evolved into modern humans. In 2021, Canadian anthropologist Mirjana Roksandic and colleagues recommended the complete dissolution of H. heidelbergensis and "H. rhodesiensis", as the name rhodesiensis honours English diamond magnate Cecil Rhodes, who disenfranchised the black population in southern Africa. They classified all European H. heidelbergensis as H. neanderthalensis, and synonymised H. rhodesiensis with a new species they named "H. bodoensis", which includes all African specimens and potentially some from the Levant and the Balkans which have no Neanderthal-derived traits (namely Ceprano, Mala Balanica, HaZore'a and Nadaouiyeh Aïn Askar). "H. bodoensis" is supposed to represent the immediate ancestor of modern humans, but does not include the LCA of modern humans and Neanderthals. They suggested the confusing morphology of the Middle Pleistocene was caused by periodic "H. bodoensis" migration events into Europe following population collapses after glacial cycles, interbreeding with surviving indigenous populations. Their taxonomic recommendations were rejected by Stringer and others, as they failed to explain how exactly their proposals would resolve anything, in addition to violating nomenclatural rules.

Evolution

H. heidelbergensis is thought to have descended from African H. erectus — sometimes classified as Homo ergaster — during the first early expansions of hominins out of Africa beginning roughly 2 million years ago.
Populations that dispersed into Europe or remained in Africa evolved into H. heidelbergensis (or, in some schemes, into H. heidelbergensis in Europe and "H. rhodesiensis" in Africa), while those that dispersed across East Asia evolved into H. erectus s. s. The exact derivation from an ancestor species is obfuscated by a long gap in the human fossil record near the end of the Early Pleistocene. In 2016, Antonio Profico and colleagues suggested that 875,000-year-old skull materials from the Gombore II site of the Melka Kunture Formation, Ethiopia, represent a transitional morph between H. ergaster and H. heidelbergensis, and thus postulated that H. heidelbergensis originated in Africa instead of Europe. According to genetic analysis, the LCA of modern humans and Neanderthals split into a modern human line and a Neanderthal/Denisovan line, and the latter later split into Neanderthals and Denisovans. According to nuclear DNA analysis, the 430,000-year-old SH humans are more closely related to Neanderthals than to Denisovans (meaning the Neanderthal/Denisovan split, and thus the modern human/Neanderthal split, had already occurred), suggesting the modern human/Neanderthal LCA had existed long before many European specimens typically assigned to H. heidelbergensis, such as the Arago and Petralona materials. In 1997, Spanish archaeologist José María Bermúdez de Castro, Arsuaga, and colleagues described the roughly million-year-old H. antecessor from Gran Dolina, Sierra de Atapuerca, and suggested this species, rather than H. heidelbergensis, as the LCA of modern humans and Neanderthals, with H. heidelbergensis descending from it and being a strictly European species ancestral to only Neanderthals. They later recanted. In 2020, Dutch molecular palaeoanthropologist Frido Welker and colleagues analysed ancient proteins collected from an H. antecessor tooth and found that it was a member of a sister lineage to the LCA rather than being the LCA itself (that is, H.
heidelbergensis did not derive from H. antecessor). Human dispersal beyond 45°N seems to have been quite limited during the Lower Palaeolithic, with evidence of short-lived dispersals northward beginning after a million years ago. Beginning 700,000 years ago, more permanent populations seem to have persisted across the line, coinciding with the spread of hand axe technology across Europe, possibly associated with the dispersal of H. heidelbergensis and behavioural shifts to cope with the cold climate. Such occupation becomes much more frequent after 500,000 years ago. In 2023, a genomics analysis of over 3,000 living individuals indicated that Homo sapiens' ancestral population was reduced to fewer than 1,300 individuals between 800,000 and 900,000 years ago. Giorgio Manzi, an anthropologist at Sapienza University of Rome, suggested that this bottleneck could have triggered the evolution of Homo heidelbergensis.

Anatomy

Skull

In comparison to Early Pleistocene H. erectus/ergaster, Middle Pleistocene humans have a much more modern human-like face. The nasal opening is set completely vertically in the skull, and the anterior nasal sill can be crested or sometimes form a prominent spine. The incisive canals (on the roof of the mouth) open near the teeth, and are orientated like those of more recent human species. The frontal bone is broad, the parietal bone can be expanded, and the squamous part of the temporal bone is high and arched, which could all be related to increasing brain size. The sphenoid bone features a spine extending downwards, and the articular tubercle on the underside of the skull can jut out prominently, as the surface behind the jaw hinge is otherwise quite flat. In 2004, Rightmire estimated the brain volumes of ten Middle Pleistocene humans variously attributable to H. heidelbergensis—from Kabwe, Bodo, Ndutu, Dali, Jinniushan, Petralona, Steinheim, Arago, and two from SH.
This set gives an average volume of about 1,206 cc, ranging from 1,100 to 1,390 cc. He also averaged the brain volumes of 30 H. erectus/ergaster specimens, spanning nearly 1.5 million years from across East Asia and Africa, as 973 cc, and thus concluded there had been a significant jump in brain size, though he conceded brain size was extremely variable, ranging from 727 to 1,231 cc depending on the time period, geographic region, and even between individuals within the same population (the last probably due to notable sexual dimorphism, with males much bigger than females). In comparison, for modern humans, brain size averages 1,270 cc for males and 1,130 cc for females; and for Neanderthals, 1,600 cc for males and 1,300 cc for females. In 2009, palaeontologists Aurélien Mounier, François Marchal and Silvana Condemi published the first differential diagnosis of H. heidelbergensis using the Mauer mandible, as well as material from Tighennif, Algeria; SH, Spain; Arago, France; and Montmaurin, France. They listed the diagnostic traits as: a reduced chin, a notch in the submental space (near the throat), parallel upper and lower boundaries of the mandible in side-view, several mental foramina (small holes for blood vessels) near the cheek teeth, a horizontal retromolar space (a gap behind the molars), a gutter between the molars and the ramus (which juts up to connect with the skull), an overall long jaw, a deep fossa (a depression) for the masseter muscle (which closes the jaw), a small gonial angle (the angle between the body of the mandible and the ramus), an extensive planum alveolare (the distance from the frontmost tooth socket to the back of the jaw), a developed planum triangulare (near the jaw hinge), and a mylohyoid line originating at the level of the third molar.

Size

Trends in body size through the Middle Pleistocene are obscured due to a general lack of limb bones and non-skull (post-cranial) remains.
Based on the lengths of various long bones, the SH humans averaged roughly for males and for females, with maximums of respectively and . The height of a female partial skeleton from Jinniushan is estimated to have been quite tall at roughly in life, much taller than the SH females. A tibia from Kabwe is typically estimated to have been , among the tallest Middle Pleistocene specimens, but it is possible this individual was either unusually large or had a much longer tibia to femur ratio than expected. If these specimens are representative of their respective continents, they would suggest that above-medium to tall people were prevalent throughout the Middle Pleistocene Old World. If this is the case, then most populations of any archaic human species would have generally averaged to in height. Early modern humans were notably taller, with the Skhul and Qafzeh remains averaging for males and for females, an average of , possibly to increase the energy-efficiency of long-distance travel with longer legs. A conspicuously massive proximal (upper half) femur was recovered from Berg Aukas Mine, Namibia, about east of Grootfontein. It was originally estimated to have been from a male as much as in life, but its exorbitant size is now proposed to be the consequence of an extraordinarily vigorous early-life activity level while an otherwise ordinary person was maturing. If so, the individual from the Berg Aukas Mine would probably have had proportions similar to Kabwe 1.

Build

The human body plan had evolved in H. ergaster, and characterises all later Homo species, but among the more derived members there are two distinct morphs: a narrow-chested and gracile build like modern humans, and a broader-chested and robust build like Neanderthals. It was once assumed that the Neanderthal build was unique to Neanderthals based on the gracile H.
ergaster partial skeleton "KNM WT-15000" ("Turkana Boy"), but the discovery of some Middle Pleistocene skeletal elements (though generally fragmentary and few and far between) seems to suggest Middle Pleistocene humans overall featured a more Neanderthal-like morph. Thus, the modern human morph may be unique to modern humans, evolving quite recently. This is most clearly demonstrated in the exceptionally well-preserved SH assemblage. Based on skull robustness, it was assumed Middle Pleistocene humans featured a high degree of sexual dimorphism, but the SH humans demonstrate a modern humanlike level. The SH humans and other Middle Pleistocene Homo have a more basal pelvis and femur (more similar to earlier Homo than to Neanderthals). The overall broad and elliptical pelvis is broader, taller and thicker (expanded anteroposteriorly) than those of Neanderthals or modern humans, and retains an anteriorly located acetabulocristal buttress (which supports the iliac crests during hip abduction), a well defined supraacetabular groove (between the hip socket and the ilium), and a thin and rectangular superior pubic ramus (as opposed to the thick, stout one in modern humans). The foot of all archaic humans has a taller trochlea of the ankle bone, making the ankle more flexible (specifically in dorsiflexion and plantarflexion).

Pathology

On the left side of its face, an SH skull (Skull 5) presents the oldest-known case of orbital cellulitis (an eye infection which developed from an abscess in the mouth). This probably caused sepsis, killing the individual. Joint degeneration on a male SH pelvis (Pelvis 1) suggests its owner lived for more than 45 years, making him one of the oldest examples of this demographic in the human fossil record. The frequency of 45-plus individuals gradually increases with time, but has overall remained quite low throughout the Palaeolithic.
He similarly had the age-related maladies lumbar kyphosis (excessive curving of the lumbar vertebrae of the lower back), L5–S1 spondylolisthesis (misalignment of the last lumbar vertebra with the first sacral vertebra), and Baastrup disease on L4 and L5 (enlargement of the spinous processes). These would have produced lower back pain, significantly limiting movement, and may be evidence of group care. An adolescent SH skull (Cranium 14) was diagnosed with lambdoid single suture craniosynostosis (premature closing of the left lambdoid suture, leading to skull deformities as development continued). This is a rare condition, occurring in fewer than 6 out of every 200,000 individuals in modern humans. The individual died around the age of 10, suggesting it was not abandoned due to its deformity, as has been done in historical times, and received the same quality of care as any other child. Enamel hypoplasia on the teeth is used to determine bouts of nutritional stress. At a rate of 40% for the SH humans, this is significantly higher than exhibited in the earlier South African hominin Paranthropus robustus at Swartkrans (30.6%) or Sterkfontein (12.1%). Nonetheless, Neanderthals suffered even higher rates and more intense bouts of hypoplasia, but it is unclear if this is because Neanderthals were less capable of exploiting natural resources, or because they lived in harsher environments. A peak at 3½ years of age may be correlated with weaning age. In Neanderthals this peak was at 4 years, and many modern hunter-gatherers also wean at about 4 years of age.

Culture

Food

Middle Pleistocene communities in general seem to have eaten big game at a higher frequency than their predecessors, with meat becoming an essential dietary component.
In Europe, Homo heidelbergensis is known to have consumed the largest megafauna species present in the region, the straight-tusked elephant (which has been found at numerous sites with cut marks and/or stone tools indicating butchery) and rhinoceroses belonging to the genus Stephanorhinus. At the Schöningen spear horizon in Germany, there is extensive evidence for the butchery of horses. At the Boxgrove site in England, there is evidence for the butchery of roe deer, horse and rhinoceros. The inhabitants of Terra Amata in France seem to have been mainly eating deer, but also elephants, boar, ibex, rhino and aurochs. African sites commonly yield bovine and horse bones. Though carcasses may have simply been scavenged, some Afro-European sites show specific targeting of a single species, which more likely indicates active hunting; for example: Olorgesailie, Kenya, which has yielded some 50 to 60 individual baboons (Theropithecus oswaldi); and Torralba and Ambrona in Spain, which have an abundance of elephant bones (though also rhino and large hoofed mammals). The increase in meat subsistence could indicate the development of group hunting strategies in the Middle Pleistocene. For instance, at Torralba and Ambrona, the animals may have been run into swamplands before being killed, entailing encircling and driving by a large group of hunters in a coordinated and organised attack. Exploitation of aquatic environments is generally quite lacking, despite some sites being in close proximity to the ocean, lakes or rivers. Plants were probably also frequently consumed, including seasonally available ones, but the extent of their exploitation is unclear as they do not fossilise as well as animal bones. At the Schöningen site in Germany, it is estimated that over 200 plant species in the vicinity were either edible raw or when cooked, though relatively few have actually been found at the site itself.
Art

Upper Palaeolithic modern humans are well known for having etched engravings seemingly with symbolic value. As of 2018, only 27 Middle and Lower Palaeolithic objects have been postulated to have symbolic etching, out of which some have been refuted as having been caused by natural or otherwise non-symbolic phenomena (such as the fossilisation or excavation processes). The Lower Palaeolithic ones are: a 400,000- to 350,000-year-old bone from Bilzingsleben, Germany; three 380,000-year-old pebbles from Terra Amata; a 250,000-year-old pebble from Markkleeberg, Germany; 18 roughly 200,000-year-old pebbles from Lazaret (near Terra Amata); a roughly 200,000-year-old lithic from Grotte de l'Observatoire, Monaco; and a 200,000- to 130,000-year-old pebble from Baume Bonne, France. In the mid-19th century, French archaeologist Jacques Boucher de Crèvecœur de Perthes began excavation at St. Acheul, Amiens, France (the area where the Acheulian was defined), and, in addition to hand axes, reported perforated sponge fossils (Porosphaera globularis) which he considered to have been decorative beads. This claim was completely ignored. In 1894, English archaeologist Worthington George Smith discovered 200 similar perforated fossils in Bedfordshire, England, and also speculated that their function was beads, though he made no reference to Boucher de Perthes' find, possibly because he was unaware of it. In 2005, Robert Bednarik reexamined the material, and concluded that—because all the Bedfordshire P. globularis fossils are sub-spherical and range in diameter, despite this species having a highly variable shape—they were deliberately chosen. They appear to have been bored through completely or almost completely by some parasitic creature (i.e., through natural processes), and were then percussed on what would have been the more closed-off end to fully open the hole.
He also found wear facets which he speculated had resulted from clacking against other beads when they were strung together and worn as a necklace. In 2009, Solange Rigaud, Francisco d'Errico and colleagues noticed that the modified areas are lighter in colour than the unmodified ones, suggesting the modifications were made much more recently, such as during excavation. They were also unconvinced that the fossils could be confidently associated with the Acheulian artefacts from the sites, and suggested that—as an alternative to archaic human activity—the apparent size-selection could have been caused by either natural geological processes or 19th-century collectors favouring this specific form. Early modern humans and late Neanderthals (the latter especially after 60,000 years ago) made wide use of red ochre for presumably symbolic purposes, as it produces a blood-like colour, though ochre can also have a functional medicinal application. Beyond these two species, ochre usage is recorded at Olduvai Gorge, Tanzania, where two red ochre lumps have been found; Ambrona, where an ochre slab was trimmed down into a specific shape; and Terra Amata, where 75 ochre pieces were heated to achieve a wide colour range from yellow to red-brown to red. These may exemplify early and isolated instances of colour preference and colour categorisation, and such practices may not have been normalised yet. In 2006, Eudald Carbonell and Marina Mosquera suggested the Sima de los Huesos (SH) hominins were buried by people rather than being the victims of some catastrophic event such as a cave-in, because young children and infants are absent, which would be unexpected if this were a single and complete family unit. The SH humans are conspicuously associated with only a single stone tool, a carefully crafted hand axe made of high-quality quartzite (rarely used in the region), and so Carbonell and Mosquera postulated this was purposefully and symbolically placed with the bodies as some kind of grave good.
Supposed evidence of symbolic graves would not surface for another 300,000 years.
Technology
Stone tools
The Lower Palaeolithic (Early Stone Age) comprises the Oldowan, which was replaced by the Acheulian, characterised by the production of mostly symmetrical hand axes. The Acheulian has a timespan of about a million years, and such technological stagnation has typically been ascribed to comparatively limited cognitive abilities which significantly reduced innovative capacity, such as a deficit in cognitive fluidity, working memory, or a social system compatible with apprenticeship. Nonetheless, the Acheulian does seem to subtly change over time, and is typically split into an Early Acheulian and a Late Acheulian, the latter becoming especially popular after 600 to 500 thousand years ago. Late Acheulian technology never crossed east of the Movius Line into East Asia, which is generally believed to be due to either some major deficit in cultural transmission (namely smaller population size in the East) or simply preservation bias, as far fewer stone tool assemblages are found east of the line. The transition is indicated by the production of smaller, thinner, and more symmetrical hand axes (though thicker, less refined ones were still produced). At the 500,000-year-old Boxgrove site in England—an exceptionally well-preserved site with an abundance of tool remains—thinning may have been produced by striking the hand axe near-perpendicularly with a soft hammer, possibly with the invention of prepared platforms for toolmaking. The Boxgrove knappers also left behind large lithic flakes left over from making hand axes, possibly with the intention of recycling them into other tools later. Late Acheulian sites elsewhere pre-prepared lithic cores ("Large Flake Blanks", LFB) in a variety of ways before shaping them into tools, making prepared platforms unnecessary. 
LFB Acheulian spread out of Africa into West and South Asia before a million years ago and is present in Southern Europe after 600,000 years ago, but knappers in northern Europe (and the Levant after 700,000 years ago) used soft hammers, as they mainly worked small, thick flint nodules. The first prepared platforms in Africa come from the 450,000-year-old Fauresmith industry, transitional between the Early Stone Age (Acheulian) and the Middle Stone Age. With either method, knappers (tool makers) would have had to produce some item indirectly related to creating the desired product (hierarchical organisation), which could represent a major cognitive development. Experiments with modern humans have shown that platform preparation cannot be learned through purely observational learning, unlike earlier techniques, and could be indicative of well-developed teaching methods as well as self-regulated learning. At Boxgrove, the knappers used not only stone but also bone and antler to make hammers, and the use of such a wide range of raw materials could speak to advanced planning capabilities, as gathering and working materials for stoneworking requires a much different skillset than for boneworking. The Kapthurin Formation, Kenya, has yielded the oldest evidence of blade and bladelet technology, dating to 545 to 509 thousand years ago. This technology is rare even in the Middle Palaeolithic, and is typically associated with Upper Palaeolithic modern humans. It is unclear if this is part of a long blade-making tradition, or if blade technology was lost and reinvented several times by multiple different human species.
Fire and construction
Despite apparent pushes into colder climates, evidence of fire is scarce in the archaeological record until 400 to 300 thousand years ago. Though it is possible fire remnants simply degraded, long and overall undisturbed occupation sequences such as at Arago or Gran Dolina conspicuously lack convincing evidence of fire usage. 
This pattern could indicate the invention of ignition technology or improved fire maintenance techniques at this time, and that fire was not an integral part of people's lives before then in Europe. In Africa, on the other hand, humans may have been able to frequently scavenge fire as early as 1.6 million years ago from natural wildfires, which occur much more often in Africa, thus possibly (more or less) regularly using fire. The oldest established continuous fire site beyond Africa is the 780,000-year-old Gesher Benot Ya'aqov, Israel. In Europe, evidence of constructed dwelling structures—classified as firm surface huts with solid foundations built in areas mostly sheltered from the weather—has been recorded since the Cromerian Interglacial, the earliest example being a 700,000-year-old stone foundation from Přezletice, Czech Republic. This dwelling probably featured a vaulted roof made of thick branches or thin poles, supported by a foundation of big rocks and earth. Other such dwellings have been postulated to have existed during or following the Holstein Interglacial (which began 424,000 years ago) in Bilzingsleben, Germany; Terra Amata, France; and Fermanville and Saint-Germain-des-Vaux in Normandy. These were probably occupied during the winter, and, averaging only in area, they were probably only used for sleeping in, while other activities (including firekeeping) seem to have been done outside. Less-permanent tent technology may have been present in Europe in the Lower Palaeolithic.
Spears
The appearance of repeated fire usage—earliest in Europe from Beeches Pit, England, and Schöningen, Germany—roughly coincides with hafting technology (attaching stone points to spears), best exemplified by the Schöningen spears. These nine wooden spears and spear fragments—in addition to a lance and a double-pointed stick—date to 300,000 years ago and were preserved along a lakeside. 
The spears vary from in diameter, and may have been long, overall similar to present-day competitive javelins. The spears were made of soft spruce wood, except for spear 4, which was (also soft) pine wood. This contrasts with the Clacton spearhead from Clacton-on-Sea, England, perhaps roughly 100,000 years older, which was made of hard yew wood. The Schöningen spears may have had a range of up to , though they would have been more effective at short range within about , making them effective distance weapons against either prey or predators. Besides these two localities, the only other site which provides solid evidence of European spear technology is the 120,000-year-old Lehringen site, district of Verden, in Lower Saxony, Germany, where a yew spear was apparently lodged in an elephant. In Africa, 500,000-year-old points from Kathu Pan 1, South Africa, may have been hafted onto spears. As indirect evidence, a horse scapula from the 500,000-year-old Boxgrove site shows a puncture wound consistent with a spear wound. Evidence of hafting (in both Europe and Africa) becomes much more common after 300,000 years ago.
Language
The SH humans had a modern humanlike hyoid bone (which supports the tongue), and middle ear bones capable of finely distinguishing frequencies within the range of normal human speech. Judging by dental striations, they seem to have been predominantly right-handed, and handedness is related to the lateralisation of brain function, typically associated with language processing in modern humans. It is therefore postulated that this population spoke with some early form of language. Nonetheless, these traits do not absolutely prove the existence of language and humanlike speech, and its presence so early in time has been opposed, despite such anatomical arguments, primarily by cognitive scientist Philip Lieberman.
Biology and health sciences
Homo
Biology
442742
https://en.wikipedia.org/wiki/Pterygota
Pterygota
Pterygota is a subclass of insects that includes all winged insects as well as groups that secondarily lost their wings. The Pterygota comprise 99.9% of all insects. The orders not included are the Archaeognatha (jumping bristletails) and the Zygentoma (silverfishes and firebrats), two primitively wingless insect orders. Unlike Archaeognatha and Zygentoma, the pterygotes do not have styli or vesicles on their abdomen (also absent in some zygentomans), and, with the exception of the majority of mayflies, are also missing the median terminal filament which is present in the ancestrally wingless insects. The oldest known representatives of the group appeared during the mid-Carboniferous, around 328–324 million years ago, and the group subsequently underwent rapid diversification. Claims, based on molecular clock estimates, that they originated substantially earlier during the Silurian or Devonian are unlikely given the fossil record, and are probably analytical artefacts.
Systematics
Traditionally, this group was divided into the infraclasses Paleoptera and Neoptera. The former are nowadays strongly suspected of being paraphyletic, and better treatments (such as dividing or dissolving the group) are presently being discussed. In addition, it is not clear how exactly the neopterans are related to each other. The Exopterygota might be a similar assemblage of rather ancient hemimetabolous insects among the Neoptera, much as the Palaeoptera are among insects as a whole. The holometabolous Endopterygota do seem to be very close relatives, but nonetheless appear to contain several clades of related orders, the status of which is not agreed upon. The following scheme uses finer divisions than the one above, which is not well suited to correctly accommodating the fossil groups. 
Infraclass Palaeoptera (probably paraphyletic)
 Ephemeroptera (mayflies)
 Palaeodictyoptera † (extinct)
 Megasecoptera † (extinct)
 Archodonata † (extinct)
 Diaphanopterodea † (extinct)
 Protodonata or Meganisoptera † (extinct; sometimes included in Odonata)
 Protanisoptera † (extinct; sometimes included in Odonata)
 Triadophlebioptera † (extinct; sometimes included in Odonata)
 Protozygoptera or Archizygoptera † (extinct; sometimes included in Odonata)
 Odonata (dragonflies and damselflies)
Infraclass Neoptera
 Superorder Exopterygota
  Caloneurodea † (extinct)
  Titanoptera † (extinct)
  Protorthoptera † (extinct)
  Plecoptera (stoneflies)
  Embioptera (webspinners)
  Zoraptera (angel insects)
  Dermaptera (earwigs)
  Orthoptera (grasshoppers, etc.)
  Phasmatodea (stick insects – tentatively placed here)
  Grylloblattodea (ice-crawlers – tentatively placed here)
  Mantophasmatodea (gladiators – tentatively placed here)
 Proposed superorder Dictyoptera
  Blattodea (cockroaches and termites)
  Mantodea (mantises)
  Alienoptera † (extinct)
 Proposed superorder Paraneoptera
  Psocoptera (booklice, barklice)
  Thysanoptera (thrips)
  Phthiraptera (lice)
  Hemiptera (true bugs)
 Superorder Endopterygota
  Hymenoptera (ants, bees, etc.)
  Coleoptera (beetles)
  Strepsiptera (twisted-winged parasites)
  Raphidioptera (snakeflies)
  Megaloptera (alderflies, etc.)
  Neuroptera (net-veined insects)
  Proposed superorder Mecopteroidea/Antliophora
   Mecoptera (scorpionflies, etc.)
   Siphonaptera (fleas)
   Diptera (true flies)
   Protodiptera † (extinct)
  Proposed superorder Amphiesmenoptera
   Trichoptera (caddisflies)
   Lepidoptera (butterflies, moths)
 Neoptera orders incertae sedis
  Glosselytrodea † (extinct)
  Miomoptera † (extinct)
Biology and health sciences
Insects and other hexapods
null
442831
https://en.wikipedia.org/wiki/Stomiiformes
Stomiiformes
Stomiiformes is an order of deep-sea ray-finned fishes of very diverse morphology. It includes, for example, dragonfishes, lightfishes (Gonostomatidae and Phosichthyidae), loosejaws, marine hatchetfishes and viperfishes. The order contains 4 families (5 according to some authors) with more than 50 genera and at least 410 species. As usual for deep-sea fishes, there are few common names for species of the order, but the Stomiiformes as a whole are often called dragonfishes and allies, or simply stomiiforms. The scientific name means "Stomias-shaped", from Stomias (the type genus) + the standard fish order suffix "-formes". It ultimately derives from Ancient Greek stóma (στόμα, "mouth") + Latin forma ("external form"), the former in reference to the huge mouth opening of these fishes. The earliest stomiiform is Paravinciguerria from the Cenomanian of Morocco and Italy.
Description and ecology
Members of this order are mostly pelagic fishes living in deep oceanic waters. Their distribution around the world's oceans is very wide, ranging from subtropical and temperate waters up to subarctic or even Antarctic ones. The smallest species of this order is the bristlemouth Cyclothone pygmaea. Native to the Mediterranean Sea, it reaches just 1.5 cm (0.6 in) as an adult. The largest species is the barbeled dragonfish Opostomias micripnus, widely found in the Atlantic, Indian and Pacific Oceans and measuring about in adult length. These fish have a highly unusual and often almost nightmarish appearance. They all have teeth on the premaxilla and maxilla. Their maxillary ligaments, as well as some muscles and certain bones in the branchial cavity, are specialized in a distinctive way. Most have large mouths extending back past the eyes. Some also have a chin barbel. The dorsal and/or pectoral fins are missing in some, but others have an adipose fin. The pelvic fin has 4–9 rays, and stomiiforms possess 5–24 branchiostegal rays. 
Their scales are cycloid, delicate and easily sloughed off; some species are scaleless. The coloration is typically dark brown or black; a few (mostly Gonostomatoidei) are silver, and photophores (light-producing organs) are common in this order. The teeth of stomiiforms are often transparent and non-reflective, so that prey are unlikely to see them in the light generated by bioluminescence. Research has revealed that the transparency of the teeth of Aristostomias scintillans is due to nanoscale structures composed of hydroxyapatite and collagen and a lack of dentin tubules; however, a study from a decade prior had shown that the teeth of Chauliodus sloani (which are also transparent) do have dentin tubules. The reason for this difference in the presence of dentin tubules between two species of the same family (Stomiidae) has yet to be addressed.
Bioluminescence
As common for deep-sea creatures, all members of Stomiiformes (except one) have photophores, whose structure is characteristic of the order. The light emitted can be more or less strong, and its color can be light yellow, white, violet or red. The light coming from these fish is generally invisible to their prey. The lighting mechanism can be very simple – consisting of small gleaming points on the fish body – or very elaborate, involving lenses and refractors. The most common arrangement is one or two rows of photophores on the ventral aspect of the body. The rows run from the head down to the tip of the tail. Photophores are also present in the chin barbels of the family Stomiidae. The light produced in these glandular organs is the product of an enzymatic reaction, the catalysis of coelenterazine by calcium ions.
Daily migration
During the day, Stomiiformes stay in deep waters. When the sun sets, most of them follow the dimming sunlight up to near-surface waters, which are richer in animal life such as small fishes and planktonic invertebrates. 
During the night, these Stomiiformes hunt and feed on such organisms, swimming back to deeper waters when the sun rises. They apparently are able to measure the intensity of the sunlight that reaches them, and thus move to stay always in the zone where light intensity is very low, though it is not entirely dark. This daily migration is well observed in quite a few species of stomiiforms. However, it is also performed by other fishes, while some larger Stomiiformes – among them the largest predators of the deep sea – stay in their habitat all the time and feed on smaller migrating fish that return from the surface.
Reproduction
Stomiiforms generally spawn in deep seas, but the eggs are light and float towards the ocean surface, where they hatch. When the larvae have completed their metamorphosis and look like adults, they descend to join the main population. Like many benthic fish species, certain members of the order – especially in the genera Cyclothone and Gonostoma – change their sex during their life. When they become sexually mature, they are males; later on, they transform into females.
Systematics
The Stomiiformes are often placed in the teleost superorder Stenopterygii, usually together with the Ateleopodiformes (jellynoses), but sometimes on their own. Whether it is indeed justified to accept such a small group is doubtful; it may well be that the closest living relatives of the "Stenopterygii" are found among the superorder Protacanthopterygii, and that the former would need to be merged into the latter. In some classifications, the "Stenopterygii" are kept separate but included with the Protacanthopterygii and the monotypic superorder Cyclosquamata in an unranked clade called Euteleostei. That would probably require splitting two additional monotypic superorders out of the Protacanthopterygii, and thus result in a profusion of very small taxa. The Stomiiformes have also been considered close relatives of the Aulopiformes. 
The latter are otherwise placed in a monotypic superorder "Cyclosquamata" but also appear to be quite close to the Protacanthopterygii. The relationships of these – and of the Lampriformes and Myctophiformes, which are also usually treated as monotypic superorders – to the taxa mentioned before are still not well resolved, and regardless of whether one calls this group Protacanthopterygii sensu lato or Euteleostei, the phylogeny of these moderately advanced Teleostei is in need of further study. The ancestral Stomiiformes probably had thin, brownish bodies, rows of egg-shaped photophores adorning the lower body parts, and mouths with numerous teeth. From these, two lineages evolved, probably some time during the Late Cretaceous. Among the modern Stomiiformes, the Gonostomatidae and Phosichthyidae are phenetically very similar, but this is due to their being very plesiomorphic and retaining many traits of the original stomiiforms. Each of the two has characteristic synapomorphies with one of the more advanced stomiiform families – the Sternoptychidae and the Stomiidae, respectively. These two, in turn, are highly autapomorphic, and at a casual glance do not look as if they were as closely related to the other stomiiforms as they actually are. Thus, the classification of the suborders and families of the Stomiiformes is:
Suborder Gonostomatoidei
 Family Gonostomatidae – bristlemouths, anglemouths, "lightfishes" (including Diplophidae)
 Family Sternoptychidae – marine hatchetfishes, bottlelights, constellationfishes, pearlsides
Suborder Stomioidei
 Family Phosichthyidae – lightfishes
 Family Stomiidae – barbeled dragonfishes, loosejaws, stareaters
Timeline of genera
Biology and health sciences
Stomiiformes
Animals
442884
https://en.wikipedia.org/wiki/Barracuda
Barracuda
A barracuda is a large, predatory, ray-finned, saltwater fish of the genus Sphyraena, the only genus in the family Sphyraenidae, which was named by Constantine Samuel Rafinesque in 1815. It is found in tropical and subtropical oceans worldwide, from the eastern border of the Atlantic Ocean and the Red Sea to the Caribbean Sea on the Atlantic's western border, as well as in tropical areas of the Pacific Ocean. Barracudas reside near the top of the water and near coral reefs and sea grasses, and are often targeted by sport-fishing enthusiasts.
Etymology
The common name "barracuda" is derived from Spanish, the original word being of possibly Cariban origin.
Description
Barracudas are snake-like in appearance, with prominent, sharp-edged, fang-like teeth, much like piranhas, all of different sizes, set in sockets of their large jaws. They have large, pointed heads, with an underbite in many species. Their gill covers have no spines and are covered with small scales. Their two dorsal fins are widely separated, the anterior fin having five spines and the posterior fin having one spine and nine soft rays. The posterior dorsal fin is similar in size to the anal fin and is situated above it. The lateral line is prominent and extends straight from head to tail. The spinous dorsal fin is placed above the pelvic fins and is normally retracted in a groove. The caudal fin is moderately forked, with its posterior edge double-curved, and is set at the end of a stout peduncle. The pectoral fins are placed low on the sides. The swim bladder is large, allowing for minimal energy expenditure while cruising or remaining idle. In most cases, barracudas are dark gray, dark green, white, or blue on the upper body, with silvery sides and a chalky-white belly. Coloration varies somewhat between species. In some species, irregular black spots or a row of darker cross-bars occur on each side. Their fins may be yellowish or dusky. 
Barracudas live primarily in oceans, but certain species, such as the great barracuda, can live in brackish water. Barracudas are sometimes compared with the freshwater pike because of their similar appearance, though the major difference between the two is that the barracuda has two widely separated dorsal fins and a forked tail, unlike the freshwater pike. Some species grow quite large (up to 65 inches or 165 cm in length), such as Sphyraena sphyraena, found in the Mediterranean Sea and eastern Atlantic, and Sphyraena picudilla, ranging on the Atlantic coast of tropical America from North Carolina to Brazil and reaching Bermuda. Other barracuda species are found around the world; examples are Sphyraena argentea, found from Puget Sound southwards to Cabo San Lucas, and Sphyraena jello, from the seas of India and the Malay Peninsula and Archipelago.
Species
The barracuda genus Sphyraena contains 29 species:
Sphyraena acutipinnis F. Day, 1876 (Sharpfin barracuda)
Sphyraena afra W. K. H. Peters, 1844 (Guinean barracuda)
Sphyraena arabiansis E. M. Abdussamad, Ratheesh, Thangaraja, Bineesh & D. Prakashan, 2015 (Arabian barracuda)
Sphyraena argentea Girard, 1854 (Pacific barracuda)
Sphyraena barracuda (G. Edwards, 1771) (Great barracuda)
Sphyraena borealis DeKay, 1842 (Northern sennet)
Sphyraena chrysotaenia Klunzinger, 1884 (Yellowstripe barracuda)
Sphyraena ensis D. S. Jordan & C. H. Gilbert, 1882 (Mexican barracuda)
Sphyraena flavicauda Rüppell, 1838 (Yellowtail barracuda)
Sphyraena forsteri G. Cuvier, 1829 (Bigeye barracuda)
Sphyraena guachancho G. Cuvier, 1829 (Guachanche barracuda)
Sphyraena helleri O. T. Jenkins, 1901 (Heller's barracuda)
Sphyraena iburiensis Doiuchi & Nakabo, 2005
Sphyraena idiastes Heller & Snodgrass, 1903 (Pelican barracuda)
Sphyraena intermedia Pastore, 2009
Sphyraena japonica Bloch & J. G. Schneider, 1801 (Japanese barracuda)
Sphyraena jello G. Cuvier, 1829 (Pickhandle barracuda)
Sphyraena lucasana T. N. Gill, 1863 (Lucas barracuda)
Sphyraena novaehollandiae Günther, 1860 (Australian barracuda)
Sphyraena obtusata G. Cuvier, 1829 (Obtuse barracuda)
Sphyraena picudilla Poey, 1860 (Southern sennet)
Sphyraena pinguis Günther, 1874 (Red barracuda)
Sphyraena putnamae D. S. Jordan & Seale, 1905 (Sawtooth barracuda)
Sphyraena qenie Klunzinger, 1870 (Blackfin barracuda)
Sphyraena sphyraena (Linnaeus, 1758) (European barracuda)
Sphyraena tome Fowler, 1903
Sphyraena viridensis G. Cuvier, 1829 (Yellowmouth barracuda)
Sphyraena waitii W. Ogilby, 1908
The following fossil species are also known:
†"Sphyraena" amici Agassiz, 1843
†Sphyraena bognorensis Casier, 1966
†Sphyraena bolcensis Agassiz, 1844
†Sphyraena crassidens de Beaufort, 1926
†Sphyraena croatica Gorjanović-Kramberger, 1882
†Sphyraena cunhai da Silva Santos & Travassos, 1960
†Sphyraena egleri da Silva Santos & Travassos, 1960
†Sphyraena fajumensis (Dames, 1883)
†Sphyraena hansfuchsi (Schubert, 1906)
†Sphyraena intermedia Bassani, 1889
†Sphyraena kugleri Casier, 1966
†Sphyraena longimana Arambourg, 1966
†Sphyraena lugardi White, 1926
†Sphyraena major Leidy, 1855
†Sphyraena malembeensis Dartevelle & Casier, 1943
†Sphyraena pannonica Weiler, 1938
†Sphyraena senni Casier, 1966
†Sphyraena sternbergensis Winkler, 1875
†Sphyraena striata Casier, 1946
†Sphyraena substriata (Münster, 1846)
†Sphyraena suessi Gorjanović-Kramberger, 1882
†Sphyraena tsengi Tao, 1993
†Sphyraena tyrolensis von Meyer, 1863
†Sphyraena viannai Dartevelle & Casier, 1949
†"Sphyraena" viennensis Steindachner, 1859
†Sphyraena weberi Leriche, 1954
†Sphyraena winkleri Lawley, 1876
A related fossil genus, Parasphyraena, is known from the Miocene of Azerbaijan.
Behaviour and diet
Barracudas are ferocious, opportunistic predators, relying on surprise and short bursts of speed, up to , to overtake their prey. Adults of most species are more or less solitary, while young and half-grown fish frequently congregate. 
Barracudas prey primarily on fish, which may include some as large as themselves. Common prey fish include jacks, grunts, groupers, snappers, small tunas, mullets, killifishes, herrings, and anchovies, which barracudas often kill by simply biting them in half. They kill and consume larger prey by tearing chunks out of it, and also seem to opportunistically consume smaller prey that happens to be in front of them. Barracuda species are often seen competing against mackerel, needlefish and sometimes even dolphins for prey. Barracudas are usually found swimming in saltwater searching for schools of plankton-feeding fish. Their silver and elongated bodies make them difficult for prey to detect, especially when viewed head-on. Barracudas depend heavily on their eyesight when they are out hunting; they tend to notice anything with an unusual color, reflection, or movement. Once a barracuda targets an intended prey item, its long tail and matching anal and dorsal fins enable it to move with swift bursts of speed to attack its prey before it can escape. Barracudas generally attack schools of fish, speeding at them head first and biting at them with their jaws. As barracudas age, they tend to swim alone, though at times they remain in groups. When swimming in groups, they can drive schools of fish into compact areas or lead them into shallow water to feed on them more easily.
Interactions with humans
Some species of barracuda are reputed to be dangerous to swimmers. Barracudas are scavengers, and may mistake snorkelers for large predators, following them in the hope of eating the remains of their prey. Swimmers have reported being bitten by barracudas, but such incidents are rare and possibly caused by poor visibility. Large barracudas can be encountered in muddy shallows on rare occasions. Barracudas may mistake things that glint and shine, like jewelry, for prey. 
One incident reported a barracuda jumping out of the water and injuring a kayaker, but Jason Schratwieser, conservation director of the International Game Fish Association, said that the wound could have been caused by a houndfish. Fatalities are nevertheless rare; deaths have been reported in Florida in 1947, in North Carolina in 1957, and again in Florida in 1960.
As food
Barracudas are popular both as food and game fish. They are most often eaten as fillets or steaks. Larger species, such as the great barracuda, have been implicated in cases of ciguatera food poisoning; those diagnosed with this type of food poisoning display symptoms of gastrointestinal discomfort, limb weakness, and an inability to effectively differentiate hot from cold. West Africans smoke them for use in soups and sauces. Smoking protects the soft flesh from disintegrating in the broth and gives it a smoky flavour.
Biology and health sciences
Acanthomorpha
null
442916
https://en.wikipedia.org/wiki/Avian%20influenza
Avian influenza
Avian influenza, also known as avian flu or bird flu, is a disease caused by the influenza A virus, which primarily affects birds but can sometimes affect mammals, including humans. Wild aquatic birds are the primary host of the influenza A virus, which is enzootic (continually present) in many bird populations. Symptoms of avian influenza vary according to both the strain of virus underlying the infection and the species of bird or mammal affected. Classification of a virus strain as either low pathogenic avian influenza (LPAI) or highly pathogenic avian influenza (HPAI) is based on the severity of symptoms in domestic chickens and does not predict the severity of symptoms in other species. Chickens infected with LPAI display mild symptoms or are asymptomatic, whereas HPAI causes serious breathing difficulties, a significant drop in egg production, and sudden death. Domestic poultry may potentially be protected from specific strains of the virus by vaccination. Humans and other mammals can only become infected with avian influenza after prolonged close contact with infected birds. In mammals, including humans, infection with avian influenza (whether LPAI or HPAI) is rare. Symptoms of infection vary from mild to severe, including fever, diarrhea, and cough. Influenza A virus is shed in the saliva, mucus, and feces of infected birds; other infected animals may shed bird flu viruses in respiratory secretions and other body fluids (e.g., cow milk). The virus can spread rapidly through poultry flocks and among wild birds. A particularly virulent strain, influenza A virus subtype H5N1 (A/H5N1), has the potential to decimate domesticated poultry stocks, and an estimated half a billion farmed birds have been slaughtered in efforts to contain the virus. 
Highly pathogenic avian influenza
Because of the impact of avian influenza on economically important chicken farms, a classification system was devised in 1981 which divided avian virus strains into either highly pathogenic (and therefore potentially requiring vigorous control measures) or low pathogenic. The test for this is based solely on the effect on chickens – a virus strain is highly pathogenic avian influenza (HPAI) if 75% or more of chickens die after being deliberately infected with it. The alternative classification is low pathogenic avian influenza (LPAI). This classification system has since been modified to take into account the structure of the virus's haemagglutinin protein. Other species of birds, especially water birds, can become infected with HPAI virus without experiencing severe symptoms and can spread the infection over large distances; the exact symptoms depend on the species of bird and the strain of virus. Classification of an avian virus strain as HPAI or LPAI does not predict how serious the disease might be if it infects humans or other mammals. Since 2006, the World Organization for Animal Health requires all LPAI H5 and H7 detections to be reported because of their potential to mutate into highly pathogenic strains.
Virology
Avian influenza is caused by the influenza A virus, which principally affects birds but can also infect humans and other mammals. Influenza A is an RNA virus with a segmented, negative-sense genome that encodes 11 viral genes. The virus particle (also called the virion) is 80–120 nanometers in diameter and elliptical or filamentous in shape. There is evidence that the virus can survive for long periods in freshwater after being excreted in feces by its avian host, and can withstand prolonged freezing. There are two proteins on the surface of the viral envelope: hemagglutinin and neuraminidase. These are the major antigens of the virus against which neutralizing antibodies are produced. 
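The 1981 chicken-mortality test described above amounts to a simple threshold rule. A minimal Python sketch of that rule (illustrative only — `classify_pathogenicity` is a name invented here, and the modern haemagglutinin-based criteria also mentioned above are not modelled):

```python
def classify_pathogenicity(deaths: int, infected: int) -> str:
    """Classify a strain as HPAI or LPAI from the outcome of deliberately
    infecting chickens: 75% or more mortality means highly pathogenic.
    Sketch of the 1981 scheme only; the real classification has since been
    extended to examine the virus's haemagglutinin protein."""
    if infected <= 0:
        raise ValueError("need at least one infected chicken")
    mortality = deaths / infected
    return "HPAI" if mortality >= 0.75 else "LPAI"
```

For example, 8 deaths among 10 infected chickens (80% mortality) would classify as HPAI, while 2 of 10 would classify as LPAI; exactly 75% still counts as HPAI because the criterion is "75% or more".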
Influenza virus epidemics and epizootics are associated with changes in their antigenic structure. Hemagglutinin (H) is an antigenic glycoprotein which allows the virus to bind to and enter the host cell. Neuraminidase (N) is an antigenic glycosylated enzyme which facilitates the release of progeny viruses from infected cells. There are 18 known types of hemagglutinin, of which H1 through H16 have been found in birds, and 11 known types of neuraminidase.
Subtypes
Subtypes of influenza A are defined by the combination of H and N proteins in the viral envelope; for example, "H5N1" designates an influenza A subtype that has a type-5 hemagglutinin (H) protein and a type-1 neuraminidase (N) protein. The subtyping scheme only takes into account the two envelope proteins, not the other proteins coded by the virus's RNA. Almost all possible combinations of H (1 through 16) and N (1 through 11) have been isolated from wild birds. Further variations exist within the subtypes and can lead to very significant differences in the virus's ability to infect and cause disease.
Influenza virus nomenclature
To unambiguously describe a specific isolate of virus, researchers use the internationally accepted influenza virus nomenclature, which describes, among other things, the species of animal from which the virus was isolated, and the place and year of collection. 
As an example, A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1): A stands for the genus of influenza (A, B or C) chicken is the animal species the isolate was found in (note: human isolates lack this term and are thus identified as human isolates by default) Nakorn-Patom/Thailand is the place this specific virus was isolated CU-K2 is the laboratory reference number that distinguishes it from other influenza viruses isolated at the same place and year 04 represents the year of isolation, 2004 H5 stands for the fifth of several known types of the protein hemagglutinin N1 stands for the first of several known types of the protein neuraminidase. Other examples include: A/duck/Hong Kong/308/78(H5N3), A/avian/NY/01(H5N2), A/chicken/Mexico/31381-3/94(H5N2), and A/shoveler/Egypt/03(H5N2). Genetic characterization Analysis of the virus's genome enables researchers to determine the order of its nucleotides. Comparison of the genome of a virus with that of a different virus can reveal differences between the two viruses. Genetic variations are important because they can change the amino acids that make up the influenza virus's proteins, resulting in structural changes to the proteins and thereby altering properties of the virus. Some of these properties include the ability to evade immunity and the ability to cause severe disease. Genetic sequencing enables influenza strains to be further characterised by their clade or subclade, revealing links between different samples of virus and tracing the evolution of the virus over time. Species barrier Humans can become infected with avian flu if they are in close contact with infected birds. Symptoms vary from mild to severe (including death), but as of December 2024 there have been no observed instances of sustained human-to-human transmission. There are a number of factors that generally prevent avian flu from causing epidemics in humans or other mammals. 
One of them is that the HA protein of avian influenza binds to alpha-2,3 sialic acid receptors, which are present in the respiratory tract and intestines of avian species, while human influenza HA binds to alpha-2,6 sialic acid receptors, which are present in the human upper respiratory tract. Other factors include the ability to replicate the viral RNA genome within the host cell nucleus, to evade host immune responses, and to transmit between individuals. Influenza viruses are constantly changing as small genetic mutations accumulate, a process known as antigenic drift. Over time, mutation may lead to a change in antigenic properties such that host antibodies (acquired through vaccination or prior infection) do not provide effective protection, causing a fresh outbreak of disease. The segmented genome of influenza viruses facilitates genetic reassortment. This can occur if a host is infected simultaneously with two different strains of influenza virus; then it is possible for the viruses to interchange genetic material as they reproduce in the host cells. Thus, an avian influenza virus can acquire characteristics, such as the ability to infect humans, from a different virus strain. The presence of both alpha-2,3 and alpha-2,6 sialic acid receptors in pig tissues allows for co-infection by avian influenza and human influenza viruses. This susceptibility makes pigs a potential "melting pot" for the reassortment of influenza A viruses. Epidemiology History Avian influenza (historically known as fowl plague) is caused by bird-adapted strains of the influenza type A virus. The disease was first identified by Edoardo Perroncito in 1878 when it was differentiated from other diseases that caused high mortality rates in birds; in 1955 it was established that the fowl plague virus was closely related to human influenza. In 1972, it became evident that many subtypes of avian flu were endemic in wild bird populations. 
Between 1959 and 1995, there were 15 recorded outbreaks of highly pathogenic avian influenza (HPAI) in poultry, with losses varying from a few birds on a single farm to many millions. Between 1996 and 2008, HPAI outbreaks in poultry were recorded at least 11 times, and four of these resulted in the death or culling of millions of birds. Since then, several virus strains (both LPAI and HPAI) have become endemic among wild birds with increasingly frequent outbreaks among domestic poultry, especially of the H5 and H7 subtypes. Transmission and prevention Birds – Influenza A viruses of various subtypes have a large reservoir in wild waterbirds of the orders Anseriformes (for example, ducks, geese, and swans) and Charadriiformes (for example, gulls, terns, and waders) which can infect the respiratory and gastrointestinal tract without affecting the health of the host. They can then be carried by the bird over large distances, especially during annual migration. Infected birds can shed avian influenza A viruses in their saliva, nasal secretions, and feces; susceptible birds become infected through contact with the virus shed by infected birds. The virus can survive for long periods in water and at low temperatures, and can be spread from one farm to another on farm equipment. Domesticated birds (chickens, turkeys, ducks, etc.) may become infected with avian influenza A viruses through direct contact with infected waterfowl or other infected poultry, or through contact with contaminated feces or surfaces. Avian influenza outbreaks in domesticated birds are of concern for several reasons. There is potential for low pathogenic avian influenza viruses (LPAI) to evolve into strains which are highly pathogenic to poultry (HPAI), with subsequent potential for significant illness and death among poultry during outbreaks. 
Because of this, international regulations state that any detection of H5 or H7 subtypes (regardless of their pathogenicity) must be notified to the appropriate authority. It is also possible that avian influenza viruses could be transmitted to humans and other animals which have been exposed to infected birds, causing infection with unpredictable but sometimes fatal consequences. When an HPAI infection is detected in poultry, it is normal to cull infected animals and those nearby in an effort to rapidly contain, control and eradicate the disease. This is done together with movement restrictions, improved hygiene and biosecurity, and enhanced surveillance. Humans – Avian flu viruses, both HPAI and LPAI, can infect humans who are in close, unprotected contact with infected poultry. Incidents of cross-species transmission are rare, with symptoms ranging in severity from no symptoms or mild illness, to severe disease resulting in death. As of February 2024, there have been very few instances of human-to-human transmission, and each outbreak has been limited to a few people. All subtypes of avian influenza A have the potential to cross the species barrier, with H5N1 and H7N9 considered the biggest threats. In order to avoid infection, the general public are advised to avoid contact with sick birds or potentially contaminated material such as carcasses or feces. People working with birds, such as conservationists or poultry workers, are advised to wear appropriate personal protective equipment. Other animals – A wide range of other animals have been affected by avian flu, generally due to eating birds which had been infected. There have been instances where transmission of the disease between mammals, including seals and cows, may have occurred. Pandemic potential Influenza viruses have a relatively high mutation rate that is characteristic of RNA viruses. 
The segmentation of the influenza A virus genome facilitates genetic recombination by segment reassortment in hosts that are infected with two different strains of influenza virus at the same time. With reassortment between strains, an avian strain which does not affect humans may acquire characteristics from a different strain which enable it to infect and pass between humans – a zoonotic event. It is thought that all influenza A viruses causing outbreaks or pandemics among humans since the 1900s originated from strains circulating in wild aquatic birds through reassortment with other influenza strains. It is possible (though not certain) that pigs may act as an intermediate host for reassortment. As of June 2024, there is concern about two subtypes of avian influenza which are circulating in wild bird populations worldwide, H5N1 and H7N9. Both of these have the potential to devastate poultry stocks, and both have jumped to humans with relatively high case fatality rates. Surveillance The Global Influenza Surveillance and Response System (GISRS) is a global network of laboratories that monitors the spread of influenza with the aim of providing the World Health Organization with influenza control information and informing vaccine development. Several million specimens are tested by the GISRS network annually through a network of laboratories in 127 countries. As well as human viruses, GISRS monitors avian, swine, and other potentially zoonotic influenza viruses. Vaccine Poultry – It is possible to vaccinate poultry against specific strains of HPAI influenza. Vaccination should be combined with other control measures such as infection monitoring, early detection and biosecurity. Humans – Several "candidate vaccines" are available in case an avian virus acquires the ability to infect and transmit among humans. There are strategic stockpiles of vaccines against the H5N1 subtype, which is considered the biggest risk. 
A vaccine against the H7N9 subtype, which has also infected humans, has undergone a limited amount of testing. In the event of an outbreak, the "candidate" vaccine would be rapidly tested for safety as well as efficacy against the zoonotic strain, and then authorized and distributed to vaccine manufacturers. Zoonotic influenza vaccine Seqirus is authorized for use in the European Union. It is an H5N8 vaccine that is intended to provide acquired immunity against H5 subtype influenza A viruses. Influenza A virus subtype H5N1 The highly pathogenic influenza A virus subtype H5N1 is an emerging avian influenza virus that is causing global concern as a potential pandemic threat. It is often referred to simply as "bird flu" or "avian influenza", even though it is only one of many subtypes. A/H5N1 has killed millions of poultry in a growing number of countries throughout Asia, Europe, and Africa. Health experts are concerned that the coexistence of human flu viruses and avian flu viruses (especially H5N1) will provide an opportunity for genetic material to be exchanged between species-specific viruses, possibly creating a new virulent influenza strain that is easily transmissible and lethal to humans. Influenza A/H5N1 was first recorded in a small outbreak among poultry in Scotland in 1959, with numerous subsequent outbreaks on every continent. The first known transmission of A/H5N1 to a human occurred in Hong Kong in 1997, when there was an outbreak of 18 human cases resulting in six deaths. It was determined that all the infected people had been exposed to infected birds in poultry markets. As the disease continued to spread among poultry flocks in the territory, the decision was made to cull all 1.6 million poultry in the area and to impose strict controls on the movement and handling of poultry. This ended the outbreak. There is weak evidence to support limited human-to-human transmission of A/H5N1 in 139 outbreaks between 2005 and 2009 in Sumatra. 
The reproduction number was well below the threshold for sustained transmission. Influenza A virus subtype H7N9 A significant outbreak of influenza A virus subtype H7N9 (A/H7N9) started in March 2013 when severe influenza affected 18 humans in China; six subsequently died. It was discovered that a low pathogenic strain of A/H7N9 was circulating among chickens, and that all the affected people had been exposed in poultry markets. Further cases among humans and poultry in mainland China continued to be identified sporadically throughout the year, followed by a peak around the festival season of Chinese New Year (January and February) in early 2014, which was attributed to the seasonal surge in poultry production. By December 2013, there had been 139 cases with 47 deaths. Infections among humans and poultry continued during the next few years, again with peaks around the new year. In 2016 a virus strain emerged which was highly pathogenic to chickens. To contain the HPAI outbreak, the Chinese authorities in 2017 initiated a large-scale vaccination campaign against avian influenza in poultry. Since then, the number of outbreaks in poultry, as well as the number of human cases, has dropped significantly. In humans, symptoms and mortality for both LPAI and HPAI strains have been similar. Although no human H7N9 infections have been reported since February 2019, the virus is still circulating in poultry, particularly in laying hens. It has demonstrated antigenic drift to evade vaccines, and remains a potential threat to the poultry industry and public health. Genetic and evolutionary analyses have shown that the A(H7) viruses in the Chinese outbreak probably transferred from domestic duck to chicken populations in China and then reassorted with poultry influenza A(H9N2) to generate the influenza A(H7N9) strain that affected humans. The genetic characteristics of A(H7N9) virus are of concern because of their pandemic potential, e.g. 
their potential to recognise human and avian influenza virus receptors, which affects the ability to cause sustained human-to-human transmission, or the ability to replicate in the human host. Between February 2013 and February 2019, there were 1,568 confirmed human cases and 616 deaths associated with the outbreak in China. The majority of human cases have reported contact with poultry in markets or farms. Transmission between humans remains limited, with some evidence of small family clusters. However, there is no evidence of sustained human-to-human transmission of A/H7N9 influenza. During early 2017, outbreaks of avian influenza A(H7N9) occurred in poultry in the USA. The strain in these outbreaks was of North American origin and was unrelated to the Asian-lineage H7N9 associated with human infections in China. Domestic animals Several domestic species have been infected with and shown symptoms of H5N1 viral infection, including cats, dogs, ferrets, pigs, and birds. Poultry Attempts are made in the United States to minimize the presence of HPAI in poultry through routine surveillance of flocks in commercial poultry operations. Detection of an HPAI virus may result in immediate culling of the flock. Less pathogenic viruses are controlled by vaccination. Dairy cows During April 2024, avian influenza was first detected in dairy cows in several US states and subsequently spread more widely through the year. Influenza A(H5N1) was found to be present at high levels in the mammary glands and in the milk of affected cows. It was shown that the virus can persist on milking equipment, which provides a probable transmission route for cow-to-cow and cow-to-human spread. A number of humans who had been in contact with cows tested positive for the virus, with mild symptoms. According to CDC, 7% of 115 dairy workers had evidence of recent infection in a study from Michigan and Colorado from June to August 2024 – half of them asymptomatic. 
This is higher than estimates from prior transmission studies in poultry. All of the dairy workers had been involved in cleaning the milk parlor, and none had used personal protective equipment. Global aspects Global measures In 2005, the formation of the International Partnership on Avian and Pandemic Influenza was announced to elevate the importance of avian flu, coordinate efforts, and improve disease reporting and surveillance in order to better respond to future pandemics. New networks of laboratories have emerged to detect and respond to avian flu, such as the Crisis Management Center for Animal Health, the Global Avian Influenza Network for Surveillance, OFFLU, and the Global Early Warning System for major animal diseases. After the 2003 outbreak, WHO member states have also recognized the need for more transparent and equitable sharing of vaccines and other benefits from these networks. Cooperative measures created in response to HPAI have served as a basis for programs related to other emerging and re-emerging infectious diseases. Impact on national policies HPAI control has also been used for political ends. In Indonesia, negotiations with global response networks were used to recentralize power and funding to the Ministry of Health. In Vietnam, policymakers, with the support of the Food and Agriculture Organization of the United Nations (FAO), used HPAI control to accelerate the industrialization of livestock production for export by proposing to increase the proportion of large-scale commercial farms and to reduce the number of poultry keepers from 8 million to 2 million by 2010. Traditional Asian practices Backyard poultry production was viewed as a "traditional Asian" agricultural practice that contrasted with modern commercial poultry production and was seen as a threat to biosecurity. 
Backyard production appeared to hold greater risk than commercial production due to lack of biosecurity and close contact with humans, though HPAI spread in intensively raised flocks was greater due to high-density rearing and genetic homogeneity. Asian culture itself was blamed for the failure of certain interventions, such as those that relied solely on place-based measures rather than multifaceted solutions. Economic impact Approximately 20% of the protein consumed in developing countries comes from poultry. A report by FAO estimated economic losses caused by avian influenza in South East Asia up to 2005 at around US$10 billion. This had the greatest impact on small scale commercial and backyard producers. As poultry serves as a source of food security and liquid assets, the most vulnerable populations were poor, small scale farmers. The loss of birds due to HPAI and culling in Vietnam led to an average loss of 2.3 months of production and US$69–108 for households, many of which have an income of $2 a day or less. The loss of food security for vulnerable households can be seen in the stunting of children under five in Egypt. Women are another population at risk, as in most regions of the world small flocks are tended by women. Widespread culling also resulted in the decreased enrollment of girls in school in Turkey.
Biology and health sciences
Viral diseases
Health
442934
https://en.wikipedia.org/wiki/Onager%20%28weapon%29
Onager (weapon)
The onager was a Roman torsion-powered siege engine. It is commonly depicted as a catapult with a bowl, bucket, or sling at the end of its throwing arm. The onager was first mentioned in 353 AD by Ammianus Marcellinus, who described onagers as the same as scorpions. The onager is often confused with the later mangonel, a "traction trebuchet" that replaced torsion-powered siege engines in the 6th century AD. Etymology According to two authors of the later Roman Empire who wrote on military affairs, the onager's name, meaning wild ass, derived from the kicking action of the machine, which threw stones into the air. This action resembled the kicking of the hooves of the Syrian wild ass, a subspecies of onager, which was native to the eastern part of the empire. In Latin this species was known as onagrum. Design The onager consisted of a large frame placed horizontally on the ground with a vertical frame of solid timber rigidly fixed to its front end. A vertical spoke that passed through a rope bundle fastened to the frame had a cup, bucket, or sling attached which contained a projectile. To fire it, the spoke or arm was forced down, against the tension of twisted ropes or other springs, by a windlass, and then suddenly released. As the sling swung outwards, one end would release, as with a staff-sling, and the projectile would be hurled forward. The arm would then be caught by a padded beam or bed, after which it could be winched back again. It weighed around two to six tons. Flavius Josephus described an instance where an onager shot a rock over a distance. According to Ammianus Marcellinus, a single-armed onager required eight men to wind down the arm. When it fired, the recoil was so great that it made the onager impossible to place on stone walls because the stones would be dislodged. This was confirmed by a reconstructed onager, considerably smaller than the ones described in the sources, that still caused substantial recoil. Its shot weighed . 
History The onager was used from the 4th century until the 6th century. It may have originated in the third century BC. It was initially developed for the purpose of disrupting enemy lines and destroying walls. The late-fourth-century author Ammianus Marcellinus describes 'onager' as a neologism for scorpions and relates various incidents in which the engines fire both rocks and arrow-shaped missiles. According to Ammianus, the onager was a single-armed torsion engine, unlike the twin-armed ballista before it. It needed eight men just to wind down the arm and could not be placed on fortifications because of its great recoil. It had very low mobility and was difficult to aim. Originally it used a bucket or cup to hold the projectile, but at some point this was replaced with a sling, which elongated the throwing arm without burdening it and allowed for a greater range of shot. In 378, the onager was used against the Goths at Adrianople; although it did not cause any casualties, its large stone projectile was terrifying to the Goths. The late-fourth or early-fifth-century military writer Vegetius stipulates that a legion ought to field ten onagers, one for each cohort. These, he says, should be transported fully assembled on ox carts to ensure readiness in case of sudden attack, in which case the onagers could be used for defense immediately. For Vegetius, the onagers were stone-throwing machines. In the late 6th century the Pannonian Avars brought the Chinese traction trebuchet, otherwise known as the mangonel, to the Mediterranean, where it soon replaced the slower and more complex torsion-powered engines. The onager may have continued to be used by the Byzantines and Arabs during the Middle Ages. In modern history, the mangonel is often misrepresented as an onager, although there is no evidence of the onager's usage beyond the 6th century AD. 
The first attempts to reconstruct the onager were made by Chevalier de Folard and Robert Melvill in the 18th century. Swiss general Guillaume Henri Dufour made another attempt to reconstruct the onager based on the work of de Folard in 1840. Napoleon III had his general Verchère de Reffye create a reconstruction of the onager. By the end of the nineteenth century, Sir Ralph Payne-Gallwey made another attempt at reconstructing the onager. Later, the German major-general Erwin Schramm and the British scholar Eric Marsden made a reconstruction of the onager which became the basis of the modern understanding of the weapon. Effectiveness The onager was considered to be less accurate and cruder than the ballista. One reason the onager may have become the Roman military's primary type of torsion catapult was that it was easier to produce and required less technical knowledge to operate. The onager was used to destroy walls and create confusion amongst the enemy lines. Ammianus Marcellinus described an instance during an Alemanni incursion in Gaul where, although the onager fired a rock that did not kill anyone, it created mass confusion amongst the enemy and routed them.
Technology
Artillery
null
443259
https://en.wikipedia.org/wiki/Moray%20eel
Moray eel
Moray eels, or Muraenidae, are a family of eels whose members are found worldwide. There are approximately 200 species in 16 genera which are almost exclusively marine, but several species are regularly seen in brackish water, and a few are found in fresh water. The English name, moray, dates back to the early 17th century and is believed to derive from Portuguese, itself from the Latin and, ultimately, Greek names of the Mediterranean moray. Anatomy The dorsal fin extends from just behind the head along the back and joins seamlessly with the caudal and anal fins. Most species lack pectoral and pelvic fins, adding to their serpentine appearance. Their eyes are rather small; morays rely mostly on their highly developed sense of smell, lying in wait to ambush prey. The body is generally patterned. In some species, the inside of the mouth is also patterned. Their jaws are wide, framing a protruding snout. Most possess large teeth used to tear flesh or grasp slippery prey. A relatively small number of species, for example the snowflake moray (Echidna nebulosa) and zebra moray (Gymnomuraena zebra), primarily feed on crustaceans and other hard-shelled animals, and they have blunt, molar-like teeth suitable for crushing. Morays secrete a protective mucus over their smooth, scaleless skin, which in some species contains a toxin. They have much thicker skin and higher densities of goblet cells in the epidermis than other eel species, which allows mucus to be produced at a higher rate. In sand-dwelling morays, this allows sand granules to adhere to the sides of their burrows, making the burrow walls more permanent due to the glycosylation of mucins in the mucus. Placement of their small, circular gills on their flanks, far behind the mouth, requires the moray to maintain a gaping, gulping motion to facilitate respiration. 
Jaw The pharyngeal jaws of morays are located farther back in the head and closely resemble the oral jaws (complete with tiny "teeth"). When feeding, morays launch these jaws into the mouth cavity, where they grasp prey and transport it into the throat. Moray eels are the only known animals that use pharyngeal jaws to actively capture and restrain prey in this way. In addition to possessing pharyngeal jaws, morays' mouth openings extend far back into the head, unlike those of fish that feed using suction. In the action of lunging at prey and biting down, water flows out the posterior side of the mouth opening, reducing the waves in front of the eel which would otherwise displace prey. Thus, aggressive predation is still possible even with reduced bite times. In at least one species, the California moray (Gymnothorax mordax), teeth in the roof of the mouth are able to fold down as prey slides backwards, thus preventing the teeth from breaking and maintaining a hold on the prey as it is transported to the throat. Differing shapes of the jaw and teeth reflect the respective diets of different species of moray eel. Having evolved separately multiple times within the family Muraenidae, short, rounded jaws and molar-like teeth allow durophagous eels (e.g. the zebra moray and genus Echidna) to consume crustaceans, while other, piscivorous genera of Muraenidae have pointed jaws and longer teeth. These morphological patterns carry over to the teeth positioned on the pharyngeal jaw. Feeding behavior Morays are opportunistic, carnivorous predators and feed primarily on smaller fish, crabs, and octopuses. A spotted moray eel has been observed eating a red lionfish without harm. Groupers, barracudas and sea snakes are among their few known predators, making many morays (especially the larger species) apex predators in their ecosystems. Cooperative hunting Reef-associated roving coral groupers (Plectropomus pessuliferus) have been observed recruiting giant morays to help them hunt. 
The invitation to hunt is initiated by head-shaking. This style of hunting may allow morays to flush prey from niches not accessible to groupers. Habitat Moray eels are found in both fresh and saltwater habitats. The vast majority of species are strictly marine, never entering freshwater. Of the few species known to live in freshwater, the most well-known is Gymnothorax polyuranodon. Within the marine realm, morays are found in shallow-water nearshore areas, continental slopes, continental shelves, deep benthic habitats, and mesopelagic zones of the ocean, in both tropical and temperate environments. Most species are found in tropical or subtropical environments, with only a few (such as the yellow moray) found in temperate ocean environments. Although moray eels can occupy both tropical and temperate oceans, as well as both freshwater and saltwater, the majority occupy warm saltwater environments containing reefs. Within both tropical and temperate oceans, the moray eel occupies shelters, such as dead patch reefs and coral rubble rocks, and less frequently occupies live coral reefs. Taxonomy Genera There are about 202 known species of moray eels, in 16 genera. These genera fall into two subfamilies, Muraeninae and Uropterygiinae, which are distinguished by the location of their fins. In Muraeninae the dorsal fin is near the gill slits and runs down the back of the eel, and the anal fin is behind the anus. In Uropterygiinae, both the dorsal and the anal fin are at the end of the tail. Though this distinction can be seen between the two subfamilies, there are still many varieties of genera within Muraeninae and Uropterygiinae. Of these, the genus Gymnothorax is by far the broadest, including more than half of the total number of species. 
The family Muraenidae comprises the following subfamilies and genera: Subfamily Muraeninae Rafinesque, 1815 Diaphenchelys McCosker & Randall, 2007 Echidna Forster, 1788 Enchelycore Kaup, 1856 Enchelynassa Kaup, 1855 Gymnomuraena Lacepède, 1803 Gymnothorax Bloch, 1795 Monopenchelys Böhlke & McCosker, 1982 Muraena Linnaeus, 1758 Pseudechidna Bleeker, 1863 Rhinomuraena Garman, 1888 Strophidon McClelland, 1844 Subfamily Uropterygiinae Fowler, 1925 Anarchias D. S. Jordan & Starks, 1906 Channomuraena Richardson, 1848 Cirrimaxilla H.-M. Chen & K.-T. Shao, 1995 Scuticaria D. S. Jordan & Snyder, 1901 Uropterygius Rüppell, 1838 Evolution The moray eel's elongation is due to an increase in the number of vertebrae, rather than a lengthening of each individual vertebra or a substantial decrease in body depth. Vertebrae have been added asynchronously between the pre-tail ("precaudal") and tail ("caudal") regions, unlike in other groups of eels such as ophichthids and congrids. Relationship with humans Aquarium trade Several moray species are popular among aquarium hobbyists for their hardiness, flexible diets, and disease resistance. The most commonly traded species are the snowflake, zebra and goldentail moray (Gymnothorax miliaris). Several other species are occasionally seen, but are more difficult to obtain and can command a steep price on the market. Food poisoning Moray eels, particularly the giant moray (Gymnothorax javanicus) and yellow-edged moray (G. flavimarginatus), are known to accumulate high levels of ciguatoxins, unlike other reef fish; if consumed by humans, ciguatera fish poisoning may result. Ciguatera is characterised by neurological, gastrointestinal, and cardiovascular problems that may persist for days after eating tainted fish. In morays, the toxins are most concentrated in the liver. In an especially remarkable instance, 57 people in the Northern Mariana Islands were poisoned after eating just the head and half of a cooked yellow-edged moray. 
Thus, morays are not recommended for human consumption.
Biology and health sciences
Anguilliformes
Animals
443293
https://en.wikipedia.org/wiki/Australopithecus%20afarensis
Australopithecus afarensis
Australopithecus afarensis is an extinct species of australopithecine which lived from about 3.9–2.9 million years ago (mya) in the Pliocene of East Africa. The first fossils were discovered in the 1930s, but major fossil finds would not take place until the 1970s. From 1972 to 1977, the International Afar Research Expedition—led by anthropologists Maurice Taieb, Donald Johanson and Yves Coppens—unearthed several hundred hominin specimens in Hadar, Ethiopia, the most significant being the exceedingly well-preserved skeleton AL 288-1 ("Lucy") and the site AL 333 ("the First Family"). Beginning in 1974, Mary Leakey led an expedition into Laetoli, Tanzania, and notably recovered fossil trackways. In 1978, the species was first described, but this was followed by arguments for splitting the wealth of specimens into different species given the wide range of variation which had been attributed to sexual dimorphism (normal differences between males and females). A. afarensis probably descended from A. anamensis and is hypothesised to have given rise to Homo, though the latter is debated. A. afarensis had a tall face, a delicate brow ridge, and prognathism (the jaw jutted outwards). The jawbone was quite robust, similar to that of gorillas. The living size of A. afarensis is debated, with arguments for and against marked size differences between males and females. Lucy measured perhaps in height and , but she was rather small for her species. In contrast, a presumed male was estimated at and . A perceived difference in male and female size may simply reflect sampling bias. The leg bones as well as the Laetoli fossil trackways suggest A. afarensis was a competent biped, though somewhat less efficient at walking and slower at running than humans. 
The arm and shoulder bones have some similar aspects to those of orangutans and gorillas, which has variously been interpreted as either evidence of partial tree-dwelling (arboreality), or basal traits inherited from the chimpanzee–human last common ancestor with no adaptive functionality. A. afarensis was probably a generalist omnivore of both C3 forest plants and C4 CAM savanna plants—and perhaps creatures which ate such plants—and was able to exploit a variety of different food sources. Similarly, A. afarensis appears to have inhabited a wide range of habitats with no real preference, inhabiting open grasslands or woodlands, shrublands, and lake- or riverside forests. Potential evidence of stone tool use would indicate meat was also a dietary component. Marked sexual dimorphism in primates typically corresponds to a polygynous society and low dimorphism to monogamy, but the group dynamics of early hominins are difficult to predict with accuracy. Early hominins may have fallen prey to the large carnivores of the time, such as big cats and hyenas. Taxonomy Research history Beginning in the 1930s, some of the most ancient hominin remains of the time, dating to 3.8–2.9 million years ago, were recovered from East Africa. Because Australopithecus africanus fossils were commonly being discovered throughout the 1920s and '40s in South Africa, these remains were often provisionally classified as Australopithecus aff. africanus. The first such fossil, a maxilla, was found by German explorer Ludwig Kohl-Larsen in 1939 at the headwaters of the Gerusi River (near Laetoli, Tanzania). In 1948, German palaeontologist Edwin Hennig proposed classifying it into a new genus, "Praeanthropus", but he failed to give a species name. In 1950, German anthropologist Hans Weinert proposed classifying it as Meganthropus africanus, but this was largely ignored. In 1955, M. S. Şenyürek proposed the combination Praeanthropus africanus. 
Major collections were made in Laetoli, Tanzania, on an expedition beginning in 1974 directed by British palaeoanthropologist Mary Leakey, and in Hadar, Ethiopia, from 1972 to 1977 by the International Afar Research Expedition (IARE) formed by French geologist Maurice Taieb, American palaeoanthropologist Donald Johanson and French anthropologist Yves Coppens. These fossils were remarkably well preserved and many had associated skeletal aspects. In 1973, the IARE team unearthed the first fossil hominin knee joint, AL 129-1, at the time the earliest known evidence of bipedalism. On 24 November 1974, Johanson and graduate student Tom Gray discovered the extremely well-preserved skeleton AL 288-1, commonly referred to as "Lucy" (named after the 1967 Beatles song "Lucy in the Sky with Diamonds", which was playing on their tape recorder that evening). In 1975, the IARE recovered 216 specimens belonging to 13 individuals, AL 333, "the First Family" (though the individuals were not necessarily related). In 1976, Leakey and colleagues discovered fossil trackways, and preliminarily classified the Laetoli remains into Homo spp., interpreting Australopithecus-like traits as evidence that they were transitional fossils. In 1978, Johanson, Tim D. White and Coppens classified the hundreds of specimens collected thus far from both Hadar and Laetoli into a single new species, A. afarensis, and considered the apparently wide range of variation a result of sexual dimorphism. The species name honours the Afar Region of Ethiopia, from which the majority of the specimens had been recovered. They later selected the jawbone LH 4 as the holotype specimen because of its preservation quality and because White had already fully described and illustrated it the year before. A. afarensis is known only from East Africa. Beyond Laetoli and the Afar Region, the species has been recorded in Kenya at Koobi Fora and possibly Lothagam; and elsewhere in Ethiopia at Woranso-Mille, Maka, Belohdelie, Ledi-Geraru and Fejej. 
The frontal bone fragment BEL-VP-1/1 from the Middle Awash, Afar Region, Ethiopia, dating to 3.9 million years ago, has typically been assigned to A. anamensis based on age, but may be assignable to A. afarensis because it exhibits a derived form of postorbital constriction. This would mean A. afarensis and A. anamensis coexisted for at least 100,000 years. In 2005, a second adult specimen preserving both skull and body elements, AL 438–1, was discovered in Hadar. In 2006, an infant partial skeleton, DIK-1-1, was unearthed at Dikika, Afar Region. In 2015, an adult partial skeleton, KSD-VP-1/1, was recovered from Woranso-Mille. A. afarensis was long the oldest known African great ape, until the 1994 description of the 4.4-million-year-old Ardipithecus ramidus, and a few earlier or contemporary taxa have been described since, including the 4-million-year-old A. anamensis in 1995, the 3.5-million-year-old Kenyanthropus platyops in 2001, the 6-million-year-old Orrorin tugenensis in 2001, and the 7- to 6-million-year-old Sahelanthropus tchadensis in 2002. Bipedalism was once thought to have evolved in australopithecines, but it is now thought to have begun evolving much earlier in habitually arboreal primates. The earliest claimed date for the beginnings of an upright spine and a primarily vertical body plan is 21.6 million years ago in the Early Miocene with Morotopithecus bishopi. Classification A. afarensis is now a widely accepted species, and it is now generally thought that Homo and Paranthropus are sister taxa deriving from Australopithecus, but the classification of Australopithecus species is in disarray. Australopithecus is considered a grade taxon whose members are united by their similar physiology rather than close relations with each other over other hominin genera. It is unclear how any Australopithecus species relate to each other, but it is generally thought that a population of A. anamensis evolved into A. afarensis. 
In 1979, Johanson and White proposed that A. afarensis was the last common ancestor between Homo and Paranthropus, supplanting A. africanus in this role. Considerable debate over the validity of this species followed, with proposals to synonymise it with A. africanus or to recognise multiple species among the Laetoli and Hadar remains. In 1980, South African palaeoanthropologist Phillip V. Tobias proposed reclassifying the Laetoli specimens as A. africanus afarensis and the Hadar specimens as A. afr. aethiopicus. The skull KNM-ER 1470 (now H. rudolfensis) was at first dated to 2.9 million years ago, which cast doubt on the ancestral position of both A. afarensis and A. africanus, but it has been re-dated to about 2 million years ago. Several Australopithecus species have since been postulated to represent the ancestor to Homo, but the 2013 discovery of the earliest Homo specimen, LD 350-1, 2.8 million years old (older than almost all other Australopithecus species) from the Afar Region could potentially affirm the ancestral position of A. afarensis. However, A. afarensis is also argued to have been too derived (too specialised), due to resemblance in jaw anatomy to the robust australopithecines, to have been a human ancestor. Palaeoartist Walter Ferguson has proposed splitting A. afarensis into "H. antiquus", a relict dryopithecine "Ramapithecus" (now Kenyapithecus) and a subspecies of A. africanus. His recommendations have largely been ignored. In 2003, Spanish writer Camilo José Cela Conde and evolutionary biologist Francisco J. Ayala proposed reinstating "Praeanthropus" including A. afarensis alongside Sahelanthropus, A. anamensis, A. bahrelghazali and A. garhi. In 2004, Danish biologist Bjarne Westergaard and geologist Niels Bonde proposed splitting off "Homo hadar" with the 3.2-million-year-old partial skull AL 333–45 as the holotype, because a foot from the First Family was apparently more humanlike than that of Lucy. 
In 2011, Bonde agreed with Ferguson that Lucy should be split into a new species, though he erected a new genus for it, as "Afaranthropus antiquus". In 1996, a 3.6-million-year-old jaw from Koro Toro, Chad, originally classified as A. afarensis, was split off as the new species A. bahrelghazali. In 2015, some 3.5- to 3.3-million-year-old jaw specimens from the Afar Region (the same time and place as A. afarensis) were classified as the new species A. deyiremeda, and the recognition of this species would call into question the species designation of fossils currently assigned to A. afarensis. However, the validity of A. bahrelghazali and A. deyiremeda is debated. Wood and Boyle (2016) stated there was "low confidence" that A. afarensis, A. bahrelghazali and A. deyiremeda are distinct species, with Kenyanthropus platyops perhaps being indistinct from the latter two. Anatomy Skull A. afarensis had a tall face, a delicate brow ridge, and prognathism (the jaw jutted outwards). One of the biggest skulls, AL 444–2, is about the size of a female gorilla skull. The first relatively complete jawbone was discovered in 2002, AL 822–1. This specimen strongly resembles the deep and robust gorilla jawbone. However, unlike in gorillas, the strength of the sagittal and nuchal crests (which support the temporalis muscle used in biting) does not vary between sexes. The crests are similar to those of chimpanzees and female gorillas. Compared to earlier hominins, the incisors of A. afarensis are reduced in breadth, the canines are reduced in size and have lost the honing mechanism which continually sharpens them, the premolars are molar-shaped, and the molars are taller. The molars of australopiths are generally large and flat with thick enamel, which is ideal for crushing hard and brittle foods. The brain volume of Lucy was estimated to have been 365–417 cc, specimen AL 822-1 about 374–392 cc, AL 333-45 about 486–492 cc, and AL 444-2 about 519–526 cc. This would make for an average of about 445 cc. 
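The quoted average can be reproduced, to within rounding, by taking the midpoint of each adult estimate and averaging them (a sketch; the source does not state its exact averaging method):

```python
# Adult A. afarensis endocranial volume estimates from the text (cc)
estimates = {
    "Lucy (AL 288-1)": (365, 417),
    "AL 822-1": (374, 392),
    "AL 333-45": (486, 492),
    "AL 444-2": (519, 526),
}

# Midpoint of each range, then the mean across the four specimens
midpoints = [(lo + hi) / 2 for lo, hi in estimates.values()]
average = sum(midpoints) / len(midpoints)
print(round(average))  # ≈ 446, consistent with the "about 445 cc" figure
```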
The brain volumes of the infant (about 2.5 years of age) specimens DIK-1-1 and AL 333-105 are 273–277 and 310–315 cc, respectively. Using these measurements, the brain growth rate of A. afarensis was closer to the growth rate of modern humans than to the faster rate in chimpanzees. Though brain growth was prolonged, the duration was nonetheless much shorter than in modern humans, which is why the adult A. afarensis brain was so much smaller. The A. afarensis brain was likely organised like non-human ape brains, with no evidence for humanlike brain configuration. Size A. afarensis specimens apparently exhibit a wide range of variation, which is generally explained as marked sexual dimorphism with males much bigger than females. In 1991, American anthropologist Henry McHenry estimated body size by measuring the joint sizes of the leg bones and scaling down a human to meet that size. This yielded for a presumed male (AL 333–3), whereas Lucy was . In 1992, he estimated that males typically weighed about and females assuming body proportions were more humanlike than apelike. This gives a male to female body mass ratio of 1.52, compared to 1.22 in modern humans, 1.37 in chimpanzees, and about 2 for gorillas and orangutans. However, this commonly cited weight figure used only three presumed-female specimens, of which two were among the smallest specimens recorded for the species. It is also contested whether australopiths exhibited heightened sexual dimorphism at all; if they did not, the apparent range of variation would simply be normal body size disparity between individuals regardless of sex. It has also been argued that the femoral head could be used for more accurate size modeling, and that femoral head size variation was the same for both sexes. Lucy is one of the most complete Pliocene hominin skeletons, with over 40% preserved, but she was one of the smaller specimens of her species. 
Nonetheless, she has been the subject of several body mass estimates since her discovery, ranging from for absolute lower and upper bounds. Most studies report ranges within . For the five makers of the Laetoli fossil trackways (S1, S2, G1, G2 and G3), based on the relationship between footprint length and bodily dimensions in modern humans, S1 was estimated to have been considerably large at about tall and in weight, S2 and , G1 and , G2 and , and G3 and . Based on these, S1 is interpreted to have been a male, and the rest females (G1 and G3 possibly juveniles), with A. afarensis being a highly dimorphic species. Torso DIK-1-1 preserves an oval hyoid bone (which supports the tongue) more similar to those of chimpanzees and gorillas than the bar-shaped hyoid of humans and orangutans. This would suggest the presence of laryngeal air sacs characteristic of non-human African apes (and large gibbons). Air sacs may lower the risk of hyperventilating when producing faster extended call sequences by rebreathing exhaled air from the air sacs. The loss of these in humans could have been a result of speech and resulting low risk of hyperventilating from normal vocalisation patterns. It was previously thought that the australopithecines' spine was more like that of non-human apes than humans, with weak neck vertebrae. However, the thickness of the neck vertebrae of KSD-VP-1/1 is similar to that of modern humans. Like humans, the series has a bulge and achieves maximum girth at C5 and 6, which in humans is associated with the brachial plexus, responsible for nerves and muscle innervation in the arms and hands. This could perhaps speak to advanced motor functions in the hands of A. afarensis and competency at precision tasks compared to non-human apes, possibly implicated in stone tool use or production. However, this could have been involved in head stability or posture rather than dexterity. A.L. 333-101 and A.L. 333-106 lack evidence of this feature. 
The neck vertebrae of KSD-VP-1/1 indicate that the nuchal ligament, which stabilises the head while distance running in humans and other cursorial creatures, was either not well developed or absent. KSD-VP-1/1, preserving (among other skeletal elements) six rib fragments, indicates that A. afarensis had a bell-shaped ribcage instead of the barrel-shaped ribcage exhibited in modern humans. Nonetheless, the constriction at the upper ribcage was not so marked as exhibited in non-human great apes and was quite similar to humans. Originally, the vertebral centra preserved in Lucy were interpreted as being the T6, T8, T10, T11 and L3, but a 2015 study instead interpreted them as being T6, T7, T9, T10 and L3. DIK-1-1 shows that australopithecines had twelve thoracic vertebrae like modern humans instead of thirteen like non-human apes. Like humans, australopiths likely had five lumbar vertebrae, and this series was likely long and flexible in contrast to the short and inflexible non-human great ape lumbar series. Upper limbs Like other australopiths, the A. afarensis skeleton exhibits a mosaic anatomy with some aspects similar to modern humans and others to non-human great apes. The pelvis and leg bones clearly indicate weight-bearing ability, equating to habitual bipedalism, but the upper limbs are reminiscent of orangutans, which would indicate arboreal locomotion. However, this is much debated, as tree-climbing adaptations could simply be basal traits inherited from the great ape last common ancestor in the absence of major selective pressures at this stage to adopt a more humanlike arm anatomy. The shoulder joint is somewhat in a shrugging position, closer to the head, like in non-human apes. Juvenile modern humans have a somewhat similar configuration, but this changes to the normal human condition with age; such a change does not appear to have occurred in A. afarensis development. 
It was once argued that this was simply a byproduct of being a small-bodied species, but the discovery of the similarly sized H. floresiensis with a more or less human shoulder configuration, and of larger A. afarensis specimens retaining the shrugging shoulders, shows that this was not the case. The scapular spine (reflecting the strength of the back muscles) is closer to the range of gorillas. The forearm of A. afarensis is incompletely known, yielding various brachial indexes (radial length divided by humeral length) comparable to non-human great apes at the upper estimate and to modern humans at the lower estimate. The most complete ulna specimen, AL 438–1, is within the range of modern humans and other African apes. However, the L40-19 ulna is much longer, though still well below the lengths exhibited in orangutans and gibbons. The AL 438-1 metacarpals are proportionally similar to those of modern humans and orangutans. The A. afarensis hand is quite humanlike, though there are some aspects similar to orangutan hands which would have allowed stronger flexion of the fingers, and it probably could not handle large spherical or cylindrical objects very efficiently. Nonetheless, the hand seems to have been capable of the precision grip necessary for using stone tools. However, it is unclear if the hand was capable of producing stone tools. Lower limbs The australopith pelvis is platypelloid and maintains a relatively wider distance between the hip sockets and a more oval shape. Despite being much smaller, Lucy's pelvic inlet is wide, about the same breadth as that of a modern human woman. These were likely adaptations to minimise how far the centre of mass drops while walking upright in order to compensate for the short legs (rotating the hips may have been more important for A. afarensis). Likewise, later Homo could reduce relative pelvic inlet size, probably due to the elongation of the legs. 
Pelvic inlet size may not have been due to fetal head size (which would have increased birth canal and thus pelvic inlet width) as an A. afarensis newborn would have had a similar or smaller head size compared to that of a newborn chimpanzee. It is debated if the platypelloid pelvis provided poorer leverage for the hamstrings or not. The heel bones of A. afarensis adults and modern humans show the same adaptations for bipedality, indicating a developed grade of walking. The big toe is not dextrous as it is in non-human apes (it is adducted), which would make walking more energy efficient at the expense of arboreal locomotion, no longer able to grasp onto tree branches with the feet. However, the foot of the infantile specimen DIK-1-1 indicates some mobility of the big toe, though not to the degree in non-human primates. This would have reduced walking efficiency, but a partially dextrous foot in the juvenile stage may have been important in climbing activities for food or safety, or made it easier for the infant to cling onto and be carried by an adult. Palaeobiology Diet and technology A. afarensis was likely a generalist omnivore. Carbon isotope analysis on teeth from Hadar and Dikika 3.4–2.9 million years ago suggests a widely ranging diet between different specimens, with forest-dwelling specimens showing a preference for C3 forest plants, and bush- or grassland-dwelling specimens a preference for C4 CAM savanna plants. C4 CAM sources include grass, seeds, roots, underground storage organs, succulents and perhaps creatures which ate those, such as termites. Thus, A. afarensis appears to have been capable of exploiting a variety of food resources in a wide range of habitats. In contrast, the earlier A. anamensis and Ar. ramidus, as well as modern savanna chimpanzees, target the same types of food as forest-dwelling counterparts despite living in an environment where these plants are much less abundant. Few modern primate species consume C4 CAM plants. 
The dental anatomy of A. afarensis is ideal for consuming hard, brittle foods, but microwear patterns on the molars suggest that such foods were infrequently consumed, probably as fallback items in leaner times. In 2009 at Dikika, Ethiopia, a rib fragment belonging to a cow-sized hoofed animal and a partial femur of a goat-sized juvenile bovid were found to exhibit cut marks, and the former some crushing, which were initially interpreted as the oldest evidence of butchering with stone tools. If correct, this would make it the oldest evidence of sharp-edged stone tool use at 3.4 million years old, and would be attributable to A. afarensis as it is the only species known within the time and place. However, because the fossils were found in a sandstone unit (and were modified by abrasive sand and gravel particles during the fossilisation process), the attribution to hominin activity is weak. Society It is highly difficult to speculate with accuracy about the group dynamics of early hominins. A. afarensis is typically reconstructed with high levels of sexual dimorphism, with males much larger than females. Using general trends in modern primates, high sexual dimorphism usually equates to a polygynous society due to intense male–male competition over females, like in the harem society of gorillas. However, it has also been argued that A. afarensis had much lower levels of dimorphism, and so had a multi-male kin-based society like chimpanzees. Low dimorphism could also be interpreted as indicating a monogamous society with strong male–male competition. In contrast, the canine teeth are much smaller in A. afarensis than in non-human primates, which should indicate lower aggression, because canine size is generally positively correlated with male–male aggression. 
Birth The platypelloid pelvis may have caused a different birthing mechanism from modern humans, with the neonate entering the inlet facing laterally (the head was transversely orientated) until it exited through the pelvic outlet. This would be a non-rotational birth, as opposed to a fully rotational birth in humans. However, it has been suggested that the shoulders of the neonate may have been obstructed, and the neonate could have instead entered the inlet transversely and then rotated so that it exited through the outlet oblique to the main axis of the pelvis, which would be a semi-rotational birth. By this argument, there may not have been much space for the neonate to pass through the birth canal, causing a difficult childbirth for the mother. Gait The Laetoli fossil trackway, generally attributed to A. afarensis, indicates a rather developed grade of bipedal locomotion, more efficient than the bent-hip–bent-knee (BHBK) gait used by non-human great apes (though earlier interpretations of the gait include a BHBK posture or a shuffling movement). Trail A consists of short, broad prints resembling those of a two-and-a-half-year-old child, though it has been suggested this trail was made by the extinct bear Agriotherium africanus. G1 is a trail consisting of four cycles likely made by a child. G2 and G3 are thought to have been made by two adults. In 2015, two more trackways were discovered made by one individual, named S1, extending for a total of . In 2015, a single footprint from a different individual, S2, was discovered. The shallowness of the toe prints would indicate a more flexed limb posture when the foot hit the ground and perhaps a less arched foot, meaning A. afarensis was less efficient at bipedal locomotion than humans. Some tracks feature a long drag mark probably left by the heel, which may indicate the foot was lifted at a low angle to the ground. For push-off, it appears weight shifted from the heel to the side of the foot and then the toes. 
Some footprints of S1 either indicate asymmetrical walking where weight was sometimes placed on the anterolateral part (the side of the front half of the foot) before toe-off, or sometimes the upper body was rotated mid-step. The angle of gait (the angle between the direction the foot is pointing in on touchdown and median line drawn through the entire trackway) ranges from 2–11° for both right and left sides. G1 generally shows wide and asymmetrical angles, whereas the others typically show low angles. The speed of the track makers has been variously estimated depending on the method used, with G1 reported at 0.47, 0.56, 0.64, 0.7 and 1 m/s (1.69, 2, 2.3, 2.5 and 3.6 km/h; 1.1, 1.3, 1.4, 1.6 and 2.2 mph); G2/3 reported at 0.37, 0.84 and 1 m/s (1.3, 2.9 and 3.6 km/h; 0.8, 1.8 and 2.2 mph); and S1 at . For comparison, modern humans typically walk at . The average step distance is , and stride distance . S1 appears to have had the highest average step and stride length of, respectively, and whereas G1–G3 averaged, respectively, 416, 453 and 433 mm (1.4, 1.5 and 1.4 ft) for step and 829, 880 and 876 mm (2.7, 2.9 and 2.9 ft) for stride. A. afarensis was also capable of bipedal running with absolute speeds of , slower than modern humans with maximum running speeds up to , and its running energetics was similar to those of mammals and birds of similar body size. It has been suggested that the bipedal gait evolved specifically to improve running rather than to just enhance walking. Pathology Australopithecines, in general, seem to have had a high incidence rate of vertebral pathologies, possibly because their vertebrae were better adapted to withstand suspension loads in climbing than compressive loads while walking upright. Lucy presents marked thoracic kyphosis (hunchback) and was diagnosed with Scheuermann's disease, probably caused by overstraining her back, which can lead to a hunched posture in modern humans due to irregular curving of the spine. 
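The track maker walking speeds reported above were derived by several different methods. One widely used approach for fossil trackways (not necessarily the one behind any of the quoted figures) is Alexander's (1976) formula relating speed to stride length and hip height; a sketch, where the hip height is an assumed value for a Lucy-sized individual:

```python
# Alexander's (1976) estimate: v = 0.25 * g^0.5 * stride^1.67 * h^-1.17
g = 9.81           # gravitational acceleration, m/s^2
stride = 0.829     # m, the average stride reported for trail G1
hip_height = 0.65  # m, an assumed value for a Lucy-sized individual

v = 0.25 * g**0.5 * stride**1.67 * hip_height**(-1.17)
print(f"{v:.2f} m/s")  # ≈0.95 m/s, within the 0.47–1 m/s spread reported for G1
```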
Because her condition presented quite similarly to that seen in modern human patients, this would indicate a basically human range of locomotor function in walking for A. afarensis. The original straining may have occurred while climbing or swinging in the trees, though, even if correct, this does not indicate that her species was maladapted for arboreal behaviour, much like how humans are not maladapted for bipedal posture despite developing arthritis. KSD-VP-1/1 seemingly exhibits compensatory action by the neck and lumbar vertebrae (gooseneck) consistent with thoracic kyphosis and Scheuermann's disease, but thoracic vertebrae are not preserved in this specimen. In 2010, KSD-VP-1/1 presented evidence of a valgus deformity of the left ankle involving the fibula, with a bony ring developing on the fibula's joint surface extending the bone an additional . This was probably caused by a fibular fracture during childhood which improperly healed in a nonunion. In 2016, palaeoanthropologist John Kappelman argued that the fracturing exhibited by Lucy was consistent with a proximal humerus fracture, which is most often caused by falling in humans. He then concluded she died from falling out of a tree, and that A. afarensis slept in trees or climbed trees to escape predators. However, similar fracturing is exhibited in many other creatures in the area, including the bones of antelope, elephants, giraffes and rhinos, and may well simply be taphonomic bias (fracturing was caused by fossilisation). Lucy may also have been killed in an animal attack or a mudslide. The 13 AL 333 individuals are thought to have been deposited at about the same time as one another, bear little evidence of carnivore activity, and were buried on a stretch of a hill. In 1981, anthropologists James Louis Aronson and Taieb suggested they were killed in a flash flood. 
British archaeologist Paul Pettitt considered natural causes unlikely and, in 2013, speculated that these individuals were purposefully hidden in tall grass by other hominins (funerary caching). This behaviour has been documented in modern primates, and may be done so that the recently deceased do not attract predators to living grounds. Palaeoecology A. afarensis appears to have been extremely adaptable, with no preferred environment, inhabiting a wide range of habitats such as open grasslands or woodlands, shrublands, and lake- or riverside forests. Likewise, the animal assemblage varied widely from site to site. The Pliocene of East Africa was warm and wet compared to the preceding Miocene, with the dry season lasting about four months based on floral, faunal, and geological evidence. The extended rainy season would have made more desirable foods available to hominins for most of the year. During the Late Pliocene around 4–3 million years ago, Africa featured a greater diversity of large carnivores than today, and australopithecines likely fell prey to these dangerous creatures, including hyenas, Panthera, cheetahs, and the saber-toothed cats Megantereon, Dinofelis, Homotherium and Machairodus. Australopithecines and early Homo likely preferred cooler conditions than later Homo, as there are no australopithecine sites that were below in elevation at the time of deposition. This would mean that, like chimpanzees, they often inhabited areas with an average diurnal temperature of , dropping to at night. At Hadar, the average temperature from 3.4 to 2.95 million years ago was about .
Biology and health sciences
Australopithecines
Biology
443366
https://en.wikipedia.org/wiki/Diorite
Diorite
Diorite is an intrusive igneous rock formed by the slow cooling underground of magma (molten rock) that has a moderate content of silica and a relatively low content of alkali metals. It is intermediate in composition between low-silica (mafic) gabbro and high-silica (felsic) granite. Diorite is found in mountain-building belts (orogens) on the margins of continents. It has the same composition as the fine-grained volcanic rock andesite, which is also common in orogens. Diorite has been used since prehistoric times as decorative stone. It was used by the Akkadian Empire of Sargon of Akkad for funerary sculptures, and by many later civilizations for sculptures and building stone. Description Diorite is an intrusive igneous rock composed principally of the silicate minerals plagioclase feldspar (typically andesine), biotite, hornblende, and sometimes pyroxene. The chemical composition of diorite is intermediate, between that of mafic gabbro and felsic granite. It is distinguished from gabbro on the basis of the composition of the plagioclase species; the plagioclase in diorite is richer in sodium and poorer in calcium. Geologists use rigorous quantitative definitions to classify coarse-grained igneous rocks, based on the mineral content of the rock. For igneous rocks composed mostly of silicate minerals, and in which at least 10% of the mineral content consists of quartz, feldspar, or feldspathoid minerals, classification begins with the QAPF diagram. The relative abundances of quartz (Q), alkali feldspar (A), plagioclase (P), and feldspathoid (F) are used to plot the position of the rock on the diagram. The rock will be classified as either a dioritoid or a gabbroid if quartz makes up less than 20% of the QAPF content, feldspathoid makes up less than 10% of the QAPF content, and plagioclase makes up more than 65% of the total feldspar content. 
Dioritoids are distinguished from gabbroids by an anorthite (calcium plagioclase) fraction of their total plagioclase of less than 50%. The composition of the plagioclase cannot easily be determined in the field, in which case a preliminary distinction is made between dioritoid and gabbroid based on the content of mafic minerals. A dioritoid typically has less than 35% mafic minerals, typically including hornblende, while a gabbroid typically has over 35% mafic minerals, mostly pyroxenes or olivine. The name diorite (from Ancient Greek , "to distinguish") was first applied to the rock by René Just Haüy on account of its characteristic, easily identifiable large crystals of hornblende. Dioritoids form a family of rock types similar to diorite, such as monzodiorite, quartz diorite, or nepheline-bearing diorite. Diorite itself is more narrowly defined, as a dioritoid in which quartz makes up less than 5% of the QAPF content, feldspathoids are not present, and plagioclase makes up more than 90% of the feldspar content. Diorite may contain small amounts of quartz, microcline, and olivine. Zircon, apatite, titanite, magnetite, ilmenite, and sulfides occur as accessory minerals. Varieties deficient in hornblende and other dark minerals are called leucodiorite. A ferrodiorite is a dioritoid enriched in iron and titanium. Ferrodiorites are common in the lower oceanic crust. Coarse-grained (phaneritic) dioritoids are produced by slow crystallization of magma having the same composition as the lava that solidifies rapidly to form fine-grained (aphanitic) andesite. Rock of similar composition to diorite or andesite but with an intermediate texture is sometimes called microdiorite. Diorite is occasionally porphyritic. It usually contains enough mafic minerals to be dark in appearance. Orbicular diorite shows alternating concentric growth bands of plagioclase and amphibole surrounding a nucleus, within a diorite porphyry matrix. 
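The classification thresholds above can be summarised as a small decision procedure (a sketch covering only the criteria stated here — quartz, feldspathoid, plagioclase, and anorthite fractions; the full IUGS/QAPF scheme has many more fields, and the field-based mafic-mineral shortcut is omitted):

```python
def classify(q, a, p, f, anorthite_frac):
    """Rough QAPF-based classification using only the thresholds in the text.
    q, a, p, f: modal amounts of quartz, alkali feldspar, plagioclase, and
    feldspathoid; anorthite_frac: anorthite fraction of total plagioclase (0-1)."""
    qapf = q + a + p + f
    quartz_pct = 100 * q / qapf
    foid_pct = 100 * f / qapf
    plag_of_feldspar = 100 * p / (a + p)  # plagioclase share of total feldspar

    # Dioritoid/gabbroid field: <20% quartz, <10% feldspathoid,
    # plagioclase >65% of total feldspar
    if not (quartz_pct < 20 and foid_pct < 10 and plag_of_feldspar > 65):
        return "outside the dioritoid/gabbroid field"
    if anorthite_frac >= 0.5:
        return "gabbroid"
    # Diorite sensu stricto: <5% quartz, no feldspathoid, plagioclase >90%
    if quartz_pct < 5 and f == 0 and plag_of_feldspar > 90:
        return "diorite"
    return "dioritoid (e.g. monzodiorite, quartz diorite)"

print(classify(q=3, a=2, p=60, f=0, anorthite_frac=0.4))   # diorite
print(classify(q=3, a=2, p=60, f=0, anorthite_frac=0.6))   # gabbroid
print(classify(q=10, a=3, p=50, f=0, anorthite_frac=0.4))  # dioritoid (...)
```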
Occurrence Diorite results from the partial melting of a mafic rock above a subduction zone. It is found in volcanic arcs, and in cordilleran mountain building, such as in the Andes Mountains. However, while its extrusive volcanic equivalent, andesite, is common in these settings, diorite is a minor component of the plutonic rocks, which are mostly granodiorite or granite. Diorite also makes up some stocks intruded beneath large calderas. Diorite source localities include Leicestershire and Aberdeenshire, UK; Thuringia and Saxony in Germany; Finland; Romania; central Sweden; southern Vancouver Island around Victoria, Canada; the Darran Range of New Zealand; the Andes Mountains; and Concordia in South Africa. Hornblende diorite is a common rock type in the Henry, Abajo, and La Sal Mountains of Utah, US, where it was emplaced as laccoliths. An orbicular variety found in Corsica was formerly called corsite. An obsolete name for microdiorite, markfieldite, was given by Frederick Henry Hatch in 1909 to exposures near the village of Markfield, England. Esterellite is a local name for microdiorite given by Auguste Michel-Lévy to exposures in the Esterel Massif in France. Use Human use of diorite dates at least to the Middle Neolithic, when it was used in a passage grave at Le Dolmen du Mont Ubé, Jersey. The use of stone of contrasting colour suggests that diorite was deliberately selected for its appearance. The first great Mesopotamian empire, the Akkadian Empire of Sargon of Akkad, began using diorite for sculpture after sources of the rock came under Akkadian control. Diorite was used to depict rulers or high officials in ceremonial poses or attitudes of prayer, and the sculptures may have been designed to receive funerary offerings. Diorite was also used for stone vases by Bronze Age craftspeople, who developed considerable skill at polishing diorite and other stones. The Egyptians had become skilled at shaping diorite and other hard stones by 4000 BCE. 
A large diorite stela in the Louvre Museum dating to 1700 BCE is inscribed with the Code of Hammurabi. Diorite was used by the Inca civilization as structural stone. It was used by medieval Islamic builders to construct water fountains in the Crimea. In later times, diorite was commonly used as cobblestone; today many diorite cobblestone streets can be found in England and Guernsey. Guernsey diorite was used in the steps of St Paul's Cathedral, London. Today, diorite is uncommon in construction, although it shares similar physical properties with granite. Diorite is often sold commercially as "black granite". Diorite's modern uses include construction aggregate, curbing, dimension stone, cobblestone, and facing stone.
Physical sciences
Igneous rocks
Earth science
443800
https://en.wikipedia.org/wiki/Shingles
Shingles
Shingles, also known as herpes zoster or zona, is a viral disease characterized by a painful skin rash with blisters in a localized area. Typically the rash occurs in a single, wide stripe either on the left or right side of the body or face. Two to four days before the rash occurs there may be tingling or local pain in the area. Other common symptoms are fever, headache, and tiredness. The rash usually heals within two to four weeks, but some people develop ongoing nerve pain which can last for months or years, a condition called postherpetic neuralgia (PHN). In those with poor immune function the rash may occur widely. If the rash involves the eye, vision loss may occur. Shingles is caused by the varicella zoster virus (VZV) that also causes chickenpox. In the case of chickenpox, also called varicella, the initial infection with the virus typically occurs during childhood or adolescence. Once the chickenpox has resolved, the virus can remain dormant (inactive) in human nerve cells (dorsal root ganglia or cranial nerves) for years or decades, after which it may reactivate and travel along nerve bodies to nerve endings in the skin, producing blisters. During an outbreak of shingles, exposure to the varicella virus found in shingles blisters can cause chickenpox in someone who has not yet had chickenpox, although that person will not suffer from shingles, at least on the first infection. How the virus remains dormant in nerve cells or subsequently re-activates is not well understood. The disease has been recognized since ancient times. Risk factors for reactivation of the dormant virus include old age, poor immune function, and having contracted chickenpox before 18 months of age. Diagnosis is typically based on the signs and symptoms presented. Varicella zoster virus is not the same as herpes simplex virus, although they both belong to the alpha subfamily of herpesviruses. Shingles vaccines reduce the risk of shingles by 50 to 90%, depending on the vaccine used.
Vaccination also decreases rates of postherpetic neuralgia, and, if shingles occurs, its severity. If shingles develops, antiviral medications such as aciclovir can reduce the severity and duration of disease if started within 72 hours of the appearance of the rash. Evidence does not show a significant effect of antivirals or steroids on rates of postherpetic neuralgia. Paracetamol, NSAIDs, or opioids may be used to help with acute pain. It is estimated that about a third of people develop shingles at some point in their lives. While shingles is more common among older people, children may also get the disease. According to the US National Institutes of Health, the number of new cases per year ranges from 1.2 to 3.4 per 1,000 person-years among healthy individuals to 3.9 to 11.8 per 1,000 person-years among those older than 65 years of age. About half of those living to age 85 will have at least one attack, and fewer than 5% will have more than one attack. Although symptoms can be severe, risk of death is very low: 0.28 to 0.69 deaths per million. Signs and symptoms The earliest symptoms of shingles, which include headache, fever, and malaise, are nonspecific, and may result in an incorrect diagnosis. These symptoms are commonly followed by sensations of burning pain, itching, hyperesthesia (oversensitivity), or paresthesia ("pins and needles": tingling, pricking, or numbness). Pain can be mild to severe in the affected dermatome, with sensations that are often described as stinging, tingling, aching, numbing or throbbing, and can be interspersed with quick stabs of agonizing pain. Shingles in children is often painless, but people are more likely to get shingles as they age, and the disease tends to be more severe. In most cases, after one to two days—but sometimes as long as three weeks—the initial phase is followed by the appearance of the characteristic skin rash. 
The pain and rash most commonly occur on the torso but can appear on the face, eyes, or other parts of the body. At first, the rash appears similar to the first appearance of hives; however, unlike hives, shingles causes skin changes limited to a dermatome, normally resulting in a stripe or belt-like pattern that is limited to one side of the body and does not cross the midline. Zoster sine herpete ("zoster without herpes") describes a person who has all of the symptoms of shingles except this characteristic rash. Later the rash becomes vesicular, forming small blisters filled with a serous exudate, as the fever and general malaise continue. The painful vesicles eventually become cloudy or darkened as they fill with blood, and crust over within seven to ten days; usually the crusts fall off and the skin heals, but sometimes, after severe blistering, scarring and discolored skin remain. The blister fluid contains varicella zoster virus, which can be transmitted through contact or inhalation of fluid droplets until the lesions crust over, which may take up to four weeks. Face Shingles may have additional symptoms, depending on the dermatome involved. The trigeminal nerve is the most commonly involved nerve, of which the ophthalmic division is the most commonly involved branch. When the virus is reactivated in this nerve branch it is termed zoster ophthalmicus. The skin of the forehead, upper eyelid and orbit of the eye may be involved. Zoster ophthalmicus occurs in approximately 10% to 25% of cases. In some people, symptoms may include conjunctivitis, keratitis, uveitis, and optic nerve palsies that can sometimes cause chronic ocular inflammation, loss of vision, and debilitating pain. Shingles oticus, also known as Ramsay Hunt syndrome type II, involves the ear. It is thought to result from the virus spreading from the facial nerve to the vestibulocochlear nerve. Symptoms include hearing loss and vertigo (rotational dizziness). 
Shingles may occur in the mouth if the maxillary or mandibular division of the trigeminal nerve is affected, in which the rash may appear on the mucous membrane of the upper jaw (usually the palate, sometimes the gums of the upper teeth) or the lower jaw (tongue or gums of the lower teeth) respectively. Oral involvement may occur alone or in combination with a rash on the skin over the cutaneous distribution of the same trigeminal branch. As with shingles of the skin, the lesions tend to only involve one side, distinguishing it from other oral blistering conditions. In the mouth, shingles appears initially as 1–4 mm opaque blisters (vesicles), which break down quickly to leave ulcers that heal within 10–14 days. The prodromal pain (before the rash) may be confused with toothache. Sometimes this leads to unnecessary dental treatment. Post-herpetic neuralgia is uncommonly associated with shingles in the mouth. Unusual complications may occur with intra-oral shingles that are not seen elsewhere. Due to the close relationship of blood vessels to nerves, the virus can spread to involve the blood vessels and compromise the blood supply, sometimes causing ischemic necrosis. In rare cases, oral involvement causes complications such as osteonecrosis, tooth loss, periodontitis (gum disease), pulp calcification, pulp necrosis, periapical lesions and tooth developmental anomalies. Disseminated shingles In those with deficits in immune function, disseminated shingles may occur (wide rash). It is defined as more than 20 skin lesions appearing outside either the primarily affected dermatome or dermatomes directly adjacent to it. Besides the skin, other organs, such as the liver or brain, may also be affected (causing hepatitis or encephalitis, respectively), making the condition potentially lethal. Pathophysiology The causative agent for shingles is the varicella zoster virus (VZV)—a double-stranded DNA virus related to the herpes simplex virus.
Most individuals are infected with this virus as children, which causes an episode of chickenpox. The immune system eventually eliminates the virus from most locations, but it remains dormant (or latent) in the ganglia adjacent to the spinal cord (called the dorsal root ganglion) or the trigeminal ganglion in the base of the skull. Shingles occurs only in people who have been previously infected with VZV; although it can occur at any age, approximately half of the cases in the United States occur in those aged 50 years or older. Shingles can recur. In contrast to the frequent recurrence of herpes simplex symptoms, repeated attacks of shingles are unusual. It is extremely rare for a person to have more than three recurrences. The disease results from virus particles in a single sensory ganglion switching from their latent phase to their active phase. Due to difficulties in studying VZV reactivation directly in humans (leading to reliance on small-animal models), its latency is less well understood than that of the herpes simplex virus. Virus-specific proteins continue to be made by the infected cells during the latent period, so true latency, as opposed to chronic, low-level, active infection, has not been proven to occur in VZV infections. Although VZV has been detected in autopsies of nervous tissue, there are no methods to find dormant virus in the ganglia of living people. Unless the immune system is compromised, it suppresses reactivation of the virus and prevents shingles outbreaks. Why this suppression sometimes fails is poorly understood, but shingles is more likely to occur in people whose immune systems are impaired due to aging, immunosuppressive therapy, psychological stress, or other factors. Upon reactivation, the virus replicates in neuronal cell bodies, and virions are shed from the cells and carried down the axons to the area of skin innervated by that ganglion. In the skin, the virus causes local inflammation and blistering.
The short- and long-term pain caused by shingles outbreaks originates from inflammation of affected nerves due to the widespread growth of the virus in those areas. As with chickenpox and other forms of alpha-herpesvirus infection, direct contact with an active rash can spread the virus to a person who lacks immunity to it. This newly infected individual may then develop chickenpox, but will not immediately develop shingles. The complete sequence of the viral genome was published in 1986. Diagnosis If the rash has appeared, identifying this disease (making a differential diagnosis) requires only a visual examination, since very few diseases produce a rash in a dermatomal pattern. However, herpes simplex virus (HSV) can occasionally produce a rash in such a pattern (zosteriform herpes simplex). When the rash is absent (early or late in the disease, or in the case of zoster sine herpete), shingles can be difficult to diagnose. Apart from the rash, most symptoms can occur also in other conditions. Laboratory tests are available to diagnose shingles. The most popular test detects VZV-specific IgM antibody in blood; this appears only during chickenpox or shingles and not while the virus is dormant. In larger laboratories, lymph collected from a blister is tested by polymerase chain reaction (PCR) for VZV DNA, or examined with an electron microscope for virus particles. Molecular biology tests based on in vitro nucleic acid amplification (PCR tests) are currently considered the most reliable. The nested PCR test has high sensitivity, but is susceptible to contamination leading to false positive results. The latest real-time PCR tests are rapid, easy to perform, and as sensitive as nested PCR, and have a lower risk of contamination. They also have more sensitivity than viral cultures.
Differential diagnosis Shingles can be confused with herpes simplex, dermatitis herpetiformis and impetigo, and skin reactions caused by contact dermatitis, candidiasis, certain drugs and insect bites. Prevention Shingles risk can be reduced in children by the chickenpox vaccine if the vaccine is administered before the individual gets chickenpox. If primary infection has already occurred, there are shingles vaccines that reduce the risk of developing shingles or developing severe shingles if the disease occurs. They include a live attenuated virus vaccine, Zostavax, and an adjuvanted subunit vaccine, Shingrix. A review by Cochrane concluded that Zostavax was useful for preventing shingles for at least three years. This equates to about 50% relative risk reduction. The vaccine reduced rates of persistent, severe pain after shingles by 66% in people who contracted shingles despite vaccination. Vaccine efficacy was maintained through four years of follow-up. It has been recommended that people with primary or acquired immunodeficiency should not receive the live vaccine. Two doses of Shingrix are recommended, which provide about 90% protection at 3.5 years. As of 2016, it had been studied only in people with an intact immune system. It appears to also be effective in the very old. In the UK, shingles vaccination is offered by the National Health Service (NHS) to all people in their 70s. Zostavax is the usual vaccine, but Shingrix vaccine is recommended if Zostavax is unsuitable, for example for those with immune system issues. Vaccination is not available to people over 80 as "it seems to be less effective in this age group". By August 2017, just under half of eligible 70–78 year olds had been vaccinated. About 3% of those eligible by age have conditions that suppress their immune system, and should not receive Zostavax. There had been 1,104 adverse reaction reports by April 2018. 
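The efficacy percentages quoted above are relative risk reductions. As a hypothetical illustration (the risk numbers below are invented for the example, not taken from the trials mentioned here), the quantity can be computed as:

```python
def relative_risk_reduction(risk_vaccinated, risk_unvaccinated):
    """Relative risk reduction (RRR): the fractional decrease in risk
    among vaccinated people compared with unvaccinated people."""
    return 1.0 - risk_vaccinated / risk_unvaccinated

# Hypothetical example: if 2% of unvaccinated and 1% of vaccinated people
# developed shingles over the same period, the RRR would be 0.5 (50%),
# comparable to the roughly 50% figure cited for Zostavax.
```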
In the US, it is recommended that healthy adults 50 years and older receive two doses of Shingrix, two to six months apart. Treatment The aims of treatment are to limit the severity and duration of pain, shorten the duration of a shingles episode, and reduce complications. Symptomatic treatment is often needed for the complication of postherpetic neuralgia. However, a study on untreated shingles shows that, once the rash has cleared, postherpetic neuralgia is very rare in people under 50 and wears off in time; in older people, the pain wears off more slowly, but even in people over 70, 85% were free from pain a year after their shingles outbreak. Analgesics People with mild to moderate pain can be treated with over-the-counter pain medications. Topical lotions containing calamine can be used on the rash or blisters and may be soothing. Occasionally, severe pain may require an opioid medication, such as morphine. Once the lesions have crusted over, capsaicin cream (Zostrix) can be used. Topical lidocaine and nerve blocks may also reduce pain. Administering gabapentin along with antivirals may offer relief of postherpetic neuralgia. Antivirals Antiviral drugs may reduce the severity and duration of shingles; however, they do not prevent postherpetic neuralgia. Of these drugs, aciclovir has been the standard treatment, but the newer drugs valaciclovir and famciclovir demonstrate similar or superior efficacy and good safety and tolerability. The drugs are used both for prevention (for example in people with HIV/AIDS) and as therapy during the acute phase. Complications in immunocompromised individuals with shingles may be reduced with intravenous aciclovir. In people who are at a high risk for repeated attacks of shingles, five daily oral doses of aciclovir are usually effective. Steroids Corticosteroids do not appear to decrease the risk of long-term pain. Side effects, however, appear to be minimal.
Their use in Ramsay Hunt syndrome had not been properly studied as of 2008. Zoster ophthalmicus Treatment for zoster ophthalmicus is similar to standard treatment for shingles at other sites. A trial comparing aciclovir with its prodrug, valaciclovir, demonstrated similar efficacies in treating this form of the disease. Prognosis The rash and pain usually subside within three to five weeks, but about one in five people develop a painful condition called postherpetic neuralgia, which is often difficult to manage. In some people, shingles can reactivate presenting as zoster sine herpete: pain radiating along the path of a single spinal nerve (a dermatomal distribution), but without an accompanying rash. This condition may involve complications that affect several levels of the nervous system and cause many cranial neuropathies, polyneuritis, myelitis, or aseptic meningitis. Other serious effects that may occur in some cases include partial facial paralysis (usually temporary), ear damage, or encephalitis. Although initial infections with VZV during pregnancy, causing chickenpox, may lead to infection of the fetus and complications in the newborn, chronic infection or reactivation in shingles are not associated with fetal infection. There is a slightly increased risk of developing cancer after a shingles episode. However, the mechanism is unclear and mortality from cancer did not appear to increase as a direct result of the presence of the virus. Instead, the increased risk may result from the immune suppression that allows the reactivation of the virus. Although shingles typically resolves within 3–5 weeks, certain complications may arise: Secondary bacterial infection. Motor involvement, including weakness especially in "motor herpes zoster". Eye involvement: trigeminal nerve involvement (as seen in herpes ophthalmicus) should be treated early and aggressively as it may lead to blindness. 
Involvement of the tip of the nose in the zoster rash is a strong predictor of herpes ophthalmicus. Postherpetic neuralgia, a condition of chronic pain following shingles. Epidemiology Varicella zoster virus (VZV) has a high level of infectivity and has a worldwide prevalence. Shingles is a re-activation of latent VZV infection: zoster can only occur in someone who has previously had chickenpox (varicella). Shingles has no relationship to season and does not occur in epidemics. There is, however, a strong relationship with increasing age. The incidence rate of shingles ranges from 1.2 to 3.4 per 1,000 person‐years among younger healthy individuals, increasing to 3.9–11.8 per 1,000 person‐years among those older than 65 years, and incidence rates worldwide are similar. This relationship with age has been demonstrated in many countries, and is attributed to the fact that cellular immunity declines as people grow older. Another important risk factor is immunosuppression. Other risk factors include psychological stress. According to a study in North Carolina, "black subjects were significantly less likely to develop zoster than were white subjects." It is unclear whether the risk is different by sex. Other potential risk factors include mechanical trauma and exposure to immunotoxins. There is no strong evidence for a genetic link or a link to family history. A 2008 study showed that people with close relatives who had shingles were twice as likely to develop it themselves, but a 2010 study found no such link. Adults with latent VZV infection who are exposed intermittently to children with chickenpox receive an immune boost. This periodic boost to the immune system helps to prevent shingles in older adults. When routine chickenpox vaccination was introduced in the United States, there was concern that, because older adults would no longer receive this natural periodic boost, there would be an increase in the incidence of shingles. 
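To see how incidence rates of a few cases per 1,000 person-years relate to lifetime figures such as "about half of those living to age 85", a constant-hazard (exponential) approximation can be used. This is a hypothetical back-of-the-envelope sketch, not a method taken from the sources cited here, and it ignores the strong age dependence just described:

```python
import math

def cumulative_risk(rate_per_1000_person_years, years):
    """Approximate cumulative risk under a constant incidence rate,
    using the exponential (constant-hazard) model: 1 - exp(-rate * t)."""
    hazard = rate_per_1000_person_years / 1000.0
    return 1.0 - math.exp(-hazard * years)

# A mid-range adult rate of 3 per 1,000 person-years, if (unrealistically)
# held constant for 40 years, accumulates to roughly an 11% risk; the much
# higher rates at older ages are what push the lifetime figure far higher.
```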
Multiple studies and surveillance data, at least when viewed superficially, demonstrate no consistent trends in incidence in the U.S. since the chickenpox vaccination program began in 1995. However, upon closer inspection, the two studies that showed no increase in shingles incidence were conducted among populations where varicella vaccination was not as yet widespread in the community. A later study by Patel et al. concluded that since the introduction of the chickenpox vaccine, hospitalization costs for complications of shingles increased by more than $700 million annually for those over age 60. Another study by Yih et al. reported that as varicella vaccine coverage in children increased, the incidence of varicella decreased, and the occurrence of shingles among adults increased by 90%. The results of a further study by Yawn et al. showed a 28% increase in shingles incidence from 1996 to 2001. It is likely that incidence rate will change in the future, due to the aging of the population, changes in therapy for malignant and autoimmune diseases, and changes in chickenpox vaccination rates; a wide adoption of zoster vaccination could dramatically reduce the incidence rate. In one study, it was estimated that 26% of those who contract shingles eventually present complications. Postherpetic neuralgia arises in approximately 20% of people with shingles. A study of 1994 California data found hospitalization rates of 2.1 per 100,000 person-years, rising to 9.3 per 100,000 person-years for ages 60 and up. An earlier Connecticut study found a higher hospitalization rate; the difference may be due to the prevalence of HIV in the earlier study, or to the introduction of antivirals in California before 1994. History Shingles has a long recorded history, although historical accounts fail to distinguish the blistering caused by VZV and those caused by smallpox, ergotism, and erysipelas. Aulus Cornelius Celsus, around 25 BC to 50 AD, first used the term herpes zoster. 
In the late 18th century William Heberden established a way to differentiate shingles and smallpox, and in the late 19th century, shingles was differentiated from erysipelas. In 1831 Richard Bright hypothesized that the disease arose from the dorsal root ganglion, and an 1861 paper by Felix von Bärensprung confirmed this. Recognition that chickenpox and shingles were caused by the same virus came at the beginning of the 20th century. Physicians began to report that cases of shingles were often followed by chickenpox in younger people who lived with the person with shingles. The idea of an association between the two diseases gained strength when it was shown that lymph from a person with shingles could induce chickenpox in young volunteers. This was finally proved by the first isolation of the virus in cell cultures, by the Nobel laureate Thomas Huckle Weller, in 1953. Some sources also attribute the first isolation of the herpes zoster virus to Evelyn Nicol. Until the 1940s the disease was considered benign, and serious complications were thought to be very rare. However, by 1942, it was recognized that shingles was a more serious disease in adults than in children and that it increased in frequency with advancing age. Further studies during the 1950s on immunosuppressed individuals showed that the disease was not as benign as once thought, and the search for various therapeutic and preventive measures began. By the mid-1960s, several studies identified the gradual reduction in cellular immunity in old age, observing that in a cohort of 1,000 people who lived to the age of 85, approximately 500 (i.e., 50%) would have at least one attack of shingles, and 10 (i.e., 1%) would have at least two attacks. In historical shingles studies, shingles incidence generally increased with age. 
However, in his 1965 paper, Hope-Simpson suggested that the "peculiar age distribution of zoster may in part reflect the frequency with which the different age groups encounter cases of varicella and because of the ensuing boost to their antibody protection have their attacks of zoster postponed". Lending support to this hypothesis that contact with children with chickenpox boosts adult cell-mediated immunity to help postpone or suppress shingles, a study by Thomas et al. reported that adults in households with children had lower rates of shingles than households without children. Also, a study by Terada et al. indicated that pediatricians had incidence rates from one-half to one-eighth those of the general population of the same age. Etymology The family name of all the herpesviruses derives from the Greek word herpēs, from herpein ("to creep"), referring to the latent, recurring infections typical of this group of viruses. Zoster comes from Greek zōstēr, meaning "belt" or "girdle", after the characteristic belt-like dermatomal rash. The common name for the disease, shingles, derives from the Latin cingulus, a variant of Latin cingulum, meaning "girdle". Research Until the mid-1990s, infectious complications of the central nervous system (CNS) caused by VZV reactivation were regarded as rare. The presence of rash, as well as specific neurological symptoms, were required to diagnose a CNS infection caused by VZV. Since 2000, PCR testing has become more widely used, and the number of diagnosed cases of CNS infection has increased. Classic textbook descriptions state that VZV reactivation in the CNS is restricted to immunocompromised individuals and the elderly; however, studies have found that most participants are immunocompetent, and younger than 60 years old. Historically, vesicular rash was considered a characteristic finding, but studies have found that rash is only present in 45% of cases.
In addition, systemic inflammation is not as reliable an indicator as previously thought: the mean level of C-reactive protein and mean white blood cell count are within the normal range in participants with VZV meningitis. MRI and CT scans are usually normal in cases of VZV reactivation in the CNS. CSF pleocytosis, previously thought to be a strong indicator of VZV encephalitis, was absent in half of a group of people diagnosed with VZV encephalitis by PCR. The frequency of CNS infections presented at the emergency room of a community hospital is not negligible, so a means of diagnosing cases is needed. PCR is not a foolproof method of diagnosis, but because so many other indicators have turned out not to be reliable in diagnosing VZV infections in the CNS, PCR is the recommended method of testing for VZV. Negative PCR does not rule out VZV involvement, but a positive PCR can be used for diagnosis, and appropriate treatment started (for example, antivirals can be prescribed rather than antibiotics). The introduction of DNA analysis techniques has shown some complications of varicella-zoster to be more common than previously thought. For example, sporadic meningoencephalitis (ME) caused by varicella-zoster was regarded as a rare disease, mostly related to childhood chickenpox. However, meningoencephalitis caused by varicella-zoster is increasingly recognized as a predominant cause of ME among immunocompetent adults in non-epidemic circumstances. Diagnosis of complications of varicella-zoster, particularly in cases where the disease reactivates after years or decades of latency, is difficult. A rash (shingles) can be present or absent. Symptoms vary, and there is a significant overlap in symptoms with herpes-simplex symptoms. Although DNA analysis techniques such as polymerase chain reaction (PCR) can be used to look for DNA of herpesviruses in spinal fluid or blood, the results may be negative, even in cases where other definitive symptoms exist.
Notwithstanding these limitations, the use of PCR has resulted in an advance in the state of the art in our understanding of herpesviruses, including VZV, during the 1990s and 2000s. For example, in the past, clinicians believed that encephalitis was caused by herpes simplex and that people always died or developed serious long-term function problems. People were diagnosed at autopsy or by brain biopsy. Brain biopsy is not undertaken lightly: it is reserved only for serious cases that cannot be diagnosed by less invasive methods. For this reason, knowledge of these herpes virus conditions was limited to severe cases. DNA techniques have made it possible to diagnose "mild" cases, caused by VZV or HSV, in which the symptoms include fever, headache, and altered mental status. Mortality rates in treated people are decreasing.
Biology and health sciences
Viral diseases
Health
443995
https://en.wikipedia.org/wiki/Heliconia
Heliconia
Heliconia is a genus of flowering plants in the monotypic family Heliconiaceae. Most of the 194 known species are native to the tropical Americas, but a few are indigenous to certain islands of the western Pacific and Maluku in Indonesia. Many species of Heliconia are found in the tropical forests of these regions. Most species are listed as either vulnerable or data deficient by the IUCN Red List of threatened species. Several species are widely cultivated as ornamentals, and a few are naturalized in Florida, Gambia, and Thailand. Common names for the genus include lobster-claws, toucan beak, wild plantain, or false bird-of-paradise; the last term refers to their close similarity to the bird-of-paradise flowers in the Strelitzia genus. Collectively, these plants are also simply referred to as "heliconias". Heliconia originated in the Late Eocene (39 Ma) and are the oldest known clade of hummingbird-pollinated plants. Description These herbaceous plants range from 0.5 to nearly 4.5 m (1.5–15 ft) tall, depending on the species. Leaves The simple leaves of these plants are 15–300 cm (6 in–10 ft) long. They are characteristically long, oblong, alternate, or growing opposite one another on nonwoody petioles often longer than the leaf, often forming large clumps with age. The leaves in different positions on the plant have a different absorption potential of sunlight for photosynthesis when exposed to different degrees of sunlight. Flower Their flowers are produced on long, erect or drooping panicles, and consist of brightly colored, waxy bracts, with small true flowers peeping out from the bracts; the claw-like shape of the bracts is the source of the common name lobster-claws. The growth habit of heliconias is similar to Canna, Strelitzia, and bananas, to which they are related. The flowers can be hues of reds, oranges, yellows, and greens, and are subtended by brightly colored bracts. The flowers' shape often limits pollination to a subset of the hummingbirds in the region.
They also produce ample nectar that attracts these birds. Seeds Fruits are blue-purple when ripe and primarily bird dispersed. Studies of post-dispersal seed survival showed that seed size was not a determinant of survival. The highest amount of seed predation came from mammals. Taxonomy The generic name Heliconia was given by Carl Linnaeus in 1771 from the Greek word helikṓnios, derived from Helikṓn, after Mount Helicon in Boeotia, central Greece. Heliconia is the only genus in the monotypic family Heliconiaceae, but was formerly included in the family Musaceae, which includes the bananas (e.g. Musa, Ensete and so on). However, the APG system of 1998, and its successor, the APG II system of 2003, confirm the Heliconiaceae as distinct and place them in the order Zingiberales, in the commelinid clade of monocots. Species Species accepted by Kew Botanic Gardens Distribution and habitat Most of the 194 known species are native to the tropical Americas, but a few are indigenous to certain islands of the western Pacific and Maluku. Many species of Heliconia are found in the tropical forests of these regions. Several species are widely cultivated as ornamentals, and a few are naturalized in Florida, Gambia and Thailand. Ecology Heliconias are an important food source for forest hummingbirds, especially the hermits (Phaethornithinae), some of which – such as the rufous-breasted hermit (Glaucis hirsuta) – also use the plant for nesting. The Honduran white bat (Ectophylla alba) also lives in tents it makes from heliconia leaves. Bats Pollination Although Heliconia are almost exclusively pollinated by hummingbirds, some bat pollination has been found to occur. Heliconia solomonensis is pollinated by the macroglossine bat (Melonycteris woodfordi) in the Solomon Islands. Heliconia solomonensis has green inflorescences and flowers that open at night, which is typical of bat-pollinated plants. The macroglossine bat is the only known nocturnal pollinator of Heliconia solomonensis. 
Habitat Many bats use Heliconia leaves for shelter. The Honduran white bat, Ectophylla alba, utilizes five species of Heliconia to make diurnal tent-shaped roosts. The bat cuts the side veins of the leaf extending from the midrib, causing the leaf to fold like a tent. This structure provides the bat with shelter from rain, sun, and predators. In addition, the stems of the Heliconia leaves are not strong enough to carry the weight of typical bat predators, so shaking of the leaf alerts roosting bats to the presence of predators. The bats Artibeus anderseni and A. phaeotis form tents from the leaves of Heliconia in the same manner as the Honduran white bat. The neotropical disk-winged bat, Thyroptera tricolor, has suction disks on the wrists which allow it to cling to the smooth surfaces of the Heliconia leaves. This bat roosts head-up in the rolled young leaves of Heliconia plants. Insects Heliconias provide shelter for a diverse range of insects within their young rolled leaves and water-filled floral bracts. Insects that inhabit the rolled leaves often feed upon the inner surfaces of the leaf, such as beetles of the family Chrysomelidae. In bracts containing small amounts of water, fly larvae and beetles are the dominant inhabitants. In bracts with greater quantities of water, the typical inhabitants are mosquito larvae. Insects living in the bracts often feed on the bract tissue, nectar of the flower, flower parts, other insects, microorganisms, or detritus in the water contained in the bract (Siefert 1982). Almost all species of Hispini beetles that use rolled leaves are obligate herbivores of plants of the order Zingiberales, which includes Heliconia. These beetles live in and feed from the rolled leaf, the stems, the inflorescences, or the unfurled mature leaves of the Heliconia plant. In addition, these beetles deposit their eggs on the leaf surface, petioles of immature leaves, or in the bracts of the Heliconia. 
Furthermore, some wasp species such as Polistes erythrocephalus build their nest on the protected underside of large leaves. Hummingbirds Hummingbirds are the main pollinators of heliconia flowers in many locations. The concurrent diversification of hummingbird-pollinated taxa in the order Zingiberales and the hummingbird family (Trochilidae: Phaethornithinae) starting 18 million years ago supports the idea that these radiations have influenced one another through evolutionary time. At La Selva Research Station in Costa Rica, specific species of Heliconia were found to have specific hummingbird pollinators. These hummingbirds can be organized into two different groups: hermits and non-hermits. Hermits are the subfamily Phaethornithinae, consisting of the genera Anopetia, Eutoxeres, Glaucis, Phaethornis, Ramphodon, and Threnetes. Non-hermits are a catch-all group of other hummingbirds that often visit heliconias, comprising several clades (McGuire 2008). Hermits are generally traplining foragers; that is, individuals visit a repeated circuit of high-reward flowers instead of holding fixed territories. Non-hermits are territorial over their Heliconia clumps, causing greater self-pollination. Hermits tend to have long curved bills while non-hermits tend to possess short straight bills, a morphological difference that likely spurred the divergence of these groups in the Miocene. Characteristics of Heliconia flowers that select for either hermit or non-hermit pollinator specificity are degree of self-compatibility, flowering phenology, nectar production, color, and shape of flower. The hummingbird itself will choose the plants it feeds from on the basis of its beak shape, its perch on the plant, and its territory choice. Hummingbird visits to the Heliconia flower do not affect its production of nectar. This may account for the inconsistent amounts of nectar produced from flower to flower. Different Heliconia species have different flowering seasons. 
This suggests that the species compete for pollinators. Many species of Heliconia, even the newly colonized species, are visited by many different pollinators. Cultivation Several cultivars and hybrids have been selected for garden planting, including: H. psittacorum × H. spathocircinata, both species of South America, mainly Brazil H. × rauliniana = H. marginata (Venezuela) × H. bihai (Brazil) H. chartacea cv. 'Sexy Pink' Most commonly grown landscape Heliconia species include H. augusta, H. bihai, H. brasiliensis, H. caribaea, H. latispatha, H. pendula, H. psittacorum, H. rostrata, H. schiediana, and H. wagneriana. Uses Heliconias are grown for the florist's trade and as landscape plants. These plants do not grow well in cold, dry conditions. They are very drought intolerant, but can endure some soil flooding. Heliconias need an abundance of water, sunlight, and soils that are rich in humus in order to grow well. These flowers are grown in tropical regions all over the world as ornamental plants. The flower of H. psittacorum (parrot heliconia) is especially distinctive, its greenish-yellow flowers with black spots and red bracts reminiscent of the bright plumage of parrots.
Biology and health sciences
Monocots
null
444008
https://en.wikipedia.org/wiki/Alstroemeria
Alstroemeria
Alstroemeria (), commonly called the Peruvian lily or lily of the Incas, is a genus of flowering plants in the family Alstroemeriaceae. They are all native to South America, although some have become naturalized in the United States, Mexico, Australia, New Zealand, Madeira and the Canary Islands. Almost all of the species are restricted to one of two distinct centers of diversity: one in central Chile and southern Argentina, the other in eastern Brazil. Species of Alstroemeria from Patagonia are winter-growing plants, while those of Brazil are summer growing. All are long-lived perennials except A. graminea, a diminutive annual from the Atacama Desert of Chile. Description Plants of this genus grow from a cluster of tubers. They send up fertile and sterile stems, the fertile stems of some species reaching in height. The leaves are alternately arranged and resupinate, twisted on the petioles so that the undersides face up. The leaves are variable in shape and the blades have smooth edges. The flowers are solitary or borne in umbels. The flower has six petals each up to long. They come in many shades of red, orange, yellow, green, purple, pink, and white, flecked and striped and streaked with darker colors. There are six curving stamens. The stigma has three lobes. The fruit is a capsule with three valves. Alstroemeria have an inferior ovary, meaning the petals are attached above the ovary; like other monocots, their leaf veins are parallel. Distribution and habitat The genus Alstroemeria is exclusively native to South America, with various species found ranging from Venezuela (3° north of the Equator), to Tierra del Fuego, Argentina (53° South). Within this range of the entire genus, two centers of species diversity are recognized, one in Brazil and one in Chile. In Chile, Alstroemeria is amongst the most diverse genera of vascular monocotyledons, with more than 50 recognized or accepted taxa (species, subspecies and varieties). 
Of these taxa, roughly 80% are endemic to the Mediterranean matorral zone of central Chile. In Brazil, which is home to more than 40 species, most Alstroemeria species are found outside of the Amazonian region, and are concentrated towards the south and east of the country. Alstroemeria can be found in almost all types of habitat, from forests to savannahs, caatingas to swamps, and commonly, high altitude grasslands and rocky outcrops, with typical altitudes ranging from 300m in the Amazon, to 2300m in the Itatiaia National Park. Most Brazilian species have relatively restricted distributions. Taxonomy The genus was described by Johan Peter Falk and his thesis supervisor Carl Linnaeus in his 1762 dissertation Planta Alströmeria. Linnaeus bears the botanical authority (L.). Etymology The genus was named after the Swedish baron Clas Alströmer (1736–1794), a friend of Linnaeus. Cultivation and uses Many hybrids and at least 190 cultivars have been developed, featuring many markings and colors, including white, yellow, orange, apricot, pink, red, purple, and lavender. The most popular and showy hybrids commonly grown today result from crosses between species from Chile (winter-growing) with species from Brazil (summer-growing). This strategy has overcome the florists' problem of seasonal dormancy and resulted in plants that are evergreen, or nearly so, and flower for most of the year. This breeding work derives mainly from trials that began in the United States in the 1980s; the main breeding is done nowadays by companies in the Netherlands. The flower, which resembles a miniature lily, is very popular for bouquets and flower arrangements in the commercial cut flower trade. These delicate flowers survive up to 14 days in water without any signs of shrivelling. Most cultivars available for the home garden will bloom in the late spring and early summer. The roots are hardy to a temperature of . 
The plant requires at least six hours of morning sunlight, regular water, and well-drained soil. AGM cultivars The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit, all with a hardiness rating of H4 (Hardy – average winter ) apart from 'Friendship' (H5: Hardy – cold winter ): 'Apollo' (white/yellow flowers, 100 cm) 'Cahors' (pink/yellow, 90 cm) 'Coronet' (salmon/yellow flowers, 140 cm) 'Friendship' (yellow flushed pink, 100 cm) 'Orange Glory' (150 cm) 'Oriana' (salmon/yellow, 50 cm) 'Phoenix' (red/yellow, 100 cm) 'Red Elf' (100 cm) 'Sirius' (pink/yellow, 100 cm) 'Sonata' (red/yellow, 100 cm) 'Spitfire' (orange/yellow, 90 cm) 'Tessa' (red flowers, 120 cm) 'Yellow Friendship' (140 cm) Ecology Some alstroemerias have escaped cultivation and become weeds, such as Alstroemeria pulchella and A. aurea, which are now weeds in Australia.
Biology and health sciences
Monocots
null
444091
https://en.wikipedia.org/wiki/Weierstrass%20function
Weierstrass function
In mathematics, the Weierstrass function, named after its discoverer, Karl Weierstrass, is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is also an example of a fractal curve. The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points. Weierstrass's demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness. These types of functions were denounced by contemporaries: Henri Poincaré famously described them as "monsters" and called Weierstrass' work "an outrage against common sense", while Charles Hermite wrote that they were a "lamentable scourge". The functions were difficult to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves). Construction In Weierstrass's original paper, the function was defined as a Fourier series: $f(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x)$, where $0 < a < 1$, $b$ is a positive odd integer, and $ab > 1 + \tfrac{3}{2}\pi$. The minimum value of $b$ for which there exists $0 < a < 1$ such that these constraints are satisfied is $b = 7$. This construction, along with the proof that the function is not differentiable over any interval, was first delivered by Weierstrass in a paper presented to the Königliche Akademie der Wissenschaften on 18 July 1872. Despite being differentiable nowhere, the function is continuous: since the terms of the infinite series which defines it are bounded by $\pm a^n$ and this has finite sum for $0 < a < 1$, convergence of the sum of the terms is uniform by the Weierstrass M-test with $M_n = a^n$. 
Since each partial sum is continuous, by the uniform limit theorem, it follows that $f$ is continuous. Additionally, since each partial sum is uniformly continuous, it follows that $f$ is also uniformly continuous. It might be expected that a continuous function must have a derivative, or that the set of points where it is not differentiable should be countably infinite or finite. According to Weierstrass in his paper, earlier mathematicians including Gauss had often assumed that this was true. This might be because it is difficult to draw or visualise a continuous function whose set of nondifferentiable points is something other than a countable set of points. Analogous results for better behaved classes of continuous functions do exist, for example the Lipschitz functions, whose set of non-differentiability points must be a Lebesgue null set (Rademacher's theorem). When we try to draw a general continuous function, we usually draw the graph of a function which is Lipschitz or otherwise well-behaved. Moreover, the fact that the set of non-differentiability points for a monotone function is measure-zero implies that the rapid oscillations of Weierstrass' function are necessary to ensure that it is nowhere-differentiable. The Weierstrass function was one of the first fractals studied, although this term was not used until much later. The function has detail at every level, so zooming in on a piece of the curve does not show it getting progressively closer and closer to a straight line. Rather, between any two points no matter how close, the function will not be monotone. The computation of the Hausdorff dimension $D$ of the graph of the classical Weierstrass function was an open problem until 2018, while it was generally believed that $D = 2 + \log_b a$. That $D$ is strictly less than 2 follows from the conditions on $a$ and $b$ from above. Only after more than 30 years was this proved rigorously. 
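The series $f(x) = \sum_{n\ge 0} a^n \cos(b^n \pi x)$ is easy to explore numerically by truncation. The following sketch is illustrative rather than canonical: the parameter pair a = 0.9, b = 7 is one choice satisfying Weierstrass's conditions (0 < a < 1, b a positive odd integer, ab > 1 + 3π/2), and the assertions check the M-test bounds described above.

```python
import math

def weierstrass(x, a=0.9, b=7, terms=120):
    """Partial sum of W(x) = sum_{n>=0} a^n cos(b^n * pi * x).

    With a = 0.9 and b = 7, ab = 6.3 > 1 + 3*pi/2 (about 5.712),
    so Weierstrass's original constraints are met.
    """
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

# M-test bound: every term is bounded in absolute value by a^n, so
# |W(x)| <= 1/(1 - a) = 10 for all x, and the tail beyond N terms is
# at most a^N / (1 - a), giving uniform convergence.
for x in (0.0, 0.25, 1.0 / 3.0, 0.7):
    assert abs(weierstrass(x)) <= 1.0 / (1.0 - 0.9)
    # Going from 60 to 120 terms changes the value by less than the
    # geometric tail bound 0.9**60 / 0.1 (about 0.018).
    assert abs(weierstrass(x, terms=120) - weierstrass(x, terms=60)) < 0.9 ** 60 / 0.1
```

Plotting successive partial sums shows jaggedness appearing at ever finer scales, the self-similar behaviour that the nowhere-differentiability proof formalizes.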
The term Weierstrass function is often used in real analysis to refer to any function with similar properties and construction to Weierstrass's original example. For example, the cosine function can be replaced in the infinite series by a piecewise linear "zigzag" function. G. H. Hardy showed that the function of the above construction is nowhere differentiable with the assumptions $0 < a < 1$ and $ab \ge 1$. Riemann function The Weierstrass function is based on the earlier Riemann function $f(x) = \sum_{n=1}^{\infty} \frac{\sin(n^2 x)}{n^2}$, claimed to be differentiable nowhere. Occasionally, this function has also been called the Weierstrass function. While Bernhard Riemann strongly claimed that the function is differentiable nowhere, no evidence of this was published by Riemann, and Weierstrass noted that he did not find any evidence of it surviving either in Riemann's papers or orally from his students. In 1916, G. H. Hardy confirmed that the function does not have a finite derivative at any value $\pi x$ where $x$ is irrational or is rational of the form either $2A/(4B+1)$ or $(2A+1)/(2(2B+1))$, where $A$ and $B$ are integers. In 1969, Joseph Gerver found that the Riemann function has a defined derivative at every value of $x$ that can be expressed in the form $\pi(2A+1)/(2B+1)$ with integer $A$ and $B$, that is, at rational multiples of $\pi$ with an odd numerator and denominator. On these points, the function has a derivative of $-\tfrac{1}{2}$. In 1971, J. Gerver showed that the function has no finite derivative at the values of $x$ that can be expressed in the form $\pi \cdot 2A/(2B+1)$, completing the problem of the differentiability of the Riemann function. As the Riemann function is differentiable only on a null set of points, it is differentiable almost nowhere. Hölder continuity It is convenient to write the Weierstrass function equivalently as $W_\alpha(x) = \sum_{n=0}^{\infty} b^{-n\alpha} \cos(b^n \pi x)$ for $\alpha = -\log_b a$. Then $W_\alpha(x)$ is Hölder continuous of exponent $\alpha$, which is to say that there is a constant $C$ such that $|W_\alpha(x) - W_\alpha(y)| \le C\,|x - y|^\alpha$ for all $x$ and $y$. Moreover, $W_1$ is Hölder continuous of all orders $\alpha < 1$ but not Lipschitz continuous. 
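The Hölder scaling can be illustrated empirically. The sketch below is an assumption-laden demonstration, not part of the original analysis: it uses the illustrative parameters b = 3 and α = 0.5 for a truncated series of the form W_α(x) = Σ b^(−nα) cos(bⁿπx), and measures increments at shrinking scales h. The increments shrink roughly like h^α rather than like h, which is exactly the gap between Hölder and Lipschitz continuity.

```python
import math

def w_alpha(x, alpha=0.5, b=3, terms=60):
    # Partial sum of W_alpha(x) = sum_{n>=0} b^(-n*alpha) cos(b^n * pi * x),
    # with illustrative (assumed) parameters alpha = 0.5, b = 3.
    return sum(b ** (-n * alpha) * math.cos(b ** n * math.pi * x)
               for n in range(terms))

def max_increment(h, samples=200):
    # Largest |W_alpha(x + h) - W_alpha(x)| observed over a grid in [0, 1).
    return max(abs(w_alpha(i / samples + h) - w_alpha(i / samples))
               for i in range(samples))

# As h shrinks by four orders of magnitude, the Holder ratio |dW| / h**0.5
# stays of the same order, while the Lipschitz ratio |dW| / h blows up.
for h in (1e-1, 1e-3, 1e-5):
    d = max_increment(h)
    print(f"h={h:.0e}  |dW|={d:.4f}  "
          f"holder ratio={d / h ** 0.5:.2f}  lipschitz ratio={d / h:.1f}")
```

The same experiment with α closer to 1 shows the Hölder ratio staying bounded for every exponent strictly below 1, consistent with W₁ being Hölder continuous of all orders α < 1 but not Lipschitz.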
Density of nowhere-differentiable functions It turns out that the Weierstrass function is far from being an isolated example: although it is "pathological", it is also "typical" of continuous functions: In a topological sense: the set of nowhere-differentiable real-valued functions on [0, 1] is comeager in the vector space C([0, 1]; R) of all continuous real-valued functions on [0, 1] with the topology of uniform convergence. In a measure-theoretic sense: when the space C([0, 1]; R) is equipped with classical Wiener measure γ, the collection of functions that are differentiable at even a single point of [0, 1] has γ-measure zero. The same is true even if one takes finite-dimensional "slices" of C([0, 1]; R), in the sense that the nowhere-differentiable functions form a prevalent subset of C([0, 1]; R).
Mathematics
Specific functions
null
444283
https://en.wikipedia.org/wiki/Marine%20hatchetfish
Marine hatchetfish
Marine hatchetfishes or deep-sea hatchetfishes are small deep-sea mesopelagic ray-finned fish of the stomiiform subfamily Sternoptychinae. They should not be confused with the freshwater hatchetfishes, which are not particularly closely related Teleostei in the characiform family Gasteropelecidae. The scientific name means "Sternoptyx-subfamily", from Sternoptyx (the type genus) + the standard animal family suffix "-inae". It ultimately derives from Ancient Greek stérnon (στέρνον, "breast") + ptýx (πτύξ, "a fold/crease") + Latin forma ("external form"), the Greek part in reference to the thorax shape of marine hatchetfishes. Description and ecology Found in tropical, subtropical and temperate waters of the Atlantic, Pacific and Indian Oceans, marine hatchetfishes range in size from Polyipnus danae at to the c.-long giant hatchetfish (Argyropelecus gigas). They are small deep-sea fishes which have evolved a peculiar body shape and like their relatives have bioluminescent photophores. The latter allow them to use counter-illumination to escape predators that lurk in the depths: by matching the light intensity with the light penetrating the water from above, the fish does not appear darker if seen from below. They typically occur at a few hundred meters below the surface, but their entire depth range spans from 50 to 1,500 meters deep. The body is deep and laterally extremely compressed, somewhat resembling a hatchet (with the thorax being the "blade" and the caudal peduncle being the "handle"). The genus Polyipnus is rounded, the other two – in particular Sternoptyx – decidedly angular if seen from the side. Their pelvis is rotated to a vertical position. The mouth is located at the tip of the snout and directed almost straight downwards. Their scales are silvery, delicate and easily abraded. In some species, such as the highlight hatchetfish (Sternoptyx pseudobscura), large sections of the body at the base of the anal fin and/or caudal fin are transparent. 
They have perpendicular spines and blade-like pterygiophores in front of the dorsal fin. The anal fin has 11–19 rays and in some species is divided in two parts; almost all have an adipose fin. Their large, sometimes tube-shaped eyes can collect the faintest of light and focus well on objects both close and far. They are directed somewhat upwards, most conspicuously in the genus Argyropelecus. This allows them to discern the silhouettes of prey moving overhead against the slightly brighter upper waters. Genera There are three genera in this subfamily, with some 40 species altogether: Argyropelecus – silver hatchetfishes (7 species) Polyipnus (32 species) Sternoptyx (4 species) Four fossil genera are also known: Eosternoptyx (middle-late Eocene of Iran) Polyipnoides (middle Eocene of Georgia) Horbatshia (Oligocene of the Carpathians) Discosternon (Miocene of Italy)
Biology and health sciences
Stomiiformes
Animals
444511
https://en.wikipedia.org/wiki/Hedera
Hedera
Hedera, commonly called ivy (plural ivies), is a genus of 12–15 species of evergreen climbing or ground-creeping woody plants in the family Araliaceae, native to Western Europe, Central Europe, Southern Europe, Macaronesia, northwestern Africa and across central-southern Asia east to Japan and Taiwan. Several species are cultivated as climbing ornamentals, and the name ivy especially denotes common ivy (Hedera helix), known in North America as "English ivy", which is frequently planted to clothe brick walls. Description On level ground ivies remain creeping, not exceeding 5–20 cm height, but on surfaces suitable for climbing, including trees, natural rock outcrops or man-made structures such as quarry rock faces or built masonry and wooden structures, they can climb to at least 30 m above the ground. Ivies have two leaf types, with palmately lobed juvenile leaves on creeping and climbing stems and unlobed cordate adult leaves on fertile flowering stems exposed to full sun, usually high in the crowns of trees or the tops of rock faces, from 2 m or more above ground. The juvenile and adult shoots also differ, the former being slender, flexible and scrambling or climbing with small aerial roots to affix the shoot to the substrate (rock or tree bark), the latter thicker, self-supporting and without roots. The flowers are greenish-yellow with five small petals; they are produced in umbels in autumn to early winter and are very rich in nectar. The fruit is a greenish-black, dark purple or (rarely) yellow berry 5–10 mm diameter with one to five seeds, ripening in late winter to mid-spring. The seeds are dispersed by birds which eat the berries. The species differ in detail of the leaf shape and size (particularly of the juvenile leaves) and in the structure of the leaf trichomes, and also in the size and, to a lesser extent, the colour of the flowers and fruit. The chromosome number also differs between species. 
The basic diploid number is 48, while some are tetraploid with 96, and others hexaploid with 144 and octaploid with 192 chromosomes. Ecology Ivies are natives of Eurasia and North Africa, but have been introduced to North America and Australia. They invade disturbed forest areas in North America. Ivy seeds are spread by birds. Ivies are of major ecological importance for their nectar and fruit production, both produced at times of the year when few other nectar or fruit sources are available. The ivy bee Colletes hederae is completely dependent on ivy flowers, timing its entire life cycle around ivy flowering. The fruit are eaten by a range of birds, including thrushes, blackcaps, and woodpigeons. The leaves are eaten by the larvae of some species of Lepidoptera such as angle shades, lesser broad-bordered yellow underwing, scalloped hazel, small angle shades, small dusty wave (which feeds exclusively on ivy), swallow-tailed moth and willow beauty. A very wide range of invertebrates shelter and overwinter in the dense woody tangle of ivy. Birds and small mammals also nest in ivy. It serves to increase the surface area and complexity of woodland environments. Taxonomy The following species are widely accepted; they are divided into two main groups, depending on whether they have scale-like or stellate trichomes on the undersides of the leaves: Trichomes scale-like Hedera algeriensis Hibberd – Algerian ivy. Algeria, Tunisia (Mediterranean coast). Hedera canariensis Willd. – Canaries ivy. Canary Islands. Hedera colchica (K.Koch) K.Koch – Persian ivy. Alborz, Caucasus, Turkey. Hedera cypria McAllister – Cyprus ivy (syn. H. pastuchovii subsp. cypria (McAll.) Hand). Cyprus (Troodos Mts.) Hedera iberica (McAllister) Ackerfield & J.Wen – Iberian ivy. SW Iberian coasts. Hedera maderensis – Madeiran ivy. Madeira. Hedera maroccana McAllister – Moroccan ivy. Morocco. Hedera nepalensis K.Koch – Himalayan ivy (syn. H. sinensis (Tobl.) Hand.-Mazz.). Himalaya, SW China. 
Hedera pastuchovii G.Woronow – Pastuchov's ivy. Caucasus, Alborz. Hedera rhombea (Miq.) Siebold ex Bean – Japanese ivy. Japan, Korea, Taiwan. Trichomes stellate Hedera azorica Carrière – Azores ivy. Azores. Hedera helix L. – Common ivy (syn. H. caucasigena Pojark., H. taurica (Hibberd) Carrière). Europe, and widely introduced elsewhere. Hedera hibernica (G.Kirchn.) Bean – Atlantic ivy (syn. H. helix subsp. hibernica (G.Kirchn.) D.C.McClint.). Atlantic western Europe. The species of ivy are largely allopatric and closely related, and many have on occasion been treated as varieties or subspecies of H. helix, the first species described. Several additional species have been described in the southern parts of the former Soviet Union, but are not regarded as distinct by most botanists. Hybrids have been recorded between several Hedera species, including Atlantic ivy (H. hibernica) with common ivy (H. helix). Hybridisation may also have played a part in the evolution of some species in the genus. A well-known hybrid involving ivies is the intergeneric hybrid × Fatshedera lizei, a cross between Fatsia japonica and Hedera hibernica. This hybrid was produced once in a garden in France in 1910 and has never successfully been repeated, the hybrid being maintained in cultivation by vegetative propagation. Uses and cultivation Ivies are very popular in cultivation within their native range and compatible climates elsewhere, for their evergreen foliage, attracting wildlife, and for adaptable design uses in narrow planting spaces and on tall or wide walls for aesthetic addition, or to hide unsightly walls, fences and tree stumps. Numerous cultivars with variegated foliage and/or unusual leaf shapes have been selected for horticultural use. The American Ivy Society is the International Cultivar Registration Authority for Hedera, and recognises over 400 registered cultivars. Problems and dangers On trees Much discussion has involved whether or not ivy climbing trees harms them. 
In Europe, the harm is generally minor although there can be competition for soil nutrients, light, and water, and senescent trees supporting heavy ivy growth can be liable to windthrow damage. The UK's Woodland Trust says "Ivy has long been accused of strangling trees, but it doesn’t harm the tree at all, and even supports at least 50 species of wildlife." Harm and problems are more significant in North America, where ivy is without the natural pests and diseases that control its vigour in its native continents; the photosynthesis or structural strength of a tree can be overwhelmed by aggressive ivy growth leading to death directly or by opportunistic disease and insect attacks. Invasive exotic Several ivy species have become serious invasive species (invasive exotics) in natural native plant habitats, especially riparian and woodland types, and also a horticultural weed in gardens of the western and southern regions of North America with milder winters. Ivies create a dense, vigorously smothering, shade-tolerant evergreen groundcover that can spread through assertive underground rhizomes and above-ground runners quickly over large natural plant community areas and outcompete the native vegetation. The use of ivies as ornamental plants in horticulture in California and other states is now discouraged or banned in certain jurisdictions. Drought-tolerant H. canariensis and H. algeriensis and European H. helix were originally cultivated in garden, park, and highway landscaping, but they have become aggressively invasive in coastal forests and riparian ecosystems, now necessitating costly eradication programs. Similar problems exist in Australia. Only one species of ivy, H. helix, bears the status of 'declared weed' in Australia, in the Australian Capital Territory and Western Australia only. Toxicity The berries are moderately toxic to humans. Ivy foliage contains triterpenoid saponins and falcarinol. Falcarinol is capable of inducing contact dermatitis. 
It has also been shown to kill breast cancer cells. Stinging insects The flowers of ivy are pollinated by Hymenoptera and are particularly attractive to the common wasp. Etymology and other names The name ivy derives from Old English ifig, cognate with German Efeu, of unknown original meaning. The scientific name Hedera is the classical Latin name for the plant. Old regional common names in Britain, no longer used, include "Bindwood" and "Lovestone", for the way it clings and grows over stones and bricks. US Pacific Coast regional common names for H. canariensis include "California ivy" and "Algerian ivy". For H. helix, regional common names include "common ivy" (Britain and Ireland) and "English ivy" (North America). The name ivy has also been used as a common name for a number of other unrelated plants, including Boston ivy (Japanese Creeper Parthenocissus tricuspidata, in the family Vitaceae), Cape-ivy (used interchangeably for Senecio angulatus and Delairea odorata, Asteraceae), poison-ivy (Toxicodendron radicans, Anacardiaceae), Swedish ivy (Whorled Plectranthus Plectranthus verticillatus, Lamiaceae) and ground ivy (Glechoma hederacea, also Lamiaceae), and Kenilworth ivy (Cymbalaria muralis, Plantaginaceae). Cultural symbolism Like many other evergreen plants, which impressed European cultures by persisting through the winter, ivy has traditionally been imbued with a spiritual significance. It was brought into homes to drive out evil spirits. In Ancient Rome it was believed that a wreath of ivy could prevent a person from becoming drunk, and such a wreath was worn by Bacchus, the god of intoxication. Ivy bushes or ivy-wrapped poles have traditionally been used to advertise taverns in the United Kingdom, and many pubs are still called The Ivy. 
The clinging nature of ivy makes it a symbol of love and friendship; there was once a tradition of priests giving ivy to newlyweds. As it clings to dead trees and remains green, it was also viewed in medieval Christian symbolism as a symbol of the eternal life of the soul after the death of the body. The traditional British Christmas carol, "The Holly and the Ivy", uses ivy as a symbol for the Virgin Mary. Ivy-covered ruins were a staple of the Romantic movement in landscape painting, for example Visitor to a Moonlit Churchyard by Philip James de Loutherbourg (1790), Tintern Abbey, West Front by Joseph Mallord William Turner (1794) and Netley Abbey by Francis Towne (1809). In this context ivy may represent the ephemerality of human endeavours and the sublime power of nature. The image of ivy-covered historic buildings gave the name Ivy League to a group of old and prestigious American universities. Ivy features extensively in the 2010 movie Arrietty and the poster for the film.
https://en.wikipedia.org/wiki/Dilophosaurus
Dilophosaurus
Dilophosaurus is a genus of theropod dinosaurs that lived in what is now North America during the Early Jurassic, about 186 million years ago. Three skeletons were discovered in northern Arizona in 1940, and the two best preserved were collected in 1942. The most complete specimen became the holotype of a new species in the genus Megalosaurus, named M. wetherilli by Samuel P. Welles in 1954. Welles found a larger skeleton belonging to the same species in 1964. Realizing it bore crests on its skull, he assigned the species to the new genus Dilophosaurus in 1970, as Dilophosaurus wetherilli. The genus name means "two-crested lizard", and the species name honors John Wetherill, a Navajo councilor. Further specimens have since been found, including an infant. Fossil footprints have also been attributed to the animal, including resting traces. Another species, Dilophosaurus sinensis from China, was named in 1993, but was later found to belong to the genus Sinosaurus. At about in length, with a weight of about , Dilophosaurus was one of the earliest large predatory dinosaurs and the largest known land animal in North America at the time. It was slender and lightly built, and the skull was proportionally large, but delicate. The snout was narrow, and the upper jaw had a gap or kink below the nostril. It had a pair of longitudinal, arched crests on its skull; their complete shape is unknown, but they were probably enlarged by keratin. The mandible was slender and delicate at the front, but deep at the back. The teeth were long, curved, thin, and compressed sideways. Those in the lower jaw were much smaller than those of the upper jaw. Most of the teeth had serrations at their front and back edges. The neck was long, and its vertebrae were hollow and very light. The arms were powerful, with a long and slender upper arm bone. 
The hands had four fingers: the first was short but strong and bore a large claw; the two following fingers were longer and more slender with smaller claws; the fourth was vestigial. The thigh bone was massive, the feet were stout, and the toes bore large claws. Dilophosaurus has been considered a member of the family Dilophosauridae along with Dracovenator, a group placed between the Coelophysidae and later theropods, but some researchers have not found support for this grouping. Dilophosaurus would have been active and bipedal, and may have hunted large animals; it could also have fed on smaller animals and fish. Due to the limited range of movement and shortness of the forelimbs, the mouth may instead have made first contact with prey. The function of the crests is unknown; they were too weak for battle, but may have been used in visual display, such as species recognition and sexual selection. It may have grown rapidly, attaining a growth rate of per year early in life. The holotype specimen had multiple paleopathologies, including healed injuries and signs of a developmental anomaly. Dilophosaurus is known from the Kayenta Formation, and lived alongside dinosaurs such as Scutellosaurus and Sarahsaurus. It was designated as the state dinosaur of Connecticut based on tracks found there. Dilophosaurus was featured in the novel Jurassic Park and its movie adaptation, where it was given the fictional abilities to spit venom and expand a neck frill, and was depicted as smaller than the real animal. History of discovery In the summer of 1942, the paleontologist Charles L. Camp led a field party from the University of California Museum of Paleontology (UCMP) in search of fossil vertebrates in Navajo County in northern Arizona. Word of this was spread among the Native Americans there, and the Navajo Jesse Williams brought three members of the expedition to some fossil bones he had discovered in 1940. 
The area was part of the Kayenta Formation, about north of Cameron near Tuba City in the Navajo Indian Reservation. Three dinosaur skeletons were found in purplish shale, arranged in a triangle, about long at one side. The first was nearly complete, lacking only the front of the skull, parts of the pelvis, and some vertebrae. The second, though very eroded, included the front of the skull, lower jaws, some vertebrae, limb bones, and an articulated hand. The third was so eroded that it consisted only of vertebral fragments. The first good skeleton was encased in a block of plaster after 10 days of work and loaded onto a truck; the second skeleton was easily collected, as it was almost entirely weathered out of the ground, but the third skeleton was almost gone. The nearly complete first specimen was cleaned and mounted at the UCMP under the supervision of the paleontologist Wann Langston, a process that took three men two years. The skeleton was wall-mounted in bas relief, with the tail curved upwards, the neck straightened, and the left leg moved up for visibility, but the rest of the skeleton was kept in its burial position. As the skull was crushed, it was reconstructed based on the back of the skull of the first specimen and the front of the second. The pelvis was reconstructed after that of Allosaurus, and the feet were also reconstructed. At the time, it was one of the best-preserved skeletons of a theropod dinosaur, though incomplete. In 1954, the paleontologist Samuel P. Welles, who was part of the group that excavated the skeletons, preliminarily described and named this dinosaur as a new species in the existing genus Megalosaurus, M. wetherilli. The nearly complete specimen (catalogued as UCMP 37302) was made the holotype of the species, and the second specimen (UCMP 37303) was made the paratype. The specific name honored John Wetherill, a Navajo councilor whom Welles described as an "explorer, friend of scientists, and trusted trader". 
Wetherill's nephew, Milton, had first informed the expedition of the fossils. Welles placed the new species in Megalosaurus because its limb proportions were similar to those of M. bucklandii, and because he did not find great differences between them. At the time, Megalosaurus was used as a "wastebasket taxon", wherein many species of theropods were placed, regardless of their age or locality. Welles returned to Tuba City in 1964 to determine the age of the Kayenta Formation (it had been suggested to be Late Triassic in age, whereas Welles thought it was Early to Middle Jurassic), and discovered another skeleton about south of where the 1942 specimens had been found. The nearly complete specimen (catalogued as UCMP 77270) was collected with the help of William J. Breed of the Museum of Northern Arizona and others. During preparation of this specimen, it became clear that it was a larger individual of M. wetherilli, and that it would have had two crests on the top of its skull. One crest, being a thin plate of bone, was originally thought to be part of the missing left side of the skull, which may have been pulled out of its position by a scavenger. When it became apparent that it was a crest, it was also realized that a corresponding crest would have been on the left side, since the right crest was right of the midline, and was concave along its middle length. This discovery led to re-examination of the holotype specimen, which was found to have the bases of two thin, upwards-extended bones, which were crushed together. These also represented crests, but they had formerly been assumed to be part of a misplaced cheek bone. The two 1942 specimens were also found to be juveniles, while the 1964 specimen was an adult, about one-third larger than the others. Welles later recalled that he thought the crests were as unexpected as finding "wings on a worm". 
New genus and subsequent discoveries Welles and an assistant subsequently corrected the wall mount of the holotype specimen based on the new skeleton, by restoring the crests, redoing the pelvis, making the neck ribs longer, and placing them closer together. After studying the skeletons of North American and European theropods, Welles realized that the dinosaur did not belong to Megalosaurus, and needed a new genus name. At that time, no other theropods with large longitudinal crests on their heads were known, and the dinosaur had therefore gained the interest of paleontologists. A mold of the holotype specimen was made, and fiberglass casts of it were distributed to various exhibits; to make labeling these casts easier, Welles decided to name the new genus in a brief note, rather than wait until the publication of a detailed description. In 1970, Welles coined the new genus name Dilophosaurus, from the Greek words di (δι) meaning "two", lophos (λόφος) meaning "crest", and sauros (σαῦρος) meaning "lizard": "two-crested lizard". Welles published a detailed osteological description of Dilophosaurus in 1984, but did not include the 1964 specimen, since he thought it belonged to a different genus. Dilophosaurus was the first well-known theropod from the Early Jurassic, and remains one of the best-preserved examples of that age. In 2001, the paleontologist Robert J. Gay identified the remains of at least three new Dilophosaurus specimens (this number is based on the presence of three pubic bone fragments and two differentially sized femora) in the collections of the Museum of Northern Arizona. The specimens were found in 1978 in the Rock Head Quadrangle, away from where the original specimens were found, and had been labeled as a "large theropod". Though most of the material is damaged, it is significant in including elements not preserved in the earlier specimens, including part of the pelvis and several ribs. 
Some elements in the collection belonged to an infant specimen (MNA P1.3181), the youngest known example of this genus, and one of the earliest known infant theropods from North America, only preceded by some Coelophysis specimens. The juvenile specimen includes a partial humerus, a partial fibula, and a tooth fragment. In 2005, paleontologist Ronald S. Tykoski assigned a specimen (TMM 43646-140) from Gold Spring, Arizona, to Dilophosaurus, but in 2012, paleontologist Matthew T. Carrano and colleagues found it to differ in some details. In 2020, the paleontologists Adam D. Marsh and Timothy B. Rowe comprehensively redescribed Dilophosaurus based on the specimens known by then, including specimen UCMP 77270, which had remained undescribed since 1964. They also removed some previously assigned specimens, finding them too fragmentary to identify, and relocated the type quarry with the help of a relative of Jesse Williams. In an interview, Marsh called Dilophosaurus the "best worst-known dinosaur", since the animal was poorly understood despite having been discovered 80 years earlier. A major problem was that previous studies of the specimens did not make clear which parts were original fossils and which were reconstructed in plaster, yet later researchers had only Welles' 1984 monograph to rely on for their studies, muddling understanding of the dinosaur's anatomy. Marsh spent seven years studying the specimens to clarify the issues surrounding the dinosaur, including two specimens found two decades earlier by Rowe, his Ph.D. advisor. Formerly assigned species In 1984, Welles suggested that the 1964 specimen (UCMP 77270) did not belong to Dilophosaurus, but to a new genus, based on differences in the skull, vertebrae, and femora. He maintained that both genera bore crests, but that the exact shape of these was unknown in Dilophosaurus. 
Welles died in 1997, before he could name this supposed new dinosaur, and the idea that the two were separate genera has generally been ignored or forgotten since. In 1999, amateur paleontologist Stephan Pickering privately published the new name Dilophosaurus "breedorum" based on the 1964 specimen, named in honor of Breed, who had assisted in collecting it. This name is considered a nomen nudum, an invalidly published name, and Gay pointed out in 2005 that no significant differences exist between D. "breedorum" and other D. wetherilli specimens. In 2012, Carrano and colleagues found differences between the 1964 specimen and the holotype specimen, but attributed them to variation between individuals rather than species. Paleontologists Christophe Hendrickx and Octávio Mateus suggested in 2014 that the known specimens might represent two species of Dilophosaurus based on different skull features and stratigraphic separation, pending thorough description of assigned specimens. Marsh and Rowe concluded in 2020 that there was only one taxon among known Dilophosaurus specimens, and that differences between them were due to their different degree of maturity and preservation. They did not find considerable stratigraphic separation between the specimens either. A nearly complete theropod skeleton (KMV 8701) was discovered in the Lufeng Formation, in Yunnan Province, China, in 1987. It is similar to Dilophosaurus, with a pair of crests and a gap separating the premaxilla from the maxilla, but differs in some details. The paleontologist Shaojin Hu named it as a new species of Dilophosaurus in 1993, D. sinensis (from Greek Sinai, referring to China). In 1998, the paleontologist Matthew C. Lamanna and colleagues found D. sinensis to be identical to Sinosaurus triassicus, a theropod from the same formation, named in 1940. 
This conclusion was confirmed by paleontologist Lida Xing and colleagues in 2013, and though paleontologist Guo-Fu Wang and colleagues agreed the species belonged in Sinosaurus in 2017, they suggested it may be a separate species, S. sinensis. Description Dilophosaurus was one of the earliest large predatory dinosaurs, a medium-sized theropod, though small compared to some of the later theropods. It was also the largest known land animal of North America during the Early Jurassic. Slender and lightly built, it was comparable in size to a brown bear. The largest known specimen weighed about , measured about in length, and its skull was long. The smaller holotype specimen weighed about , was long, with a hip height of about , and its skull was long. A resting trace of a theropod similar to Dilophosaurus and Liliensternus has been interpreted by some researchers as showing impressions of feathers around the belly and feet, similar to down. Other researchers instead interpret these impressions as sedimentological artifacts created as the dinosaur moved, though this interpretation does not rule out that the track-maker could have borne feathers. Skull The skull of Dilophosaurus was large in proportion to the overall skeleton, yet delicate. The snout was narrow in front view, becoming narrower towards the rounded top. The premaxilla (front bone of the upper jaw) was long and low when seen from the side, bulbous at the front, and its outer surface became less convex from snout to naris (bony nostril). The nostrils were placed further back than in most other theropods. The premaxillae were in close articulation with each other, and while the premaxilla only connected to the maxilla (the following bone of the upper jaw) at the middle of the palate, with no connection at the side, they formed a strong joint through the robust, interlocking articulation between the hindwards and forwards directed processes of these bones. 
Hindwards and below, the premaxilla formed a wall for a gap between itself and the maxilla called the subnarial gap (also termed a "kink"). Such a gap is also present in coelophysoids, as well as other dinosaurs. The subnarial gap resulted in a diastema, a gap in the tooth row (which has also been called a "notch"). Within the subnarial gap was a deep excavation behind the toothrow of the premaxilla, called the subnarial pit, which was walled by a downwards keel of the premaxilla. The outer surface of the premaxilla was covered in foramina (openings) of varying sizes. The upper of the two backward-extending processes of the premaxilla was long and low, and formed most of the upper border of the elongated naris. It had a dip towards the front, which made the area by its base concave in profile. The underside of the premaxilla containing the alveoli (tooth sockets) was oval. The maxilla was shallow, and was depressed around the antorbital fenestra (a large opening in front of the eye), forming a recess that was rounded towards the front, and smoother than the rest of the maxilla. A foramen called the preanteorbital fenestra opened into this recess at the front bend. Large foramina ran on the side of the maxilla, above the alveoli. A deep nutrient groove ran backward from the subnarial pit along the base of the interdental plates (or rugosae) of the maxilla. Dilophosaurus bore a pair of high, thin, and arched (or plate-shaped) crests longitudinally on the skull roof. The crests (termed the nasolacrimal crests) began as low ridges on the premaxillae and were mainly formed by the upwards expanded nasal and lacrimal bones. These bones were coossified together (fusion during bone tissue formation), so the sutures between them cannot be determined. The lacrimal bone expanded into a thick, rugose boss, forming an arc at the upper front border of the orbit (eye socket), and supported the bottom of the back of the crest. 
Uniquely for this genus, the rim above the orbit continued hindwards and ended in a small, almost triangular process behind the orbit, which curved slightly outwards. Since only a short part of the upper surface of this process is unbroken, the rest of the crest may have risen above the skull over a distance of ~. The preserved part of the crest in UCMP 77270 is tallest around the midpoint of the antorbital fenestra's length. UCMP 77270 preserves the concave shelf between the bases of the crests, and when seen from the front, they are projected upwards and to the sides at an ~80° angle. Welles found the crests reminiscent of a double-crested cassowary, while Marsh and Rowe stated they were probably covered in keratin or keratinized skin. They pointed out that by comparison with helmeted guineafowl, the keratin on the crests of Dilophosaurus could have enlarged them much more than what is indicated by the bone. As only one specimen preserves much of the crests, whether they differed between individuals is unknown. CT scans show that air sacs (pockets of air that provide strength for and lighten bones) were present in the bones that surrounded the brain, and were continuous with the sinus cavities in the front of the skull. The antorbital fenestra was continuous with the side of the crests, which indicates the crests also had air sacs (a ridge of bone forms a roof over the antorbital fenestrae in most other theropods). The orbit was oval, and narrow towards the bottom. The jugal bone had two upwards-pointing processes, the first of which formed part of the lower margin of the antorbital fenestra, and part of the lower margin of the orbit. A projection from the quadrate bone into the lateral temporal fenestra (opening behind the eye) gave this a reniform (kidney-shaped) outline. 
The foramen magnum (the large opening at the back of the braincase) was about half the breadth of the occipital condyle, which was itself cordiform (heart-shaped), and had a short neck and a groove on the side. The mandible was slender and delicate at the front, but the articular region (where it connected with the skull) was massive, and the mandible was deep around the mandibular fenestra (an opening on its side). The mandibular fenestra was small in Dilophosaurus, compared to that of coelophysoids, and reduced from front to back, uniquely for this genus. The dentary bone (the front part of the mandible, which bore most of the lower teeth) had an up-curved rather than pointed chin. The chin had a large foramen at the tip, and a row of small foramina ran in rough parallel with the upper edge of the dentary. On the inner side, the mandibular symphysis (where the two halves of the lower jaw connected) was flat and smooth, and showed no sign of being fused with its opposite half. A Meckelian foramen ran along the outer side of the dentary. The side surface of the surangular had a unique pyramidal process in front of the articulation with the quadrate, and this horizontal ridge formed a shelf. The retroarticular process of the mandible (a backwards projection) was long. Dilophosaurus had four teeth in each premaxilla, 12 in each maxilla, and 17 in each dentary. The teeth were generally long, thin, and recurved, with relatively small bases. They were compressed sideways, oval in cross-section at the base, lenticular (lens-shaped) above, and slightly concave on their outer and inner sides. The largest tooth of the maxilla was either in or near the fourth alveolus, and the height of the tooth crowns decreased hindwards. The first tooth of the maxilla pointed slightly forwards from its alveolus because the lower border of the premaxilla process (which projected backward towards the maxilla) was upturned. The teeth of the dentary were much smaller than those of the maxilla. 
The third or fourth tooth in the dentary of Dilophosaurus and some coelophysoids was the largest there, and seems to have fit into the subnarial gap of the upper jaw. Most of the teeth had serrations on the front and back edges, which were offset by vertical grooves, and were smaller at the front. About 31 to 41  serrations were on the front edges, and 29 to 33 were on the back. At least the second and third teeth of the premaxilla had serrations, but the fourth tooth did not. The teeth were covered in a thin layer of enamel, thick, which extended far towards their bases. The alveoli were elliptical to almost circular, and all were larger than the bases of the teeth they contained, which may therefore have been loosely held in the jaws. Though the number of alveoli in the dentary would seem to indicate that the teeth were very crowded, they were rather far apart, due to the larger size of their alveoli. The jaws contained replacement teeth at various stages of eruption. The interdental plates between the teeth were very low. Postcranial skeleton Dilophosaurus had 10 cervical (neck), 14 dorsal (back), and 45 caudal (tail) vertebrae, and air sacs grew into the vertebrae. It had a long neck, which was probably flexed nearly 90° by the skull and by the shoulder, holding the skull in a horizontal posture. The cervical vertebrae were unusually light; their centra (the "bodies" of the vertebrae) were hollowed out by pleurocoels (depressions on the sides) and centrocoels (cavities on the inside). The arches of the cervical vertebrae also had pneumatic fossae (or chonoses), conical recesses so large that the bones separating them were sometimes paper-thin. The centra were plano-concave, flat to weakly convex at the front and deeply cupped (or concave) at the back, similar to Ceratosaurus. This indicates that the neck was flexible, though it had long, overlapping cervical ribs, which were fused to the centra. The cervical ribs were slender and may have bent easily. 
The atlas bone (the first cervical vertebra, which attaches to the skull) had a small, cubic centrum, and had a concavity at the front where it formed a cup for the occipital condyle (protuberance that connects with the atlas vertebra) at the back of the skull. The axis bone (the second cervical vertebra) had a heavy spine, and its postzygapophyses (the processes of the vertebrae that articulated with the prezygapophyses of a following vertebra) were met by long prezygapophyses that curved upwards from the third cervical vertebra. The centra and neural spines of the cervical vertebrae were long and low, and the spines were stepped in side view, forming "shoulders" at the front and back, as well as taller, central "caps" that gave the appearance of a Maltese cross (cruciform) when seen from above, distinctive features of this dinosaur. The posterior centrodiapophyseal lamina of the cervicals showed serial variation, bifurcating and reuniting down the neck, a unique feature. The neural spines of the dorsal vertebrae were also low and expanded front and back, which formed strong attachments for ligaments. Uniquely for this genus, additional laminae emanated from the middle trunk vertebrae's anterior centrodiapophyseal laminae and posterior centrodiapophyseal laminae. The sacral vertebrae which occupied the length of the ilium blade did not appear to be fused. The rib of the first sacral vertebra articulated with the preacetabular process of the ilium, a distinct feature. The centra of the caudal vertebrae were very consistent in length, but their diameter became smaller towards the back, and they went from elliptical to circular in cross-section. The scapulae (shoulder blades) were moderate in length and concave on their inner sides to follow the body's curvature. The scapulae were wide, particularly the upper part, which was rectangular (or squared off), a unique feature. The coracoids were elliptical, and not fused to the scapulae. 
The lower hind portions of the coracoids had a "horizontal buttress" next to the biceps tuber, unique for this genus. The arms were powerful, and had deep pits and stout processes for attachment of muscles and ligaments. The humerus (upper arm bone) was large and slender, and the ulna (lower arm bone) was stout and straight, with a stout olecranon. The hands had four fingers: the first was shorter but stronger than the following two, with a large claw; the next two fingers were longer and more slender, with smaller claws. The claws were curved and sharp. The third finger was reduced, and the fourth was vestigial (retained, but without function). The crest of the ilium was highest over the ilial peduncle (the downwards process of the ilium), and its outer side was concave. The foot of the pubic bone was only slightly expanded, whereas the lower end was much more expanded on the ischium, which also had a very thin shaft. The hind legs were large, with a slightly longer femur (thigh bone) than tibia (lower leg bone), the opposite of, for example, Coelophysis. The femur was massive; its shaft was sigmoid-shaped (curved like an 'S'), and its greater trochanter was centered on the shaft. The tibia had a developed tuberosity and was expanded at the lower end. The astragalus bone (ankle bone) was separated from the tibia and the calcaneum, and formed half of the socket for the fibula. It had long, stout feet with three well-developed toes that bore large claws, which were much less curved than those of the hand. The third toe was the stoutest, and the smaller first toe (the hallux) was kept off the ground. Classification Welles thought Dilophosaurus a megalosaur in 1954, but revised his opinion in 1970 after discovering that it had crests. By 1974, Welles and the paleontologist Robert A. Long found Dilophosaurus to be a ceratosauroid. 
In 1984 Welles found that Dilophosaurus exhibited features of both Coelurosauria and Carnosauria, the two main groups into which theropods had hitherto been divided, based on body size, and he suggested this division was inaccurate. He found Dilophosaurus to be closest to those theropods that were usually placed in the family Halticosauridae, particularly Liliensternus. In 1988, paleontologist Gregory S. Paul classified the halticosaurs as a subfamily of the family Coelophysidae, and suggested that Dilophosaurus could have been a direct descendant of Coelophysis. Paul also considered the possibility that spinosaurs were late-surviving dilophosaurs, based on similarity of the kinked snout, nostril position, and slender teeth of Baryonyx. In 1994, paleontologist Thomas R. Holtz placed Dilophosaurus in the group Coelophysoidea, along with but separate from the Coelophysidae. He placed the Coelophysoidea in the group Ceratosauria. In 2000, paleontologist James H. Madsen and Welles divided Ceratosauria into the families Ceratosauridae and Dilophosauridae, with Dilophosaurus as the sole member of the latter family. Lamanna and colleagues pointed out in 1998 that since Dilophosaurus was discovered to have had crests on its skull, other similarly crested theropods have been discovered (including Sinosaurus), and that this feature is, therefore, not unique to the genus, and of limited use for determining interrelationships within their group. Paleontologist Adam M. Yates described the genus Dracovenator from South Africa in 2005, and found it closely related to Dilophosaurus and Zupaysaurus. His cladistic analysis suggested they did not belong in the Coelophysoidea, but rather in the Neotheropoda, a more derived (or "advanced") group. 
He proposed that if Dilophosaurus was more derived than the Coelophysoidea, the features it shared with this group may have been inherited from basal (or "primitive") theropods, indicating that theropods may have passed through a "coelophysoid stage" in their early evolution. In 2007, paleontologist Nathan D. Smith and colleagues found the crested theropod Cryolophosaurus to be the sister species of Dilophosaurus, and grouped them with Dracovenator and Sinosaurus. This clade was more derived than the Coelophysoidea, but more basal than the Ceratosauria, thereby placing basal theropods in a ladder-like arrangement. In 2012, Carrano and colleagues found that the group of crested theropods proposed by Smith and colleagues was based on features that relate to the presence of such crests, but that the features of the rest of the skeleton were less consistent. They instead found that Dilophosaurus was a coelophysoid, with Cryolophosaurus and Sinosaurus being more derived, as basal members of the group Tetanurae. Paleontologist Christophe Hendrickx and colleagues defined the Dilophosauridae to include Dilophosaurus and Dracovenator in 2015, and noted that while general uncertainty exists about the placement of this group, it appears to be slightly more derived than the Coelophysoidea, and the sister group to the Averostra. The Dilophosauridae share features with the Coelophysoidea such as the subnarial gap and the front teeth of the maxilla pointing forwards, while features shared with Averostra include a fenestra at the front of the maxilla and a reduced number of teeth in the maxilla. They suggested that the cranial crests of Cryolophosaurus and Sinosaurus had either evolved convergently, or were a feature inherited from a common ancestor. 
Hendrickx and colleagues summarized these relationships in a cladogram, itself based on earlier studies. In 2019, paleontologists Marion Zahner and Winand Brinkmann found the members of the Dilophosauridae to be successive basal sister taxa of the Averostra rather than a monophyletic clade (a natural group), but noted that some of their analyses did find the group valid, containing Dilophosaurus, Dracovenator, Cryolophosaurus, and possibly Notatesseraeraptor as the basal-most member. They therefore provided a diagnosis for the Dilophosauridae, based on features in the lower jaw. In the phylogenetic analysis accompanying their 2020 redescription, Marsh and Rowe found all specimens of Dilophosaurus to form a monophyletic group, sister to Averostra, and more derived than Cryolophosaurus. Their analysis did not find support for Dilophosauridae, and they suggested cranial crests were a plesiomorphic (ancestral) trait of Ceratosauria and Tetanurae. Ichnology Various ichnotaxa (taxa based on trace fossils) have been attributed to Dilophosaurus or similar theropods. In 1971, Welles reported dinosaur footprints from the Kayenta Formation of northern Arizona, on two levels and below where the original Dilophosaurus specimens were found. The lower footprints were tridactyl (three-toed), and could have been made by Dilophosaurus; Welles created the new ichnogenus and species Dilophosauripus williamsi based on them, in honor of Williams, the discoverer of the first Dilophosaurus skeletons. The type specimen is a cast of a large footprint catalogued as UCMP 79690-4, with casts of three other prints included in the hypodigm. In 1984, Welles conceded that no way had been found to prove or disprove that the footprints belonged to Dilophosaurus. In 1996, the paleontologists Michael Morales and Scott Bulkey reported a trackway of the ichnogenus Eubrontes from the Kayenta Formation made by a very large theropod. 
They noted it could have been made by a very large Dilophosaurus individual, but found that unlikely, as they estimated the trackmaker's hip height would have considerably exceeded that of Dilophosaurus. The paleontologist Gerard Gierliński examined tridactyl footprints from the Holy Cross Mountains in Poland and concluded in 1991 that they belonged to a theropod like Dilophosaurus. He named the new ichnospecies Grallator (Eubrontes) soltykovensis based on them, with a cast of footprint MGIW 1560.11.12 as the holotype. In 1994 Gierliński also assigned footprints from the Höganäs Formation in Sweden discovered in 1974 to G. (E.) soltykovensis. In 1996, Gierliński attributed track AC 1/7 from the Turners Falls Formation of Massachusetts, a resting trace he believed to show feather impressions, to a theropod similar to Dilophosaurus and Liliensternus, and assigned it to the ichnotaxon Grallator minisculus. The paleontologist Martin Kundrát agreed that the track showed feather impressions in 2004, but this interpretation was disputed by the paleontologist Martin Lockley and colleagues in 2003 and the paleontologist Anthony J. Martin and colleagues in 2004, who considered them sedimentological artifacts. Martin and colleagues also reassigned the track to the ichnotaxon Fulicopus lyellii. Gierliński and Karol Sabath responded at a conference talk in 2005, pointing out that an algal mat imprint would not only have been present under the stomach, but also in the footprints. Based on detailed photos and experiments, they found the traces similar to those left by the fibrous feathers (semiplumes) of modern birds, and different from those left by a scaly body. The paleontologist Robert E. Weems proposed in 2003 that Eubrontes tracks were not produced by a theropod, but by a sauropodomorph similar to Plateosaurus, excluding Dilophosaurus as a possible trackmaker. 
Instead, Weems proposed Kayentapus hopii, another ichnotaxon named by Welles in 1971, as the best match for Dilophosaurus. The attribution to Dilophosaurus was primarily based on the wide angle between digit impressions three and four shown by these tracks, and the observation that the foot of the holotype specimen shows a similarly splayed-out fourth digit. Also in 2003, paleontologist Emma Rainforth argued that the splay in the holotype foot was merely the result of distortion, and that Eubrontes would indeed be a good match for Dilophosaurus. The paleontologist Spencer G. Lucas and colleagues stated in 2006 that virtually universal agreement existed that Eubrontes tracks were made by a theropod like Dilophosaurus, and that they and other researchers dismissed Weems' claims. In 2006, Weems defended his 2003 assessment of Eubrontes, and proposed an animal like Dilophosaurus as the possible trackmaker of numerous Kayentapus trackways of the Culpeper Quarry in Virginia. Weems suggested rounded impressions associated with some of these trackways to represent hand impressions lacking digit traces, which he interpreted as a trace of quadrupedal movement. Milner and colleagues used the new combination Kayentapus soltykovensis in 2009, and suggested that Dilophosauripus may not be distinct from Eubrontes and Kayentapus. They suggested that the long claw marks that were used to distinguish Dilophosauripus may be an artifact of dragging. They found that Gigandipus and Anchisauripus tracks may likewise also just represent variations of Eubrontes. They pointed out that differences between ichnotaxa may reflect how the trackmaker interacted with the substrate rather than taxonomy. They also found Dilophosaurus to be a suitable match for a Eubrontes trackway and resting trace (SGDS 18.T1) from the St. George dinosaur discovery site in the Moenave Formation of Utah, though the dinosaur itself is not known from the formation, which is slightly older than the Kayenta Formation. 
Weems stated in 2019 that Eubrontes tracks do not reflect the gracile feet of Dilophosaurus, and argued they were instead made by the bipedal sauropodomorph Anchisaurus. In a 2024 review of Jurassic tracks, the paleontologist John R. Foster and colleagues stated that few other ichnologists had accepted Weems' sauropodomorph interpretation of Eubrontes, partially because such tracks are abundant in places where no sauropodomorph fossils have been found. Paleobiology Feeding and diet Welles found that Dilophosaurus did not have a powerful bite, due to weakness caused by the subnarial gap. He thought that it used its front premaxillary teeth for plucking and tearing rather than biting, and the maxillary teeth further back for piercing and slicing. He thought that it was probably a scavenger rather than a predator, and that if it did kill large animals, it would have done so with its hands and feet rather than its jaws. Welles did not find evidence of cranial kinesis in the skull of Dilophosaurus, a feature that allows individual bones of the skull to move in relation to each other. In 1986, the paleontologist Robert T. Bakker instead found Dilophosaurus, with its massive neck and skull and large upper teeth, to have been adapted for killing large prey, and strong enough to attack any Early Jurassic herbivores. In 1988, Paul dismissed the idea that Dilophosaurus was a scavenger, and claimed that strictly scavenging terrestrial animals are a myth. He stated that the snout of Dilophosaurus was better braced than had been thought previously, and that the very large, slender maxillary teeth were more lethal than the claws. Paul suggested that it hunted large animals such as prosauropods, and that it was more capable of snapping up small animals than other theropods of a similar size. A 2005 beam-theory study by the paleontologist François Therrien and colleagues found that the bite force in the mandible of Dilophosaurus decreased rapidly towards the back of the tooth row. 
This indicates that the front of the mandible, with its upturned chin, "rosette" of teeth, and strengthened symphyseal region (similar to spinosaurids), was used to capture and manipulate prey, probably of relatively smaller size. The properties of its mandibular symphysis were similar to those of felids and crocodilians that use the front of their jaws to deliver a powerful bite when subduing prey. The loads exerted on the mandibles were consistent with the struggling of small prey, which may have been wounded with slashing bites and then captured with the front of the jaws once too weakened to resist. The prey may then have been moved further back into the jaws, where the largest teeth were located, and killed by slicing bites (similar to some crocodilians) with the sideways-compressed teeth. The authors suggested that if Dilophosaurus indeed fed on small prey, possible hunting packs would have been of limited size. Milner and paleontologist James I. Kirkland suggested in 2007 that Dilophosaurus had features that indicate it may have eaten fish. They pointed out that the ends of the jaws were expanded to the sides, forming a "rosette" of interlocking teeth, similar to those of spinosaurids, which are known to have eaten fish, and of gharials, the modern crocodilians that eat the most fish. The nasal openings were also retracted back on the jaws, similar to spinosaurids, which have even more retracted nasal openings, and this may have limited water splashing into the nostrils during fishing. Both groups also had long arms with well-developed claws, which could help when catching fish. Lake Dixie, a large lake that extended from Utah to Arizona and Nevada, would have provided abundant fish in the "post-cataclysmic", biologically more impoverished world that followed the Triassic–Jurassic extinction event (wherein about three quarters of life on Earth vanished), 5 to 15 million years before Dilophosaurus appeared. 
In 2018, Marsh and Rowe reported that the holotype specimen of the sauropodomorph Sarahsaurus bore possible tooth marks scattered across the skeleton that may have been left by Dilophosaurus (Syntarsus was too small to have produced them) scavenging the specimen after it died (the positions of the bones may also have been disturbed by scavenging). Examples of such marks include an oval depression on the upper surface of the left scapula and a large hole on the lower front end of the right tibia. The quarry where the holotype and paratype specimens of Sarahsaurus were excavated also contained a partial immature Dilophosaurus specimen. Marsh and Rowe suggested in 2020 that many of the features that distinguished Dilophosaurus from earlier theropods were associated with increased body size and macropredation (preying on large animals). While Marsh and Rowe agreed that Dilophosaurus could have fed on fish and small prey in the fluvial system in its environment, they pointed out that the articulation between the premaxilla and maxilla of the upper jaw was immobile and much more robust than previously thought, and that large-bodied prey could have been grasped and manipulated with the forelimbs during predation and scavenging. They considered the large bite marks on Sarahsaurus specimens alongside shed teeth and the presence of a Dilophosaurus specimen within the same quarry as support for this idea. In a 2021 article, paleontologist Matthew A. Brown and Rowe stated that these remains showed that Dilophosaurus had jaws strong enough to puncture bone. The fleshy air sacs from its respiratory system that grew into the vertebrae both strengthened and lightened the skeleton, and allowed unidirectional airflow through its lungs, similar to birds and crocodiles, thereby providing more oxygen than the bidirectional respiratory system of mammals (wherein the air flows in and out of the lungs). 
Unidirectional breathing indicates relatively high metabolic rates and therefore high levels of activity, indicating that Dilophosaurus was likely a fast, agile hunter. Brown and Rowe considered Dilophosaurus to have been an apex predator in its ecosystem, and not a scavenger. Motion Welles envisioned Dilophosaurus as an active, clearly bipedal animal, similar to an enlarged ostrich. He found the forelimbs to have been powerful weapons, strong and flexible, and not used for locomotion. He noted that the hands were capable of grasping and slashing, of meeting each other, and reaching two-thirds up the neck. He proposed that in a sitting posture, the animal would rest on the large "foot" of its ischium, as well as its tail and feet. In 1990, paleontologists Stephen and Sylvia Czerkas suggested that the weak pelvis of Dilophosaurus could have been an adaptation for an aquatic lifestyle, where the water would help support its weight, and that it could have been an efficient swimmer. They found it doubtful that it would have been restricted to a watery environment, though, due to the strength and proportions of its hind limbs, which would have made it fleet-footed and agile during bipedal locomotion. Paul depicted Dilophosaurus bouncing on its tail while lashing out at an enemy, similar to a kangaroo. In 2005, paleontologists Phil Senter and James H. Robins examined the range of motion in the forelimbs of Dilophosaurus and other theropods. They found that Dilophosaurus would have been able to draw its humerus backward until it was almost parallel with the scapula, but could not move it forwards to a more than vertical orientation. The elbow could approach full extension and flexion at a right angle, but not achieve it completely. The fingers do not appear to have been voluntarily hyperextensible (able to extend backwards, beyond their normal range), but they may have been passively hyperextensible, to resist dislocation during violent movements by captured prey. 
A 2015 article by Senter and Robins gave recommendations for how to reconstruct the fore limb posture in bipedal dinosaurs, based on examination of various taxa, including Dilophosaurus. The scapulae were held very horizontally, the resting orientation of the elbow would have been close to a right angle, and the orientation of the hand would not have deviated much from that of the lower arm. In 2018, Senter and Corwin Sullivan examined the range of motion in the fore limb joints of Dilophosaurus by manipulating the bones, to test hypothesized functions of the fore limbs. They also took into account that experiments with alligator carcasses show that the range of motion is greater in elbows covered in soft tissue (such as cartilage, ligaments, and muscles) than what would be indicated by manipulation of bare bones. They found that the humerus of Dilophosaurus could be retracted into a position that was almost parallel with the scapula, protracted to an almost vertical level, and elevated 65°. The elbow could not be flexed past a right angle to the humerus. Pronation and supination of the wrists (crossing the radius and ulna bones of the lower arm to turn the hand) was prevented by the radius and ulna joints not being able to roll, and the palms, therefore, faced medially, towards each other. The inability to pronate the wrists was an ancestral feature shared by theropods and other dinosaur groups. The wrist had limited mobility, and the fingers diverged during flexion, and were very hyperextensible. 
Senter and Sullivan concluded that Dilophosaurus was able to grip and hold objects between two hands, to grip and hold small objects in one hand, to seize objects close beneath the chest, to bring an object to the mouth, to perform a display by swinging the arms in an arc along the sides of the ribcage, to scratch the chest, belly, or the half of the other forelimb farthest from the body, to seize prey beneath the chest or the base of the neck, and to clutch objects to the chest. Dilophosaurus was unable to perform scratch-digging or hook-pulling, to hold objects between two fingertips of one hand, to maintain balance by extending the arms outwards to the sides, or to probe small crevices like the modern aye-aye does. The hyperextensibility of the fingers may have prevented the prey's violent struggle from dislocating them, since it would have allowed greater motion of the fingers (with no importance to locomotion). The limited mobility of the shoulder and shortness of the forelimbs indicate that the mouth made first contact with the prey rather than the hands. Capture of prey with the forelimbs would only be possible for seizing animals small enough to fit beneath the chest of Dilophosaurus, or larger prey that had been forced down with its mouth. The great length of the head and neck would have enabled the snout to extend much further than the hands. The Dilophosauripus footprints reported by Welles in 1971 were all on the same level, and were described as a "chicken yard hodge-podge" of footprints, with few forming a trackway. The footprints had been imprinted in mud, which allowed the feet to sink down. The prints were sloppy, and the varying breadth of the toe prints indicates that mud had clung to the feet. The impressions varied according to differences in the substrate and the manner in which they were made; sometimes, the foot was planted directly, but often a backward or forward slip occurred as the foot came down. 
The positions and angles of the toes also varied considerably, which indicate they must have been quite flexible. The Dilophosauripus footprints had an offset second toe with a thick base, and very long, straight claws that were in line with the axes of the toe pads. One of the footprints was missing the claw of the second toe, perhaps due to injury. In 1984, Welles interpreted the fact that three individuals were found close together, and the presence of criss-crossed trackways nearby, as indications that Dilophosaurus traveled in groups. Gay agreed that they may have traveled in small groups, but noted that no direct evidence supported this, and that flash floods could have picked up scattered bones from different individuals and deposited them together. Milner and colleagues examined the possible Dilophosaurus trackway SGDS 18.T1 in 2009, which consists of typical footprints with tail drags and a more unusual resting trace, deposited in lacustrine beach sandstone. The trackway began with the animal first oriented approximately in parallel with the shoreline, and then stopping by a berm with both feet in parallel, whereafter it lowered its body, and brought its metatarsals and the callosity around its ischium to the ground; this created impressions of symmetrical "heels" and circular impressions of the ischium. The part of the tail closest to the body was kept off the ground, whereas the end further away from the body made contact with the ground. The fact that the animal rested on a slope is what enabled it to bring both hands to the ground close to the feet. After resting, the dinosaur shuffled forwards, and left new impressions with its feet, metatarsals, and ischium, but not the hands. The right foot now stepped on the print of the right hand, and the second claw of the left foot made a drag mark from the first resting position to the next. 
After some time, the animal stood up and moved forwards, with the left foot first, and once fully erect, it walked across the rest of the exposed surface, while leaving thin drag marks with the end of the tail. Crouching is a rarely captured behavior of theropods, and SGDS 18.T1 is the only such track with unambiguous impressions of theropod hands, which provides valuable information about how they used their forelimbs. The crouching posture was found to be very similar to that of modern birds, and shows that early theropods held the palms of their hands facing medially, towards each other. As such a posture therefore evolved early in the lineage, it may have characterized all theropods. Theropods are often depicted with their palms facing downwards, but studies of their functional anatomy have shown that they, like birds, were unable to pronate or supinate their arms. The track showed that the legs were held symmetrically with the body weight distributed between the feet and the metatarsals, which is also a feature seen in birds such as ratites. Milner and colleagues also dismissed the idea that the Kayentapus minor track reported by Weems showed a palm imprint made by a quadrupedally walking theropod. Weems had proposed the trackmaker would have been able to move quadrupedally when walking slowly, while the digits would have been habitually hyperextended so only the palms touched the ground. Milner and colleagues found the inferred pose unnecessary, and suggested the track was instead made in a similar way as SGDS 18.T1, but without leaving traces of the digits. Crest function Welles conceded that suggestions as to the function of the crests of Dilophosaurus were conjectural, but thought that, though the crests had no grooves to indicate vascularization, they could have been used for thermoregulation. He also suggested they could have been used for species recognition or ornamentation. 
Bakker considered the crests sexual adornments in 1986, noting they were so thin that they could only have been for visual effect, unlike the heavier crests of allosaurs, which could have been used for head-butting. The Czerkases pointed out in 1990 that the crests could not have been used during battle, as their delicate structure would have been easily damaged. They suggested that the crests served as a visual display for attracting a mate, and perhaps also for thermoregulation. In 1990, paleontologist Walter P. Coombs stated that the crests may have been enhanced by colors for use in display. In 2011 the paleontologists Kevin Padian and John R. Horner proposed that "bizarre structures" in dinosaurs in general (including crests, frills, horns, and domes) were primarily used for species recognition, and dismissed other explanations as unsupported by evidence. They noted that too few specimens of cranially ornamented theropods, including Dilophosaurus, were known to test their evolutionary function statistically, and whether they represented sexual dimorphism or sexual maturity. In a response to Padian and Horner the same year, the paleontologists Rob J. Knell and Scott D. Sampson argued that species recognition was not unlikely as a secondary function for "bizarre structures" in dinosaurs, but that sexual selection (used in display or combat to compete for mates) was a more likely explanation, due to the high cost of developing them, and because such structures appear to be highly variable within species. In 2013, paleontologists David E. Hone and Darren Naish criticized the "species recognition hypothesis", and argued that no extant animals use such structures primarily for species recognition, and that Padian and Horner had ignored the possibility of mutual sexual selection (where both sexes are ornamented). Marsh and Rowe agreed in 2020 that the crests of Dilophosaurus likely had a role in species identification or intersexual/intrasexual selection, as in some modern birds. 
It is unknown if the air sacs in the crests supported such functions. Development Welles originally interpreted the smaller Dilophosaurus specimens as juveniles, and the larger specimen as an adult, later interpreting them as different species. Paul suggested that the differences between the specimens were perhaps due to sexual dimorphism, as was seemingly also apparent in Coelophysis, which had "robust" and "gracile" forms of the same size, that might otherwise have been regarded as separate species. Following this scheme, the smaller Dilophosaurus specimen would represent a "gracile" example. In 2005 Tykoski found that most Dilophosaurus specimens known were juvenile individuals, with only the largest an adult, based on the level of co-ossification of the bones. In 2005 Gay found no evidence of the sexual dimorphism suggested by Paul (but supposedly present in Coelophysis), and attributed the variation seen between Dilophosaurus specimens to individual variation and ontogeny (changes during growth). There was no dimorphism in the skeletons, but he did not rule out that there could have been in the crests; more data was needed to determine this. Based on the tiny nasal crests on a juvenile specimen that Yates had tentatively assigned to the related genus Dracovenator, he suggested that these would have grown larger as the animal became adult. The paleontologist J.S. Tkach reported a histological study (microscopic study of internal features) of Dilophosaurus in 1996, conducted by taking thin-sections of long bones and ribs of specimen UCMP 37303 (the less well-preserved of the two original skeletons). The bone tissues were well vascularized and had a fibro-lamellar structure similar to that found in other theropods and the sauropodomorph Massospondylus. The plexiform (woven) structure of the bones suggested rapid growth, and Dilophosaurus may have attained a high growth rate early in life. 
Welles found that the replacement teeth of Dilophosaurus and other theropods originated deep inside the bone, decreasing in size the farther they were from the alveolar border. There were usually two or three replacement teeth in the alveoli, with the youngest being a small, hollow crown. The replacement teeth erupted on the outer side of the old teeth. When a tooth neared the gum line, the inner wall between the interdental plates was resorbed and formed a nutrient notch. As the new tooth erupted, it moved outwards to center itself in the alveolus, and the nutrient notch closed over. Paleopathology Welles noted various paleopathologies (ancient signs of disease, such as injuries and malformations) in Dilophosaurus. The holotype had a sulcus (groove or furrow) on the neural arch of a cervical vertebra that may have been due to an injury or crushing, and two pits on the right humerus that may have been abscesses (collections of pus) or artifacts. Welles also noted that it had a smaller and more delicate left humerus than the right, but with the reverse condition in its forearms. In 2001, paleontologist Ralph Molnar suggested that this was caused by a developmental anomaly called fluctuating asymmetry. This anomaly can be caused by stress in animal populations, for example due to disturbances in their environment, and may indicate more intense selective pressure. Asymmetry can also result from traumatic events in early development of an animal, which would be more randomly distributed in time. A 2001 study conducted by paleontologist Bruce Rothschild and colleagues examined 60 Dilophosaurus foot bones for signs of stress fractures (which are caused by strenuous, repetitive actions), but none were found. Such injuries can be the result of very active, predatory lifestyles. In 2016 Senter and Sara L. 
Juengst examined the paleopathologies of the holotype specimen and found that it bore the greatest and most varied number of such maladies on the pectoral girdle and forelimb of any theropod dinosaur so far described, some of which are not known from any other dinosaur. Only six other theropods are known with more than one paleopathology on the pectoral girdle and forelimbs. The holotype specimen had eight afflicted bones, whereas no other theropod specimen is known with more than four. On its left side, it had a fractured scapula and radius, and fibriscesses (like abscesses) in the ulna and the outer phalanx bone of the thumb. On the right side it had torsion of its humeral shaft, three bony tumors on its radius, a truncated articular surface of its third metacarpal bone, and deformities on the first phalanx bone of the third finger. This finger was permanently deformed and unable to flex. The deformities of the humerus and the third finger may have been due to osteodysplasia, which had not been reported from non-avian dinosaurs before, but is known in birds. Affecting juvenile birds that have experienced malnutrition, this disease can cause pain in one limb, which makes the birds prefer to use the other limb instead, which thereby develops torsion. The number of traumatic events that led to these features is not certain, and it is possible that they were all caused by a single encounter, for example by crashing into a tree or rock during a fight with another animal, which may have caused puncture wounds with its claws. Since all the injuries had healed, it is certain that the Dilophosaurus survived for a long time after these events, for months, perhaps years. The use of the forelimbs for prey capture must have been compromised during the healing process. The dinosaur may therefore have endured a long period of fasting or subsisted on prey that was small enough for it to dispatch with the mouth and feet, or with one forelimb. 
According to Senter and Juengst, the high degree of pain the dinosaur might have experienced in multiple locations for long durations also shows that it was a hardy animal. They noted that paleopathologies in dinosaurs are underreported, and that even though Welles had thoroughly described the holotype, he had mentioned only one of the pathologies found by them. They suggested that such features may sometimes be omitted because descriptions of species are concerned with their characteristics rather than abnormalities, or because such features are difficult to recognize. Senter and Sullivan found that the pathologies significantly altered the range of motion in the right shoulder and right third finger of the holotype, and that estimates for range of motion may therefore not match those made for a healthy forelimb. Paleoenvironment Dilophosaurus is known from the Kayenta Formation, which dates to the Sinemurian-Toarcian stages of the Early Jurassic, approximately 196–186 million years ago (187–190 mya has also been suggested, and the age of the Kayenta is considered complex). As Dilophosaurus is known from the base to the middle of the formation, which is Pliensbachian in age, the taxon had a chronostratigraphic range of 15 million years. The Kayenta Formation is part of the Glen Canyon Group that includes formations in northern Arizona, parts of southeastern Utah, western Colorado, and northwestern New Mexico. It is composed mostly of two facies, one dominated by siltstone deposition and the other by sandstone. The siltstone facies is found in much of Arizona, while the sandstone facies is present in areas of northern Arizona, southern Utah, western Colorado, and northwestern New Mexico. The formation was primarily deposited by rivers, with the siltstone facies as the slower, more sluggish part of the river system. Kayenta Formation deposition was ended by the encroaching dune field that would become the Navajo Sandstone. 
The environment was seasonally dry, with sand dunes migrating in and out of the wet environments where animals lived, and has been likened to a river oasis: a waterway lined with conifers and surrounded by sand. The Kayenta Formation has yielded a small but growing assemblage of organisms. Most fossils are from the siltstone facies. Most organisms known so far are vertebrates. Non-vertebrates include microbial or "algal" limestone, petrified wood, plant impressions, freshwater bivalves and snails, ostracods, and invertebrate trace fossils. Vertebrates are known from both body fossils and trace fossils. Vertebrates known from body fossils include hybodont sharks, indeterminate bony fish, lungfish, salamanders, the frog Prosalirus, the caecilian Eocaecilia, the turtle Kayentachelys, a sphenodontian reptile, lizards, and several early crocodylomorphs including Calsoyasuchus, Eopneumatosuchus, Kayentasuchus, and Protosuchus, and the pterosaur Rhamphinion. Apart from Dilophosaurus, several dinosaurs are known, including the theropods Megapnosaurus and Kayentavenator, the sauropodomorph Sarahsaurus, a heterodontosaurid, and the thyreophoran Scutellosaurus. Synapsids include the tritylodontids Dinnebitodon, Kayentatherium, and Oligokyphus, morganucodontids, the possible early true mammal Dinnetherium, and a haramiyid mammal. The majority of these finds come from the vicinity of Gold Spring, Arizona. Vertebrate trace fossils include coprolites and the tracks of therapsids, lizard-like animals, and several types of dinosaur. Taphonomy Welles outlined the taphonomy of the original specimens, the changes that happened during their decay and fossilization. The holotype skeleton was found lying on its right side, and its head and neck were recurved (curved backwards) in the "death pose" in which dinosaur skeletons are often found. This pose was thought to be opisthotonus (due to death-spasms) at the time, but may instead have been the result of how a carcass was embedded in sediments. 
The back was straight, and the hindmost dorsal vertebrae were turned on their left sides. The caudal vertebrae extended irregularly from the pelvis, and the legs were articulated, with little displacement. Welles concluded that the specimens were buried at the place of their deaths, without having been transported much, but that the holotype specimen appears to have been disturbed by scavengers, indicated by the rotated dorsal vertebrae and crushed skull. Gay noted that the specimens he described in 2001 showed evidence of having been transported by a stream. As none of the specimens were complete, they may have been transported over some distance, or have lain on the surface and weathered for some time before transport. They may have been transported by a flood, as indicated by the variety of animals found as fragments and bone breakage. Cultural significance According to Navajo myth, the carcasses of slain monsters were "beaten into the earth", but were impossible to obliterate, and fossils have traditionally been interpreted as their remains. While Navajo people have helped paleontologists locate fossils since the 19th century, traditional beliefs suggest that the ghosts of the monsters remain in their partially buried corpses, and have to be kept there through potent rituals. Likewise, some worry that the bones of their relatives would be dug up along with dinosaur remains, and that removing fossils shows disrespect to the past lives of these beings. In 2005, the historian Adrienne Mayor stated Welles had noted that during the original excavation of Dilophosaurus, the Navajo Williams disappeared from the excavation after some days, and speculated this was because Williams found the detailed work with fine brushes "beneath his dignity". 
Mayor instead pointed out that Navajo men do occupy themselves with detailed work, such as jewellery and painting, and that the explanation for Williams' departure may instead have been traditional anxiety as the skeletons emerged and were disturbed. Mayor also pointed to an incident in the 1940s when a Navajo man helped excavate a Pentaceratops skeleton as long as he did not have to touch the bones, but left the site when only a few inches of dirt were left covering them. In a 1994 book, Welles said Williams had come back some days later with two Navajo women, saying "that's no man's work, that's squaw's work". The cliffs in Arizona that contained the bones of Dilophosaurus also bear petroglyphs carved onto them by ancestral Puebloans, and the criss-crossing tracks of the area are called Naasho'illbahitsho Biikee by the Navajo, meaning "big lizard tracks". According to Mayor, Navajos used to hold ceremonies and make offerings at these monster tracks. Tridactyl tracks were also featured as decorations on the costumes and rock art of the Hopi and Zuni, probably influenced by such dinosaur tracks. In 2017, Dilophosaurus was designated the state dinosaur of the US state of Connecticut, to become official with the new state budget in 2019. Dilophosaurus was chosen because tracks thought to have been made by similar dinosaurs were discovered in Rocky Hill in 1966, during excavation for Interstate Highway 91. The six tracks were assigned to the ichnospecies Eubrontes giganteus, which was made the state fossil of Connecticut in 1991. The area they were found in had been a Jurassic lake, and when the significance of the site was confirmed, the highway was rerouted and the area was made a state park, named Dinosaur State Park. In 1981, a sculpture of Dilophosaurus, the first life-sized reconstruction of this dinosaur, was donated to the park.
Dilophosaurus was proposed as the state dinosaur of Arizona by a nine-year-old boy in 1998, but lawmakers suggested Sonorasaurus instead, arguing that Dilophosaurus was not unique to Arizona. A compromise was suggested that would recognize both dinosaurs, but the bill died when it was revealed that the Dilophosaurus fossils had been taken from the Navajo Reservation without permission, and that they no longer resided in Arizona (an 11-year-old boy again suggested Sonorasaurus as Arizona's state dinosaur in 2018). Navajo Nation officials subsequently discussed how to get the fossils returned. According to Mayor, one Navajo stated that they no longer ask to get the fossils back, but wondered why casts had not been made so the bones could be left in place, as it would be better to keep them in the ground and build a museum so people could come to see them there. Further field work related to Dilophosaurus in the Navajo Nation was conducted with permission from the Navajo Nation Minerals Department.

Jurassic Park

Dilophosaurus was featured in the 1990 novel Jurassic Park, by the writer Michael Crichton, and in its 1993 movie adaptation, directed by Steven Spielberg. The Dilophosaurus of Jurassic Park was acknowledged as the "only serious departure from scientific veracity" in the movie's making-of book, and as the "most fictionalized" of the movie's dinosaurs in a book about Stan Winston Studios, which created the animatronic effects. For the novel, Crichton invented the dinosaur's ability to spit venom (explaining how it was able to kill prey in spite of its seemingly weak jaws). The art department added another feature: a neck frill, or cowl, folded against its neck that expanded and vibrated as the animal prepared to attack, similar to that of the frill-necked lizard. To avoid confusion with the Velociraptor as featured in the movie, Dilophosaurus was presented as considerably smaller than its assumed true size.
Nicknamed "the spitter", the Dilophosaurus of the movie was realized through puppeteering, and required a full body with three interchangeable heads to produce the actions required by the script. Separate legs were also constructed for a shot where the dinosaur hops past. Unlike most of the other dinosaurs in the movie, no computer-generated imagery was used to depict the Dilophosaurus. The geologist J. Bret Bennington noted in 1996 that though Dilophosaurus probably did not have a frill and could not spit venom as in the movie, its bite could have been venomous, as has been claimed for the Komodo dragon. He found that adding venom to the dinosaur was no less allowable than giving a color to its skin, which is also unknown. If the dinosaur had had a frill, there would have been evidence for it in the bones, in the form of a rigid structure to hold up the frill or markings at the places where the muscles used to move it were attached. He also added that if it did have a frill, it would not have used it to intimidate its meal, but rather a competitor (he speculated it may have responded to a character in the movie pulling a hood over his head). In a 1997 review of a book about the science of Jurassic Park, the paleontologist Peter Dodson likewise pointed out the wrong scale of the film's Dilophosaurus, as well as the improbability of its venom and frill. Bakker pointed out in 2014 that the movie's Dilophosaurus lacked the prominent notch in the upper jaw, and concluded that the movie-makers had done a good job of creating a frightening chimaera of different animals, but warned it could not be used to teach about the real animal. Brown and Marsh stated that while these traits were fictitious, they were made believable by being based on the biology of real animals.
Welles himself was "thrilled" to see Dilophosaurus in Jurassic Park: he noted the inaccuracies, but found them minor points, enjoyed the movie, and was happy to find the dinosaur "an internationally known actor".
https://en.wikipedia.org/wiki/Oven
Oven
An oven is a tool used to expose materials to a hot environment. Ovens contain a hollow chamber and provide a means of heating the chamber in a controlled way. In use since antiquity, they have been employed to accomplish a wide variety of tasks requiring controlled heating. Because they serve many purposes, there are many different types of ovens, which differ in their intended use and in how they generate heat. Ovens are often used for cooking, usually baking and sometimes broiling; they can be used to heat food to a desired temperature. Ovens are also used in the manufacturing of ceramics and pottery; these ovens are sometimes referred to as kilns. Metallurgical furnaces are ovens used in the manufacturing of metals, while glass furnaces are ovens used to produce glass. Different types of ovens produce heat in many ways. Some heat materials using the combustion of a fuel, such as wood, coal, or natural gas, while many employ electricity. Microwave ovens heat materials by exposing them to microwave radiation, while electric ovens and electric furnaces heat materials using resistive heating. Some ovens use forced convection, the movement of gases inside the heating chamber, to enhance the heating process or, in some cases, to change the properties of the material being heated, as in the Bessemer process of steel production.

History

The earliest ovens were found in Central Europe and date back to 29,000 BC. They were roasting and boiling pits inside yurts used to cook mammoth. In Ukraine, from 20,000 BC, pits with hot coals covered in ashes were used: food was wrapped in leaves, set on top, and then covered with earth. In camps found in Mezhirich, each mammoth-bone house had a hearth used for heating and cooking. Ovens were used by cultures who lived in the Indus Valley and in pre-dynastic Egypt. By 3200 BC, each mud-brick house had an oven in settlements across the Indus Valley.
Ovens were used to cook food and to make bricks. Pre-dynastic civilizations in Egypt used kilns around 5000–4000 BC to make pottery. Tandır ovens, used to bake unleavened flatbread, were common in Anatolia during the Seljuk and Ottoman eras and have been found at archaeological sites distributed across the Middle East. The word tandır comes from the Akkadian tinuru, which becomes tanur in Hebrew and Arabic, and tandır in Turkish. Of the hundreds of bread varieties known from cuneiform sources, unleavened tinuru bread was made by adhering dough to the side walls of a heated cylindrical oven. This type of bread is still central to rural food culture in this part of the world, reflected in the local folklore, in which a young man and woman sharing fresh tandır bread is a symbol of young love. However, the culture of traditional bread baking is changing with younger generations, especially those who reside in towns and prefer modern conveniences. During the Middle Ages, instead of earth and ceramic ovens, Europeans used fireplaces in conjunction with large cauldrons, similar to the Dutch oven. After the Middle Ages, ovens underwent many changes, progressing through wood, iron, coal, gas, and eventually electric designs, each with its own motivation and purpose. Wood-burning stoves were improved by the addition of fire chambers that allowed better containment and release of smoke. Another recognizable design was the cast-iron stove. These were first used around the early 1700s and went through several variations, including the Stewart Oberlin iron stove, which was smaller and had its own chimney. In the early part of the 19th century, the coal oven was developed; it was cylindrical in shape and made of heavy cast iron. The gas oven also saw its first use as early as the beginning of the 19th century. Gas stoves became very common household ovens once gas lines were available to most houses and neighborhoods.
James Sharp patented one of the first gas stoves in 1826. Other improvements to the gas stove included the AGA cooker, invented in 1922 by Gustaf Dalén. The first electric ovens were invented in the very late 19th century; however, like many electrical inventions destined for commercial use, mass ownership of electric ovens only became a reality once electricity could be supplied and used more efficiently. Over time, ovens have become more high-tech in terms of cooking strategy. The heating effect of microwaves was discovered by Percy Spencer in 1946, and with help from engineers, the microwave oven was patented. The microwave oven uses microwave radiation to excite water molecules in food, causing them to vibrate and thus produce heat.

Types

Double oven
A built-in oven fixture that has either two ovens, or one oven and one microwave oven. It is usually built into the kitchen cabinet.

Earth oven
An earth oven is a pit dug into the ground and then heated, usually by rocks or smoldering debris. Historically, these have been used by many cultures for cooking. Cooking times are usually long, and the process is usually one of slow-roasting the food. Earth ovens are among the most common things archaeologists look for at an anthropological dig, as they are one of the key indicators of human civilization and static society.

Ceramic oven
The ceramic oven is an oven constructed of clay or another ceramic material and takes different forms depending on the culture. In India it is known as a tandoor and used for cooking. Ceramic ovens can be dated back as far as 3,000 BC and have been argued to have their origins in the Indus Valley. Brick ovens are another ceramic-type oven. The culture most notable for the use of brick ovens is Italy, with its intimate history with pizza. However, their history also dates further back, to Roman times, when the brick oven was used not only commercially but in households as well.
Gas oven
One of the first recorded uses of a gas stove and oven referenced a dinner party in 1802 hosted by Zachaus Winzler, where all the food was prepared either on a gas stove or in its oven compartment. In 1834, British inventor James Sharp began to commercially produce gas ovens after installing one in his own house. In 1851, the Bower's Registered Gas Stove was displayed at the Great Exhibition. This stove would set the standard and basis for the modern gas oven. Notable improvements to the gas stove since then include the addition of the thermostat, which assisted in temperature regulation, and an enamel coating applied to gas stoves and ovens to make cleaning easier.

Electric oven
These produce their heat electrically, often via resistive heating.

Toaster oven
Toaster ovens are small electric ovens with a front door, wire rack and removable baking pan. To toast bread with a toaster oven, slices of bread are placed horizontally on the rack. When the toast is done, the toaster turns off, but in most cases the door must be opened manually. Most toaster ovens are significantly larger than toasters, but are capable of performing most of the functions of electric ovens, albeit on a much smaller scale.

Masonry oven
Masonry ovens consist of a baking chamber made of fireproof brick, concrete, stone, or clay. Though traditionally wood-fired, coal-fired ovens were common in the 19th century. Modern masonry ovens are often fired with natural gas or even electricity, and are closely associated with artisanal bread and pizza. In the past, however, they were also used for any cooking task that required baking.

Microwave oven
An oven that cooks food using microwave radiation rather than infrared radiation (typically from a fire source). Percy Spencer allegedly discovered the heating properties of microwaves in 1946 while studying the magnetron. By 1947, the first commercial microwave oven was in use in Boston, Massachusetts.
Wall oven
Wall ovens make it easier to work with large roasting pans and Dutch ovens. Widths are typically 24, 27, or 30 inches. Mounted at waist or eye level, a wall oven eliminates bending; alternatively, it can be nested under a countertop to save space. A separate wall oven is expensive compared with a range.

Steam oven
An oven that cooks food using steam to provide heat. Some ovens can operate in multiple modes, sometimes at once; combination ovens may be able to combine microwaving with conventional heating, such as baking or grilling, simultaneously.

Uses

Cooking
Ovens are used as kitchen appliances for roasting and heating. Foods normally cooked in this manner include meat, casseroles and baked goods such as bread, cake and other desserts. In modern times, the oven is used to cook and heat food in many households around the globe. Modern ovens are typically fueled by either natural gas or electricity, with bottled-gas models available but not common. When an oven is contained in a complete stove, the fuel used for the oven may be the same as or different from the fuel used for the burners on top of the stove. Ovens usually can use a variety of methods to cook. The most common is to heat the oven from below, as is typical for baking and roasting. The oven may also be able to heat from the top to provide broiling (US) or grilling (UK/Commonwealth). A fan-assisted oven, which uses a small fan to circulate the air in the cooking chamber, can also be used; fan-assisted ovens are also known as convection ovens. An oven may also provide an integrated rotisserie. Ovens also vary in the way that they are controlled. The simplest ovens (for example, the AGA cooker) may not have any controls at all; the ovens simply run continuously at various temperatures. More conventional ovens have a simple thermostat which turns the oven on and off and selects the temperature at which it will operate. Set to the highest setting, this may also enable the broiler element.
A timer may allow the oven to be turned on and off automatically at pre-set times. More sophisticated ovens may have complex, computer-based controls allowing a wide variety of operating modes and special features, including the use of a temperature probe to automatically shut the oven off when the food is cooked to the desired degree. Toaster ovens are essentially small-scale ovens and can be used to cook foods other than toast. A frontal door is opened, horizontally oriented bread slices (or other food items) are placed on a rack that has heating elements above and below it, and the door is closed. The controls are set and actuated to toast the bread to the desired doneness, whereupon the heating elements are switched off. In most cases, the door must be opened manually, though there are also toaster ovens with doors that open automatically. Because the bread is horizontal, a toaster oven can be used to cook toast with toppings, like garlic bread, melt sandwiches, or toasted cheese. Toaster ovens are generally slower to make toast than pop-up toasters, taking 4–6 minutes as compared to 2–3 minutes. In addition to the automatic-toasting settings, toaster ovens typically have settings and temperature controls to allow use of the appliance as a small oven. Extra features on toaster ovens can include:

Heating element control options, such as a "top brown" setting that powers only the upper elements so food can be broiled without heat from below.
Multiple shelf racks – having options for positioning the oven shelf gives more control over the distance between food and the heating element.

Industrial, scientific, and artisanal

Outside the culinary world, ovens are used for a number of purposes:

A furnace can be used either to provide heat to a building or to melt substances such as glass or metal for further processing.
A blast furnace is a particular type of furnace generally associated with metal smelting (particularly steel manufacture), using refined coke or a similar hot-burning substance as a fuel, with air pumped in under pressure to increase the temperature of the fire.
A blacksmith uses a temporarily blown furnace, the smith's hearth, to heat iron to a glowing red-to-yellow temperature.
A kiln is a high-temperature oven used in wood drying, ceramics and cement manufacturing to convert mineral feedstock (in the form of clay or calcium or aluminum rocks) into a glassier, more solid form. In the case of ceramic kilns, a shaped clay object is the final result, while cement kilns produce a substance called clinker that is crushed to make the final cement product. (Certain types of drying ovens used in food manufacture, especially those used in malting, are also referred to as kilns.)
An autoclave is an oven-like device with features similar to a pressure cooker that allows the heating of aqueous solutions to higher temperatures than water's boiling point in order to sterilize the contents of the autoclave.
Industrial ovens are similar to their culinary equivalents and are used for a number of different applications that do not require the high temperatures of a kiln or furnace.
https://en.wikipedia.org/wiki/Linyphiidae
Linyphiidae
Linyphiidae, spiders commonly known as sheet weavers (from the shape of their webs) or money spiders (in the United Kingdom, Ireland, Australia, New Zealand, and Portugal), is a family of very small spiders comprising 4,706 described species in 620 genera worldwide. This makes Linyphiidae the second-largest family of spiders, after the Salticidae. The family is poorly understood due to the spiders' small body size and wide distribution; new genera and species are still being discovered throughout the world. The newest such genus is Himalafurca from Nepal, formally described in April 2021 by Tanasevitch. Since it is so difficult to identify such tiny spiders, there are regular changes in taxonomy as species are combined or divided. Money spiders are known for drifting through the air via a technique termed "ballooning". Within the agriculture industry, money spiders are regarded as biological control agents against pest species like aphids and springtails.

Description

In Linyphiidae, the clypeus is normally over twice as high as the diameter of the anterior median eyes. The chelicerae have lateral stridulating ridges and lack lateral condyles. The legs are long and thin, and bear macrosetae. The abdomen is usually oval or elongated.

Distribution

Spiders of this family occur nearly worldwide. In Norway, many species have been found walking on snow at temperatures as low as −7 °C. While these spiders are light enough to travel by ballooning, they are limited by the physics of an often turbulent atmosphere and microclimate. For this reason, ballooning spiders have little control over where they land, leading to a high mortality rate for the practice and its predominant use by spiderlings and juveniles. Travel by ballooning likely contributes to money spiders' vast distribution and speciation.

Predators and prey

Among birds, goldcrests are known to prey on money spiders. Money spiders are known to prey on aphids, springtails, flies, and other spiders.
Taxonomy

The Pimoidae are the sister group to the Linyphiidae. There are six subfamilies, of which Linyphiinae (the sheetweb spiders), Erigoninae (the dwarf spiders), and Micronetinae contain the majority of described species. Many species have been described in monotypic genera, especially in the Erigoninae, which probably reflects the scientific techniques traditionally used in this family. Common genera include Neriene, Lepthyphantes, Erigone, Eperigone, Bathyphantes, Troglohyphantes, Tennesseellum and many others. These are among the most abundant spiders in the temperate regions, although many are also found in the tropics. The generally larger-bodied members of the subfamily Linyphiinae are commonly found in classic "bowl and doily" webs or filmy domes. The usually tiny members of the Erigoninae are builders of tiny sheet webs. These tiny spiders (usually 3 mm or less) commonly balloon even as adults and may be very numerous in a given area on one day, only to disappear the next. Males in the subfamily Erigoninae typically have modified cephalothoraxes. These modifications are diagnostic for a given taxon, being genus- or species-specific. They come in an impressive array of forms including, but not limited to, grooves, tubercles, projections, bumps, lobes, and spines. Occasionally, the projections may be decorated with tufts of hair or even bear eyes. The following are select examples of species in which males possess rather remarkable modifications. Walckenaeria acuminata has its eyes placed on a tall, thin spire whose height exceeds the length of the cephalothorax. Grammonota gigas has a transverse row of four longitudinal lobes behind the eyes. Gnathonargus unicorn has a long, slender, upward-pointing clypeal projection resembling a unicorn horn. Hypselistes florens has a cephalic lobe shaped like an hourglass when viewed from the front. Perregrinus deformis has a short, downcurved clypeal projection resembling a human nose.
Praestigia kulczynskii has its anterior median eyes placed ventrally at the end of a long, thick projection issuing from the clypeus. The genera Coreorgonal and Spirembolus have their cephalic regions deeply divided into two pronounced lobes. Eskovia exarmata has a cephalothorax shaped like a trapezoid when viewed laterally. Horcotes quadricristatus has a single, sharp tooth sticking up between the anterior and posterior eyes. Similarly, the pedipalps of males range from simple to complex in their design, with some possessing striking features and arrangements of palpal sclerites that are unique for a given genus and/or species. A few spiders in this family include:

Bowl and doily spider, Frontinella pyramitela
Filmy dome spider, Neriene radiata
Blacktailed red sheetweaver, Florinda coccinea
Orsonwelles, a genus of giant Hawaiian linyphiids containing the largest linyphiid, O. malus
Erigone atra, a dwarf spider

Genera

The World Spider Catalog accepts the following genera: Abacoproeces Simon, 1884 — Austria, Russia Aberdaria Holm, 1962 — Kenya Abiskoa Saaristo & Tanasevitch, 2000 — Poland, Russia, China Acanoides Sun, Marusik & Tu, 2014 — China Acanthoneta Eskov & Marusik, 1992 — North America, Asia Acartauchenius Simon, 1884 — Asia, Africa, Europe Acorigone Wunderlich, 2008 — Azores Acroterius Irfan, Bashir & Peng, 2021 — China Adelonetria Millidge, 1991 — Chile Afribactrus Wunderlich, 1995 — South Africa Afromynoglenes Merrett & Russell-Smith, 1996 — Ethiopia Afroneta Holm, 1968 — Africa Agnyphantes Hull, 1932 — Canada, Russia, China Agyneta Hull, 1911 — South America, Asia, Africa, Europe, North America, Bermuda, Panama, Australia Agyphantes Saaristo & Marusik, 2004 — Russia Ainerigone Eskov, 1993 — Russia, Japan Algarveneta Wunderlich, 2021 — Portugal Alioranus Simon, 1926 — Asia, Greece Allomengea Strand, 1912 — Asia, Canada Allotiso Tanasevitch, 1990 — Turkey, Georgia Anacornia Chamberlin & Ivie, 1933 — United States Anguliphantes Saaristo & Tanasevitch,
1996 — Asia, Romania Anibontes Chamberlin, 1924 — United States Annapolis Millidge, 1984 — United States Anodoration Millidge, 1991 — Brazil, Argentina Anthrobia Tellkampf, 1844 — United States Antrohyphantes Dumitrescu, 1971 — Bulgaria Aperturina Tanasevitch, 2014 — Thailand, Malaysia Aphileta Hull, 1920 — Kazakhstan, United States, Russia Apobrata Miller, 2004 — Philippines Aprifrontalia Oi, 1960 — Asia Arachosinella Denis, 1958 — Asia Araeoncus Simon, 1884 — Europe, Asia, Africa, New Zealand Archaraeoncus Tanasevitch, 1987 — Asia, Europe Arcterigone Eskov & Marusik, 1994 — Russia, Canada Arcuphantes Chamberlin & Ivie, 1943 — North America, Asia Ascetophantes Tanasevitch & Saaristo, 2006 — Nepal Asemostera Simon, 1898 — Central America, South America Asiafroneta Tanasevitch, 2020 — Borneo Asiagone Tanasevitch, 2014 — Thailand, China, Laos Asiceratinops Eskov, 1992 — Russia Asiophantes Eskov, 1993 — Russia Asperthorax Oi, 1960 — Russia, Japan, China Asthenargellus Caporiacco, 1949 — Kenya Asthenargoides Eskov, 1993 — Russia Asthenargus Simon & Fage, 1922 — Africa, Europe, Asia Atypena Simon, 1894 — Asia Australolinyphia Wunderlich, 1976 — Australia Australophantes Tanasevitch, 2012 — Indonesia, Australia Bactrogyna Millidge, 1991 — Chile Baryphyma Simon, 1884 — Europe, Asia Baryphymula Eskov, 1992 — Japan Bathylinyphia Eskov, 1992 — Asia Bathyphantes Menge, 1866 — North America, Asia, Africa, Europe, Argentina, Oceania Batueta Locket, 1982 — Asia Bifurcia Saaristo, Tu & Li, 2006 — China, Russia Birgerius Saaristo, 1973 — France, Spain Bisetifer Tanasevitch, 1987 — Ukraine, Russia Bishopiana Eskov, 1988 — Russia Blestia Millidge, 1993 — United States Bolephthyphantes Strand, 1901 — Greenland, Russia, Kazakhstan Bolyphantes C. L. 
Koch, 1837 — Asia, Europe Bordea Bosmans, 1995 — Portugal, Spain, France Brachycerasphora Denis, 1962 — Africa, Asia Bursellia Holm, 1962 — Africa Caenonetria Millidge & Russell-Smith, 1992 — Indonesia Callitrichia Fage, 1936 — Africa, Asia Callosa Zhao & Li, 2017 — China Camafroneta Frick & Scharff, 2018 — Cameroon Cameroneta Bosmans & Jocqué, 1983 — Cameroon Canariellanum Wunderlich, 1987 — Canary Is. Canariphantes Wunderlich, 1992 — Africa, Israel, Europe Capsulia Saaristo, Tu & Li, 2006 — China Caracladus Simon, 1884 — Europe, Asia Carorita Duffey & Merrett, 1963 — Russia, China Cassafroneta Blest, 1979 — New Zealand Catacercus Millidge, 1985 — Chile Catonetria Millidge & Ashmole, 1994 — Ascension Is. Caucasopisthes Tanasevitch, 1990 — Caucasus Cautinella Millidge, 1985 — Chile Caviphantes Oi, 1960 — Romania, Asia, United States Centromerita Dahl, 1912 — United States, Canada Centromerus Dahl, 1886 — Europe, Asia, Africa, North America Centrophantes Miller & Polenec, 1975 — Slovenia, Austria Ceraticelus Simon, 1884 — North America, Europe, Russia, Cuba Ceratinella Emerton, 1882 — North America, Asia, Europe, Australia Ceratinops Banks, 1905 — United States, Canada Ceratinopsidis Bishop & Crosby, 1930 — United States Ceratinopsis Emerton, 1882 — Africa, North America, Asia, Guatemala, Cuba Ceratocyba Holm, 1962 — Kenya Cheniseo Bishop & Crosby, 1935 — United States, Canada Chenisides Denis, 1962 — Congo, Kenya Cherserigone Denis, 1954 — Algeria Chiangmaia Millidge, 1995 — Thailand Chthiononetes Millidge, 1993 — Australia Cinetata Wunderlich, 1995 — Georgia Cirrosus Zhao & Li, 2014 — China Claviphantes Tanasevitch & Saaristo, 2006 — Nepal Cnephalocotes Simon, 1884 — Canada, Russia, France Collinsia O. 
Pickard-Cambridge, 1913 — Asia, North America, Europe Coloncus Chamberlin, 1949 — United States, Canada Comorella Jocqué, 1985 — Comoros Concavocephalus Eskov, 1989 — Russia Conglin Zhao & Li, 2014 — China Connithorax Eskov, 1993 — Russia Coreorgonal Bishop & Crosby, 1935 — United States, Canada Cornicephalus Saaristo & Wunderlich, 1995 — China Cornitibia Lin, Lopardo & Uhl, 2022 - Nepal Cresmatoneta Simon, 1929 — Asia Crispiphantes Tanasevitch, 1992 — China, Korea, Russia Crosbyarachne Charitonov, 1937 — Turkey, Europe Crosbylonia Eskov, 1988 — Russia Cryptolinyphia Millidge, 1991 — Colombia Ctenophysis Millidge, 1985 — Chile Curtimeticus Zhao & Li, 2014 — China Cyphonetria Millidge, 1995 — Thailand Dactylopisthes Simon, 1884 — Europe, Asia, North America Dactylopisthoides Eskov, 1990 — Russia Decipiphantes Saaristo & Tanasevitch, 1996 — Belarus, Asia Deelemania Jocqué & Bosmans, 1983 — Africa Dendronetria Millidge & Russell-Smith, 1992 — Indonesia Denisiphantes Tu, Li & Rollard, 2005 — China Diastanillus Simon, 1926 — France, Austria, Norway Dicornua Oi, 1960 — Japan Dicymbium Menge, 1868 — North America, Asia Didectoprocnemis Denis, 1950 — Europe, Africa Diechomma Millidge, 1991 — Colombia Diplocentria Hull, 1911 — Asia, Sweden, North America Diplocephaloides Oi, 1960 — Korea, Japan, China Diplocephalus Bertkau, 1883 — Africa, Europe, North America, Asia Diploplecta Millidge, 1988 — New Zealand Diplostyla Emerton, 1882 — Turkey, Russia Diplothyron Millidge, 1991 — Venezuela Disembolus Chamberlin & Ivie, 1933 — United States, Canada Dismodicus Simon, 1884 — Russia, North America, Europe Doenitzius Oi, 1960 — Asia Dolabritor Millidge, 1991 — Colombia Donacochara Simon, 1884 — Angola Drapetisca Menge, 1866 — United States, New Zealand, Asia Drepanotylus Holm, 1945 — Asia, Bulgaria Dresconella Denis, 1950 — France Dubiaranea Mello-Leitão, 1943 — South America, Indonesia Dumoga Millidge & Russell-Smith, 1992 — Indonesia Dunedinia Millidge, 1988 — New Zealand, 
Australia Eborilaira Eskov, 1989 — Russia Eldonnia Tanasevitch, 2008 — Russia, Korea, Japan Emenista Simon, 1894 — India Emertongone Lin, Lopardo & Uhl, 2022 - USA Enguterothrix Denis, 1962 — Congo, Asia Entelecara Simon, 1884 — North America, Asia, Europe, Algeria Eordea Simon, 1899 — Indonesia Epibellowia Tanasevitch, 1996 — Russia, Japan Epiceraticelus Crosby & Bishop, 1931 — United States Epigyphantes Saaristo & Tanasevitch, 2004 — Russia Epigytholus Tanasevitch, 1996 — Russia, Mongolia Episolder Tanasevitch, 1996 — Russia Epiwubana Millidge, 1991 — Chile Eridantes Crosby & Bishop, 1933 — United States, Mexico, Canada Erigokhabarum Tanasevitch, 2022 - Russia (Far East) Erigomicronus Tanasevitch, 2018 — Japan, Russia, China Erigone Audouin, 1826 — North America, Europe, South America, Panama, Asia, Africa, Caribbean, Oceania Erigonella Dahl, 1901 — Canada, Asia, France Erigonoploides Eskov, 1989 — Russia Erigonoplus Simon, 1884 — Europe, Asia, Morocco Erigonops Scharff, 1990 — South Africa Erigophantes Wunderlich, 1995 — Indonesia Eskovia Marusik & Saaristo, 1999 — Russia, Canada, Mongolia Eskovina Kocak & Kemal, 2006 — Russia, China, Korea Esophyllas Prentice & Redak, 2012 — United States Estrandia Blauvelt, 1936 — Russia, China, Japan Eulaira Chamberlin & Ivie, 1933 — United States, Mexico Eurymorion Millidge, 1993 — Brazil, Bolivia Evansia O. Pickard-Cambridge, 1900 — Europe (Japan?) Exechopsis Millidge, 1991 — South America Exocora Millidge, 1991 — Brazil, Venezuela, Bolivia Fageiella Kratochvíl, 1934 — Serbia, Montenegro Falklandoglenes Usher, 1983 — Falkland Is. Fissiscapus Millidge, 1991 — Ecuador, Colombia Fistulaphantes Tanasevitch & Saaristo, 2006 — Nepal Flagelliphantes Saaristo & Tanasevitch, 1996 — Russia Floricomus Crosby & Bishop, 1925 — United States, Canada Florinda O. 
Pickard-Cambridge, 1896 — United States, Mexico Floronia Simon, 1887 — Ecuador, Asia Formiphantes Saaristo & Tanasevitch, 1996 — Europe Frederickus Paquin, Dupérré, Buckle & Crawford, 2008 — United States, Canada Frontella Kulczyński, 1908 — Russia Frontinella F. O. Pickard-Cambridge, 1902 — North America, China, El Salvador Frontinellina van Helsdingen, 1969 — Asia, South Africa Frontiphantes Wunderlich, 1987 — Madeira Fusciphantes Oi, 1960 — Japan Gibbafroneta Merrett, 2004 — Congo Gibothorax Eskov, 1989 — Russia Gigapassus Miller, 2007 — Argentina Gladiata Zhao & Li, 2014 — China Glebala Zhao & Li, 2014 — China Glomerosus Zhao & Li, 2014 — China Glyphesis Simon, 1926 — Asia, North America, Europe Gnathonargus Bishop & Crosby, 1935 — United States Gnathonarium Karsch, 1881 — Asia, North America Gnathonaroides Bishop & Crosby, 1938 — United States, Canada Gonatium Menge, 1868 — Asia, Europe, North America, Africa Gonatoraphis Millidge, 1991 — Colombia Goneatara Bishop & Crosby, 1935 — United States Gongylidiellum Simon, 1884 — Africa, Asia, Romania, United States, Argentina Gongylidioides Oi, 1960 — Asia Gongylidium Menge, 1868 — Asia, Italy Grammonota Emerton, 1882 — North America, Colombia, Central America, Caribbean Graphomoa Chamberlin, 1924 — United States Gravipalpus Millidge, 1991 — Brazil, Peru, Argentina Habreuresis Millidge, 1991 — Chile Halorates Hull, 1911 — Kazakhstan, Pakistan Haplinis Simon, 1894 — New Zealand, Australia Haplomaro Miller, 1970 — Angola Helophora Menge, 1866 — Russia, China, United States Helsdingenia Saaristo & Tanasevitch, 2003 — Asia, Africa Herbiphantes Tanasevitch, 1992 — Russia, Korea, Japan Heterolinyphia Wunderlich, 1973 — Bhutan, Nepal Heterotrichoncus Wunderlich, 1970 — Europe, Russia Hilaira Simon, 1884 — Asia, North America, Europe Himalafurca Tanasevitch, 2021 — Nepal Himalaphantes Tanasevitch, 1992 — Asia Holma Locket, 1974 — Angola Holmelgonia Jocqué & Scharff, 2007 — Africa Holminaria Eskov, 1991 — Russia, Mongolia, 
China Horcotes Crosby & Bishop, 1933 — United States, Russia, Canada Houshenzinus Tanasevitch, 2006 — China Hubertella Platnick, 1989 — Nepal Hybauchenidium Holm, 1973 — Russia, North America, Europe Hybocoptus Simon, 1884 — Algeria, Morocco, France Hylyphantes Simon, 1884 — Asia Hyperafroneta Blest, 1979 — New Zealand Hypomma Dahl, 1886 — Asia, Macedonia, Equatorial Guinea, United States Hypselistes Simon, 1894 — Asia, North America Hypselocara Millidge, 1991 — Venezuela Hypsocephalus Millidge, 1978 — France, Switzerland, Italy Ibadana Locket & Russell-Smith, 1980 — Nigeria, Cameroon Iberoneta Deeleman-Reinhold, 1984 — Spain Icariella Brignoli, 1979 — Greece Idionella Banks, 1893 — United States, Mexico Improphantes Saaristo & Tanasevitch, 1996 — Asia, Africa, Europe Incestophantes Tanasevitch, 1992 — Asia, Europe, North America Indophantes Saaristo & Tanasevitch, 2003 — Asia Intecymbium Miller, 2007 — Chile, Argentina Ipa Saaristo, 2007 — Asia, Europe Ipaoides Tanasevitch, 2008 — China Islandiana Braendegaard, 1932 — North America, Russia, Europe Ivielum Eskov, 1988 — Russia, Mongolia, Canada Jacksonella Millidge, 1951 — Cyprus, Greece, Korea Jalapyphantes Gertsch & Davis, 1946 — Mexico, Ecuador Janetschekia Schenkel, 1939 — Europe Javagone Tanasevitch, 2020 — Java Javanaria Tanasevitch, 2020 — Java Javanyphia Tanasevitch, 2020 — Java Jilinus Lin, Lopardo & Uhl, 2022 - Russia (Far East), China, Korea Johorea Locket, 1982 — Malaysia Juanfernandezia Koçak & Kemal, 2008 — Chile Kaestneria Wiehle, 1956 — Asia, North America Kagurargus Ono, 2007 — Japan Kalimagone Tanasevitch, 2017 — Malaysia Karita Tanasevitch, 2007 — Europe, Russia Kenocymbium Millidge & Russell-Smith, 1992 — Malaysia, Indonesia, Thailand Ketambea Millidge & Russell-Smith, 1992 — Asia Kikimora Eskov, 1988 — Finland, Russia Knischatiria Wunderlich, 1976 — Australia, Indonesia, Malaysia Koinothrix Jocqué, 1981 — Cape Verde Is. 
Kolymocyba Eskov, 1989 — Russia Kratochviliella Miller, 1938 — Europe Labicymbium Millidge, 1991 — South America Labulla Simon, 1884 — Europe, Russia Labullinyphia van Helsdingen, 1985 — Sri Lanka Labullula Strand, 1913 — Cameroon, Angola, Comoros Laetesia Simon, 1908 — Oceania, Thailand Lamellasia Tanasevitch, 2014 — Thailand Laminacauda Millidge, 1985 — South America, Panama Laminafroneta Merrett, 2004 — Africa Laogone Tanasevitch, 2014 — China, Laos Laperousea Dalmas, 1917 — Australia, New Zealand Lasiargus Kulczyński, 1894 — Asia Lepthyphantes Menge, 1866 — Asia, Africa, Europe, North America, Chile Leptorhoptrum Kulczyński, 1894 — Russia, Japan Leptothrix Menge, 1869 — Europe Lessertia Smith, 1908 — Spain, Africa, Canada, New Zealand Lessertinella Denis, 1947 — Europe Lidia Saaristo & Marusik, 2004 — Kyrgyzstan, Kazakhstan Limoneta Bosmans & Jocqué, 1983 — Cameroon, Kenya, South Africa Linyphantes Chamberlin & Ivie, 1942 — United States, Canada, Mexico Linyphia Latreille, 1804 — North America, Asia, South America, Central America, Africa, Europe, Oceania Locketidium Jocqué, 1981 — Malawi, Kenya, Tanzania Locketiella Millidge & Russell-Smith, 1992 — Indonesia Locketina Kocak & Kemal, 2006 — Indonesia, Malaysia Lomaita Bryant, 1948 — Dominican Republic Lophomma Menge, 1868 — United States, Russia Lotusiphantes Chen & Yin, 2001 — China Lucrinus O. Pickard-Cambridge, 1904 — South Africa Lygarina Simon, 1894 — South America Machadocara Miller, 1970 — Congo, Zambia Macrargus Dahl, 1886 — Europe, Asia Maculoncus Wunderlich, 1995 — Taiwan, Greece, Israel Malkinola Miller, 2007 — Chile Mansuphantes Saaristo & Tanasevitch, 1996 — Europe, Asia Maorineta Millidge, 1988 — New Zealand, Indonesia Maro O. 
Pickard-Cambridge, 1906 — North America, Asia Martensinus Wunderlich, 1973 — Nepal Masikia Millidge, 1984 — Russia, United States, Canada Maso Simon, 1884 — United States, Portugal, Algeria, Asia Masoncus Chamberlin, 1949 — United States, Canada Masonetta Chamberlin & Ivie, 1939 — United States Mecopisthes Simon, 1926 — Europe, Africa, Asia Mecynargoides Eskov, 1988 — Russia, Mongolia Mecynargus Kulczyński, 1894 — Asia, North America, Europe Mecynidis Simon, 1894 — Africa Megafroneta Blest, 1979 — New Zealand Megalepthyphantes Wunderlich, 1994 — Africa, Asia, Greece Mermessus O. Pickard-Cambridge, 1899 — North America, Caribbean, Central America, South America, Asia, South Africa, New Zealand Mesasigone Tanasevitch, 1989 — Asia Metafroneta Blest, 1979 — New Zealand Metaleptyphantes Locket, 1968 — Africa, Indonesia Metamynoglenes Blest, 1979 — New Zealand Metapanamomops Millidge, 1977 — Germany, Ukraine Metopobactrus Simon, 1884 — Europe, North America, Asia Micrargus Dahl, 1886 — North America, Europe, Asia, Uganda Microbathyphantes van Helsdingen, 1985 — Asia, Africa Microctenonyx Dahl, 1886 — Italy, Africa, United States, Oceania Microcyba Holm, 1962 — Africa Microlinyphia Gerhardt, 1928 — Africa, North America, Asia Microneta Menge, 1869 — Sweden, South America, North America, Papua New Guinea, Saint Vincent and the Grenadines, Asia Microplanus Millidge, 1991 — Colombia, Panama Midia Saaristo & Wunderlich, 1995 — Europe Miftengris Eskov, 1993 — Russia Millidgea Locket, 1968 — Angola Millidgella Kammerer, 2006 — Chile, Argentina Minicia Thorell, 1875 — Asia, Europe, Algeria Minyriolus Simon, 1884 — Argentina, Italy Mioxena Simon, 1926 — Congo, Kenya, Angola Mitrager van Helsdingen, 1985 — Indonesia Moebelia Dahl, 1886 — Germany, China Moebelotinus Wunderlich, 1995 — Russia, Mongolia Molestia Tu, Saaristo & Li, 2006 Monocephalus Smith, 1906 — Europe Monocerellus Tanasevitch, 1983 — Russia Montilaira Chamberlin, 1921 — United States Moreiraxena Miller, 1970 — 
Angola Moyosi Miller, 2007 — Guyana, Brazil, Argentina Mughiphantes Saaristo & Tanasevitch, 1999 — Asia, Europe Murphydium Jocqué, 1996 — Kenya, Somalia Mycula Schikora, 1994 — Germany, Austria, Italy Myrmecomelix Millidge, 1993 — Peru, Ecuador Mythoplastoides Crosby & Bishop, 1933 — United States Napometa Benoit, 1977 — St. Helena Nasoona Locket, 1982 — Asia, Venezuela Nasoonaria Wunderlich & Song, 1995 — Asia Nematogmus Simon, 1884 — Asia Nenilinium Eskov, 1988 — Russia, Mongolia Nentwigia Millidge, 1995 — Thailand, Indonesia Neocautinella Baert, 1990 — Ecuador, Peru, Bolivia Neodietrichia Özdikmen, 2008 — United States, Canada Neoeburnella Koçak, 1986 — Côte d'Ivoire Neomaso Forster, 1970 — Chile, Argentina, Brazil Neonesiotes Millidge, 1991 — Seychelles, Fiji, Samoa Neriene Blackwall, 1833 — Asia, Africa, North America, Europe Neserigone Eskov, 1992 — Russia, Japan Nesioneta Millidge, 1991 — Asia, Seychelles, Fiji Nihonella Ballarin & Yamasaki, 2021 — Japan Nippononeta Eskov, 1992 — Asia Nipponotusukuru Saito & Ono, 2001 — Japan Nispa Eskov, 1993 — Russia, Japan Notholepthyphantes Millidge, 1985 — Chile Nothophantes Merrett & Stevens, 1995 — Britain Notiogyne Tanasevitch, 2007 — Russia Notiohyphantes Millidge, 1985 — Mexico, South America Notiomaso Banks, 1914 — Chile, Argentina Notioscopus Simon, 1884 — South Africa, Asia Notolinga Lavery & Dupérré, 2019 — Argentina & Falkland Is. 
Novafroneta Blest, 1979 — New Zealand Novafrontina Millidge, 1991 — South America, Mexico Novalaetesia Millidge, 1988 — New Zealand Nusoncus Wunderlich, 2008 — Europe Oaphantes Chamberlin & Ivie, 1943 — United States Obrimona Strand, 1934 — Sri Lanka Obscuriphantes Saaristo & Tanasevitch, 2000 — Europe, Asia Oculocornia Oliger, 1985 — Russia Oedothorax Bertkau, 1883 — Europe, North America, Asia, Argentina, Africa Oia Wunderlich, 1973 — Asia Oilinyphia Ono & Saito, 1989 — China, Thailand, Japan Okhotigone Eskov, 1993 — Russia, China, Japan Onychembolus Millidge, 1985 — Chile, Argentina Ophrynia Jocqué, 1981 — Tanzania, Malawi, Cameroon Oreocyba Holm, 1962 — Kenya, Uganda Oreoneta Kulczyński, 1894 — Asia, North America, Europe Oreonetides Strand, 1901 — Asia, North America, Europe Oreophantes Eskov, 1984 — United States, Canada Orfeo Miller, 2007 — Brazil Origanates Crosby & Bishop, 1933 — United States Orsonwelles Hormiga, 2002 — Hawaii Oryphantes Hull, 1932 — North America, Asia Ostearius Hull, 1911 — South Africa, China, New Zealand Ouedia Bosmans & Abrous, 1992 — Europe, Algeria Pachydelphus Jocqué & Bosmans, 1983 — Gabon, Sierra Leone, Côte d'Ivoire Pacifiphantes Eskov & Marusik, 1994 — North America, Asia Pahangone Tanasevitch, 2018 — Malaysia Paikiniana Eskov, 1992 — Korea, China, Japan Palaeohyphantes Millidge, 1984 — Australia Palliduphantes Saaristo & Tanasevitch, 2001 — Europe, Asia, Africa Panamomops Simon, 1884 — Europe, Asia Paracornicularia Crosby & Bishop, 1931 — United States Paracymboides Tanasevitch, 2011 — India Paraeboria Eskov, 1990 — Russia Parafroneta Blest, 1979 — New Zealand Paraglyphesis Eskov, 1991 — Russia Paragongylidiellum Wunderlich, 1973 — India, Nepal Paraletes Millidge, 1991 — Peru, Brazil Parameioneta Locket, 1982 — Asia Parapelecopsis Wunderlich, 1992 — Portugal, Georgia Parasisis Eskov, 1984 — Asia Paratapinocyba Saito, 1986 — Japan Paratmeticus Marusik & Koponen, 2010 — Russia, Japan Parawubanoides Eskov & Marusik, 1992 — 
Russia, Mongolia Parbatthorax Tanasevitch, 2019 — Nepal Parhypomma Eskov, 1992 — Japan Paro Berland, 1942 — Austral Is. Parvunaria Tanasevitch, 2018 — Myanmar Patagoneta Millidge, 1985 — Chile Pecado Hormiga & Scharff, 2005 — Spain, Morocco, Algeria Pelecopsidis Bishop & Crosby, 1935 — United States Pelecopsis Simon, 1864 — Africa, Europe, North America, Asia Peponocranium Simon, 1884 — Asia, Europe Perlongipalpus Eskov & Marusik, 1991 — Russia, Mongolia Perregrinus Tanasevitch, 1992 — Asia, Canada Perro Tanasevitch, 1992 — Russia, Canada Phanetta Keyserling, 1886 — United States Phlattothrata Crosby & Bishop, 1933 — United States, Russia Phyllarachne Millidge & Russell-Smith, 1992 — Indonesia Piesocalus Simon, 1894 — Indonesia Piniphantes Saaristo & Tanasevitch, 1996 — Europe, Asia Pityohyphantes Simon, 1929 — North America, Asia Plaesianillus Simon, 1926 — France Platyspira Song & Li, 2009 — China Plectembolus Millidge & Russell-Smith, 1992 — Philippines, Malaysia, Indonesia Plesiophantes Heimer, 1981 — Russia, Georgia, Turkey Plicatiductus Millidge & Russell-Smith, 1992 — Indonesia Pocadicnemis Simon, 1884 — North America, Europe, Asia Pocobletus Simon, 1894 — Saint Vincent and the Grenadines, Costa Rica, Venezuela Poecilafroneta Blest, 1979 — New Zealand Poeciloneta Kulczyński, 1894 — Asia, North America Porrhomma Simon, 1884 — Asia, North America, Europe Praestigia Millidge, 1954 — Canada, Europe, Asia Primerigonina Wunderlich, 1995 — Panama Prinerigone Millidge, 1988 — Africa, Asia Priperia Simon, 1904 — Hawaii Procerocymbium Eskov, 1989 — Russia, Canada Proelauna Jocqué, 1981 — Angola, Tanzania, Malawi Proislandiana Tanasevitch, 1985 — Russia Promynoglenes Blest, 1979 — New Zealand Pronasoona Millidge, 1995 — Thailand, Malaysia Prosoponoides Millidge & Russell-Smith, 1992 — Asia Protoerigone Blest, 1979 — New Zealand Pseudafroneta Blest, 1979 — New Zealand Pseudocarorita Wunderlich, 1980 — Central Europe Pseudocyba Tanasevitch, 1984 — Russia, Kazakhstan 
Pseudohilaira Eskov, 1990 — Russia Pseudomaro Denis, 1966 — Europe Pseudomaso Locket & Russell-Smith, 1980 — Nigeria Pseudomicrargus Eskov, 1992 — Japan Pseudomicrocentria Miller, 1970 — South Africa, Malaysia Pseudoporrhomma Eskov, 1993 — Russia Pseudotyphistes Brignoli, 1972 — South America Pseudowubana Eskov & Marusik, 1992 — Russia, Mongolia Psilocymbium Millidge, 1991 — South America Putaoa Hormiga & Tu, 2008 — Taiwan, China Racata Millidge, 1995 — Indonesia, Thailand Rhabdogyna Millidge, 1985 — Chile Ringina Tambs-Lyche, 1954 — Crozet Is. Russocampus Tanasevitch, 2004 — Russia Ryojius Saito & Ono, 2001 — Korea, Japan, China Saaristoa Millidge, 1978 — Japan, United States Sachaliphantes Saaristo & Tanasevitch, 2004 — Asia Saitonia Eskov, 1992 — China, Japan, Korea Saloca Simon, 1926 — Turkey, Nepal, Russia Satilatlas Keyserling, 1886 — United States, Canada, Russia Sauron Eskov, 1995 — Russia, Kazakhstan Savignia Blackwall, 1833 — Asia, United States, Australia, Europe, Comoros Savigniorrhipis Wunderlich, 1992 — Azores Scandichrestus Wunderlich, 1995 — Sweden, Finland, Russia Sciastes Bishop & Crosby, 1938 — Europe, Russia, North America Scirites Bishop & Crosby, 1938 — United States, Canada Scironis Bishop & Crosby, 1938 — United States Scolecura Millidge, 1991 — Brazil, Colombia, Argentina Scolopembolus Bishop & Crosby, 1938 — United States Scotargus Simon, 1913 — Algeria, Russia Scotinotylus Simon, 1884 — Asia, North America, Europe Scutpelecopsis Marusik & Gnelitsa, 2009 — Asia, Romania Scylaceus Bishop & Crosby, 1938 — United States, Canada Scyletria Bishop & Crosby, 1938 — United States, Canada Selenyphantes Gertsch & Davis, 1946 — Mexico, Guatemala Semljicola Strand, 1906 — Asia, Europe, North America Sengletus Tanasevitch, 2008 — Egypt, Israel, Iran Shaanxinus Tanasevitch, 2006 — China Shanus Tanasevitch, 2006 — China Sibirocyba Eskov & Marusik, 1994 — Russia Silometopoides Eskov, 1990 — Asia, North America, Greenland Silometopus Simon, 1926 — Europe, 
Asia Simplicistilus Locket, 1968 — West & Central Africa Singatrichona Tanasevitch, 2019 — Singapore Sinolinyphia Wunderlich & Li, 1995 Sinopimoa Li & Wunderlich, 2008 — China Sintula Simon, 1884 — Asia, Europe, Africa Sisicottus Bishop & Crosby, 1938 — United States, Canada, Russia Sisicus Bishop & Crosby, 1938 — Russia, United States, Canada Sisis Bishop & Crosby, 1938 — United States, Canada Sisyrbe Bishop & Crosby, 1938 — United States Sitalcas Bishop & Crosby, 1938 — United States Smerasia Zhao & Li, 2014 — China Smermisia Simon, 1894 — South America, Costa Rica Smodix Bishop & Crosby, 1938 — United States, Canada Solenysa Simon, 1894 — Asia Soucron Crosby & Bishop, 1936 — United States, Canada Souessa Crosby & Bishop, 1936 — United States Souessoula Crosby & Bishop, 1936 — United States Sougambus Crosby & Bishop, 1936 — United States, Canada Souidas Crosby & Bishop, 1936 — United States Soulgas Crosby & Bishop, 1936 — United States Spanioplanus Millidge, 1991 — Venezuela, Peru Sphecozone O. 
Pickard-Cambridge, 1871 — South America, United States, Trinidad Spiralophantes Tanasevitch & Saaristo, 2006 — Nepal Spirembolus Chamberlin, 1920 — United States, Canada, Mexico Stemonyphantes Menge, 1866 — Asia, Ukraine, North America Sthelota Simon, 1894 — Panama, Guatemala Stictonanus Millidge, 1991 — Chile Strandella Oi, 1960 — Asia Strongyliceps Fage, 1936 — Kenya, Uganda Styloctetor Simon, 1884 — Europe, North America, Asia Subbekasha Millidge, 1984 — Canada Syedra Simon, 1884 — Europe, Asia, Congo Symmigma Crosby & Bishop, 1933 — United States Tachygyna Chamberlin & Ivie, 1939 — United States, Canada Taibainus Tanasevitch, 2006 — China Taibaishanus Tanasevitch, 2006 — China Tallusia Lehtinen & Saaristo, 1972 — Asia, Greece Tanasevitchia Marusik & Saaristo, 1999 — Russia Tapinocyba Simon, 1884 — Europe, Algeria, North America, Asia Tapinocyboides Wiehle, 1960 — India Tapinopa Westring, 1851 — United States, Europe, Asia Tapinotorquis Dupérré & Paquin, 2007 — United States, Canada Taranucnus Simon, 1884 — Europe, Asia, United States Tarsiphantes Strand, 1905 — Russia, Canada, Greenland Tchatkalophantes Tanasevitch, 2001 — Asia Tegulinus Tanasevitch, 2017 — Indonesia Tennesseellum Petrunkevitch, 1925 — United States Tenuiphantes Saaristo & Tanasevitch, 1996 — Asia, Europe, Africa, North America, South America, New Zealand Ternatus Sun, Li & Tu, 2012 Tessamoro Eskov, 1993 — Russia Thainetes Millidge, 1995 — Thailand Thaiphantes Millidge, 1995 — Thailand Thaleria Tanasevitch, 1984 — Russia, United States Thapsagus Simon, 1894 — Madagascar Thaumatoncus Simon, 1884 — Europe, Algeria, Israel Theoa Saaristo, 1995 — Asia, Seychelles Theoneta Eskov & Marusik, 1991 — Russia Theonina Simon, 1929 — Russia, Algeria Thyreobaeus Simon, 1889 — Madagascar Thyreosthenius Simon, 1884 — Russia Tibiaster Tanasevitch, 1987 — Kazakhstan Tibioploides Eskov & Marusik, 1991 — Asia, Estonia Tibioplus Chamberlin & Ivie, 1947 — Asia, United States Tiso Simon, 1884 — Canada, Greenland, 
Asia Tmeticodes Ono, 2010 — Japan Tmeticus Menge, 1868 — Asia, North America Tojinium Saito & Ono, 2001 — Japan Toltecaria Miller, 2007 — Mexico Tomohyphantes Millidge, 1995 — Indonesia Toschia Caporiacco, 1949 — Africa Totua Keyserling, 1891 — Brazil Trachyneta Holm, 1968 — Congo, Malawi Traematosisis Bishop & Crosby, 1938 — United States Trematocephalus Dahl, 1886 — Asia, France Trichobactrus Wunderlich, 1995 — Mongolia Trichoncoides Denis, 1950 — France, Asia Trichoncus Simon, 1884 — Europe, Africa, Asia Trichoncyboides Wunderlich, 2008 — Switzerland, Germany, Austria Trichopterna Kulczyński, 1894 — Asia, Europe, Africa Trichopternoides Wunderlich, 2008 — Europe Triplogyna Millidge, 1991 — Brazil, Argentina, Colombia Troglohyphantes Joseph, 1881 — Europe, Algeria, Asia Troxochrota Kulczyński, 1894 — Russia Troxochrus Simon, 1884 — Europe, Asia, Angola Tubercithorax Eskov, 1988 — Russia Tunagyna Chamberlin & Ivie, 1933 — Russia, Canada, United States Turbinellina Millidge, 1993 — Chile, Argentina Turinyphia van Helsdingen, 1982 — China, Korea, Japan Tusukuru Eskov, 1993 — United States, Russia Tutaibo Chamberlin, 1916 — North America, South America, Guatemala Tybaertiella Jocqué, 1979 — Côte d'Ivoire, Nigeria, Ethiopia Typhistes Simon, 1894 — Sri Lanka, Ethiopia, South Africa Typhlonyphia Kratochvíl, 1936 — Croatia Typhochrestinus Eskov, 1990 — Russia Typhochrestoides Eskov, 1990 — Russia Typhochrestus Simon, 1884 — Europe, Africa, Asia, North America Uahuka Berland, 1935 — Marquesas Is. Uapou Berland, 1935 — Marquesas Is. 
Ulugurella Jocqué & Scharff, 1986 — Tanzania Ummeliata Strand, 1942 — Asia Uralophantes Esyunin, 1992 — Ukraine, Russia Ussurigone Eskov, 1993 — Russia Uusitaloia Marusik, Koponen & Danilov, 2001 — Russia Vagiphantes Saaristo & Tanasevitch, 2004 — Central Asia Venia Seyfulina & Jocqué, 2009 — Kenya Vermontia Millidge, 1984 — United States, Canada, Russia Vesicapalpus Millidge, 1991 — Brazil, Argentina Vietnagone Tanasevitch, 2019 — China, Vietnam Viktorium Eskov, 1988 — Russia Vittatus Zhao & Li, 2014 — China Wabasso Millidge, 1984 — Russia, North America, Greenland Walckenaeria Blackwall, 1833 — Europe, Asia, Africa, North America, Central America, Colombia, Cuba Walckenaerianus Wunderlich, 1995 — Asia, Bulgaria Wiehlea Braun, 1959 — Western Europe Wiehlenarius Eskov, 1990 — Russia, Europe Wubana Chamberlin, 1919 — United States Wubanoides Eskov, 1986 — Russia, Japan, Mongolia Xim Ibarra-Núñez, Chamé-Vázquez & Maya-Morales, 2021 — Mexico Yakutopus Eskov, 1990 — Russia Yuelushannus Irfan et al., 2020 — China Zerogone Eskov & Marusik, 1994 — Russia Zhezhoulinyphia Irfan, Zhou & Peng, 2019 — China Zilephus Simon, 1902 — Argentina Zornella Jackson, 1932 — North America, Asia Zygottus Chamberlin, 1949 — United States
Biology and health sciences
Spiders
Animals
https://en.wikipedia.org/wiki/Sharecropping
Sharecropping
Sharecropping is a legal arrangement in which a landowner allows a tenant (sharecropper) to use the land in return for a share of the crops produced on that land. Sharecropping is not to be confused with tenant farming, which gives the tenant a higher economic and social status. Sharecropping has a long history, and a wide range of situations and types of agreement have used a form of the system. Some are governed by tradition, others by law. The French métayage, the Catalan masoveria, the Castilian mediero, the Slavic połownictwo and izdolshchina, the Italian mezzadria, and the Islamic system of muzara‘a (المزارعة) are examples of legal systems that have supported sharecropping. Overview Under a sharecropping system, landowners provided a share of land to be worked by the sharecropper, and usually provided other necessities such as housing, tools, seed, or working animals. Local merchants usually provided food and other supplies to the sharecropper on credit. In exchange for the land and supplies, the cropper paid the owner a share of the crop at the end of the season, typically one-half to two-thirds. The cropper used his own share to pay off his debt to the merchant. If there was any cash left over, the cropper kept it—but if his share came to less than what he owed, he remained in debt. A new system of credit, the crop lien, became closely associated with sharecropping. Under this system, a planter or merchant extended a line of credit to the sharecropper while taking the year's crop as collateral. The sharecropper could then draw food and supplies all year long. When the crop was harvested, the planter or merchant who held the lien sold the harvest for the sharecropper and settled the debt. Sociologist Jeffery M. Paige made a distinction between the centralized sharecropping found on cotton plantations and the decentralized sharecropping used with other crops. The former is characterized by long-lasting tenure. 
Tenants are tied to the landlord through the plantation store. This form of tenure tends to be replaced by paid salaries as markets penetrate. Decentralized sharecropping involves virtually no role for the landlord: plots are scattered, peasants manage their own labor and the landowners do not market the crops. This form of tenure becomes more common as markets penetrate. Farmers who farmed land belonging to others but owned their own mule and plow were called tenant farmers; they owed the landowner a smaller share of their crops, as the landowner did not have to provide them with as much in the way of supplies. Application by region Historically, sharecropping occurred extensively in Scotland, Ireland and colonial Africa. Use of the sharecropper system has also been identified in England (as the practice of "farming to halves"). It was widely used in the Southern United States during the Reconstruction era (1865–1877) that followed the American Civil War, which was economically devastating to the Southern states. It is still used in many rural poor areas of the world today, notably in Pakistan, India, and Bangladesh. Africa In the settler colonies of colonial Africa, sharecropping was a feature of agricultural life. White farmers, who owned most of the land, were frequently unable to work the whole of their farms for lack of capital. They therefore had African farmers work the excess on a sharecropping basis. In South Africa the 1913 Natives' Land Act outlawed the ownership of land by Africans in areas designated for white ownership and effectively reduced the status of most sharecroppers to tenant farmers and then to farm laborers. In the 1960s, generous subsidies to white farmers meant that most could afford to work their entire farms, and sharecropping faded out. The arrangement has reappeared in other African countries in modern times, including Ghana and Zimbabwe. Economic historian Pius S. 
Nyambara argued that Eurocentric historiographical devices such as "feudalism" or "slavery", often qualified by weak prefixes like "semi-" or "quasi-", are not helpful in understanding the antecedents and functions of sharecropping in Africa. United States Prior to the Civil War, sharecropping is known to have existed in Mississippi and is believed to have been in place in Tennessee. However, it was not until the economic upheaval caused by the American Civil War and the end of slavery during and after Reconstruction that it became widespread in the South. It is theorized that sharecropping in the United States originated in the Natchez District, roughly centered on Adams County, Mississippi, and its county seat, Natchez. After the war, plantations and other lands throughout the South were seized by the federal government. In January 1865, General William T. Sherman issued Special Field Orders No. 15, which announced that he would temporarily grant newly freed families 40 acres of this seized land on the islands and coastal regions of Georgia. Many believed that this policy would be extended to all formerly enslaved people and their families as repayment for their treatment at the end of the war. In the summer of 1865, President Andrew Johnson, as one of the first acts of Reconstruction, instead ordered all land under federal control be returned to the owners from whom it had been seized. Southern landowners thus found themselves with a great deal of land but no liquid assets to pay for labor. They also maintained the "belief that gangs afforded the most efficient means of labor organization", something nearly all formerly enslaved people resisted. Preferring "to organize themselves into kin groups", as well as to "minimize chances for white male-black female contact by removing their female kin from work environments supervised closely by whites", black southerners were "determined to resist the old slave ways". 
Nevertheless, many formerly enslaved people, now called freedmen, having no land or other assets of their own, needed to work to support their families. A sharecropping system centered on cotton, a major cash crop, developed as a result. Large plantations were subdivided into plots that could be worked by sharecroppers. Initially, sharecroppers in the American South were almost all formerly enslaved black people, but eventually cash-strapped indigent white farmers were integrated into the system. During Reconstruction, the federal Freedmen's Bureau oversaw the arrangements for freedmen and wrote and enforced their contracts. American sharecroppers worked a section of the plantation independently. In South Carolina, Georgia, Alabama and Mississippi, the dominant crop was usually cotton. In other areas it could be tobacco, rice, or sugar. At harvest time the crop was sold and the cropper received half of the cash paid for the crop on his parcel. Sharecroppers also often received their farming tools and all other goods from the landowner they were contracted with. Landowners dictated decisions relating to the crop mix, and sharecroppers often agreed to sell their portion of the crop back to the landowner, thus being subjected to manipulated prices. In addition, landowners could apply pressure to their tenants by threatening not to renew the lease at the end of the growing season. Sharecropping often proved economically problematic, as the landowners held significant economic control. In the Reconstruction era, sharecropping was one of the few options for penniless freedmen to support themselves and their families. Other solutions included the crop-lien system (where the farmer was extended credit for seed and other supplies by the merchant), a rent labor system (where the farmer rents the land but keeps their entire crop), and the wage system (where the worker earns a fixed wage but keeps none of the crop). 
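The year-end arithmetic of these arrangements can be made concrete. The following sketch compares a cropper's position under the share, rent, and wage systems described above; all dollar figures, the half-share, and the function names are hypothetical illustrations, not historical data.

```python
def share_settlement(harvest_value, share, debt):
    """Sharecropper keeps a share of the crop's value, then settles store debt."""
    cropper_share = harvest_value * share
    return cropper_share - debt  # negative => the cropper starts next year in debt

def rent_settlement(harvest_value, cash_rent, debt):
    """Rent tenant keeps the whole crop but owes a fixed cash rent plus any debt."""
    return harvest_value - cash_rent - debt

def wage_settlement(annual_wage):
    """Wage laborer keeps none of the crop; earns a fixed wage regardless of harvest."""
    return annual_wage

# Hypothetical season: a $300 crop, half-share contract, $120 of store debt
print(share_settlement(300, 0.5, 120))   # 30.0 -> a small surplus
# A poor harvest worth $200 leaves the same cropper $20 in debt
print(share_settlement(200, 0.5, 120))   # -20.0
```

The second call shows the debt-carryover mechanism the article describes: when the cropper's share falls below what is owed at the store, the shortfall rolls into the next season.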
Sharecropping as historically practiced in the American South was more economically productive than the gang-system plantations using enslaved workers, though less productive than modern agricultural techniques. Sharecropping continued to be a significant institution in many states for decades following the Civil War. By the early 1930s, there were 5.5 million white tenant farmers, sharecroppers, and mixed cropping/laborers in the United States, and 3 million Blacks. In Tennessee, sharecroppers operated approximately one-third of all farm units in the state in the 1930s, with white people making up two-thirds or more of the sharecroppers. In Mississippi, by 1900, 36% of all white farmers were tenants or sharecroppers, while 85% of black farmers were. In Georgia, fewer than 16,000 farms were operated by black owners in 1910, while, at the same time, African Americans managed 106,738 farms as tenants. Around this time, sharecroppers began to form unions protesting against poor treatment, beginning in Tallapoosa County, Alabama in 1931 and Arkansas in 1934. Membership in the Southern Tenant Farmers Union included both blacks and poor whites, who used meetings, protests, and labor strikes to push for better treatment. The success of these actions frightened and enraged landlords, who responded with aggressive tactics. Landless farmers who fought the sharecropping system were socially denounced, harassed by legal and illegal means, and physically attacked by officials, landlords' agents, or, in extreme cases, angry mobs. Sharecroppers' strikes in Arkansas and the Missouri Bootheel, including the 1939 Missouri Sharecroppers' Strike, were documented in the newsreel Oh Freedom After While. The plight of a sharecropper was addressed in the song "Sharecropper's Blues", recorded by Charlie Barnet and His Orchestra in 1944. The sharecropping system in the U.S. 
increased during the Great Depression with the creation of tenant farmers following the failure of many small farms throughout the Dust Bowl. Traditional sharecropping declined after mechanization of farm work became economical beginning in the late 1930s and early 1940s. As a result, many sharecroppers were forced off the farms and migrated to cities to work in factories, or became migrant workers in the Western United States during World War II. By the end of the 1960s, sharecropping had disappeared in the United States. Sharecropping and socioeconomic status About two-thirds of sharecroppers were white, the rest black. Sharecroppers, the poorest of the poor, organized for better conditions. The racially integrated Southern Tenant Farmers Union made gains for sharecroppers in the 1930s. Sharecropping had diminished by the 1940s due to the Great Depression, farm mechanization, and other factors. Impacts Sharecropping may have been harmful to tenants, with many cases of high interest rates, unpredictable harvests, and unscrupulous landlords and merchants often keeping tenant farm families severely indebted. The debt was often compounded year on year, leaving the cropper vulnerable to intimidation and shortchanging. Nevertheless, it appeared to be inevitable, with no serious alternative unless the croppers left agriculture. Landlords opt for sharecropping to avoid the administrative costs and shirking that occur on plantations and haciendas. It is preferred to cash tenancy because cash tenants take all the risks: any harvest failure hurts them and not the landlord, so they tend to demand lower rents than sharecroppers. Some economists have argued that sharecropping is not as exploitative as it is often perceived. John Heath and Hans P. 
Binswanger write that "evidence from around the world suggests that sharecropping is often a way for differently endowed enterprises to pool resources to mutual benefit, overcoming credit restraints and helping to manage risk." Sharecropping agreements can be made fairly, as a form of tenant farming or sharefarming that has a variable rental payment, paid in arrears. There are three different types of contract: first, workers rent plots of land from the owner for a certain sum and keep the whole crop; second, workers work the land and earn a fixed wage from the landowner but keep some of the crop; third, no money changes hands, but the worker and landowner each keep a share of the crop. According to sociologist Edward Royce, "adherents of the neoclassical approach" argued that sharecropping incentivized laborers by giving them a vested interest in the crop. American planters were wary of this interest, as they felt it would lead to African Americans demanding rights of partnership. Many black laborers resisted the unilateral authority that landowners hoped to achieve, further complicating relations between landowners and sharecroppers. Sharecropping may allow women to have access to arable land, albeit not as owners, in places where ownership rights are vested only in men. Economic theories of share tenancy The theory of share tenancy was long dominated by Alfred Marshall's famous footnote in Book VI, Chapter X.14 of Principles, where he illustrated the inefficiency of agricultural share-contracting. Steven N. S. Cheung (1969) challenged this view, showing that with sufficient competition and in the absence of transaction costs, share tenancy will be equivalent to competitive labor markets and therefore efficient. He also showed that in the presence of transaction costs, share-contracting may be preferred to either wage contracts or rent contracts, due to the mitigation of labor shirking and the provision of risk sharing. 
Joseph Stiglitz (1974, 1988) suggested that if share tenancy is only a labor contract, then it is only pairwise-efficient, and that land-to-the-tiller reform would improve social efficiency by removing the necessity for labor contracts in the first place. Reid (1973), Murrell (1983), Roumasset (1995) and Allen and Lueck (2004) provided transaction cost theories of share-contracting, wherein tenancy is more of a partnership than a labor contract and both landlord and tenant provide multiple inputs. It has also been argued that the sharecropping institution can be explained by factors such as informational asymmetry (Hallagan, 1978; Allen, 1982; Muthoo, 1998), moral hazard (Reid, 1976; Eswaran and Kotwal, 1985; Ghatak and Pandey, 2000), intertemporal discounting (Roy and Serfes, 2001), price fluctuations (Sen, 2011) or limited liability (Shetty, 1988; Basu, 1992; Sengupta, 1997; Ray and Singh, 2001).
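Marshall's footnote argument — that a tenant who keeps only a share of the output supplies too little labor — can be sketched with a toy model. The production function f(L) = A·√L, the parameter values, and the function name below are illustrative assumptions, not taken from any of the cited papers:

```python
def tenant_labor(share, A=100.0, wage=1.0):
    """Labor chosen by a tenant who keeps `share` of output f(L) = A*sqrt(L)
    and bears an opportunity cost of `wage` per unit of labor.
    First-order condition: share * A / (2*sqrt(L)) = wage."""
    return (share * A / (2 * wage)) ** 2

efficient = tenant_labor(1.0)    # a fixed-rent tenant keeps the whole marginal product
half_share = tenant_labor(0.5)   # a 50/50 share contract, Marshall's case
print(efficient, half_share)     # 2500.0 625.0
```

Under these assumptions the 50% share contract elicits only (1/2)² = 1/4 of the efficient labor input — Marshall's inefficiency. Cheung's rebuttal is that with sufficient competition the landlord can stipulate the labor input in the contract, restoring efficiency.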
Technology
Agriculture_2
null
445934
https://en.wikipedia.org/wiki/Leghorn%20chicken
Leghorn chicken
The Leghorn, or Livornese, is a breed of chicken originating in Tuscany, in central Italy. Birds were first exported to North America in 1828 from the Tuscan port city of Livorno, on the western coast of Italy. They were initially called "Italians", but by 1865 the breed was known as "Leghorn", the traditional anglicisation of "Livorno". The breed was introduced to Britain from the United States in 1870. White Leghorns are commonly used as layer chickens in many countries of the world. Other Leghorn varieties are less common. History The origins of the Leghorn are not clear; it appears to derive from light breeds originating in rural Tuscany. The name comes from Leghorn, the traditional anglicisation of Livorno, the Tuscan port from which the first birds were exported to North America. The date of the first exports is variously reported as 1828, "about 1830" and 1852. They were initially known as "Italians"; they were first referred to as "Leghorns" in 1865, in Worcester, Massachusetts. The Leghorn was included in the American Standard of Perfection in 1874, with three colours: black, white and brown (light and dark). Rose comb light and dark brown were added in 1883, and rose comb white in 1886. Single comb buff and silver followed in 1894, and red, black-tailed red, and Columbian in 1929. In 1981 rose comb black, buff, silver, and golden duckwing were added. The breed was first introduced to Britain from the United States in 1870, and from there re-exported to Italy. White Leghorns that had won first prize at the 1868 New York Show were imported to Britain in 1870, and brown Leghorns from 1872. These birds were small, not exceeding in weight; weight was increased by cross-breeding with Minorca and Malay stock. Pyle Leghorns were first bred in Britain in the 1880s; gold and silver duckwings originated there a few years later, from crosses with Phoenix or Japanese Yokohama birds. Buff Leghorns were first seen in Denmark in 1885, and in England in 1888. 
Characteristics In Italy, where the Livorno breed standard is recent, ten colour varieties are recognised. There is a separate Italian standard for the German Leghorn variety, the Italiana (German: Italiener). The Fédération française des volailles (the French poultry federation) divides the breed into four types: the American white, the English white, the old type (golden-salmon) and the modern type, for which seventeen colour variants are listed for full-size birds, and fourteen for bantams; it also recognises an autosexing variety, the Cream Legbar. Both the American Poultry Association and the American Bantam Association recognize a number of Leghorn varieties including white, red, black-tailed red, light brown, dark brown, black, buff, Columbian, buff Columbian, barred, and silver. In Britain, the Leghorn Club recognises eighteen colours: golden duckwing, silver duckwing, partridge, brown, buff, exchequer, Columbian, pyle, white, black, blue, mottled, cuckoo, blue-red, lavender, red, crele, and buff Columbian. Most Leghorns have single combs; a rose comb is permitted in some countries, but not in Italy. The legs are bright yellow, and the ear-lobes white. The Italian standard gives a weight range of for cocks, for hens. According to the British standard, fully grown Leghorn cocks weigh , hens ; cockerels weigh and pullets ; for bantams the maximum weight is for cocks and for hens. Ring size is for cocks, for hens. Use Leghorns are good layers of white eggs, laying an average of 280 per year and sometimes reaching 300–320, with a weight of at least . White Leghorns have been much used to create highly productive egg-laying hybrids for commercial and industrial operations.
Biology and health sciences
Chickens
Animals
445974
https://en.wikipedia.org/wiki/Callitropsis%20nootkatensis
Callitropsis nootkatensis
Callitropsis nootkatensis, formerly known as Cupressus nootkatensis (syn. Xanthocyparis nootkatensis, Chamaecyparis nootkatensis), is a species of tree in the cypress family native to the coastal regions of northwestern North America. This species goes by many common names, including Nootka cypress, yellow cypress, Alaska cypress, Nootka cedar, yellow cedar, Alaska cedar, and Alaska yellow cedar. The specific epithet nootkatensis refers to the species' occurrence in the area of Nootka Sound on the west coast of Vancouver Island, Canada. Both locations are named for the older European name Nootka, given to the Nuu-chah-nulth First Nation. Description Callitropsis nootkatensis is an evergreen conifer growing up to tall, exceptionally , with diameters up to . The bark is thin, smooth and purplish when young, turning flaky and gray. The branches are commonly pendulous, with foliage in flat sprays and dark green scale-leaves measuring long. The cones, maturing biennially, have 4 (occasionally 6) scales, and resemble the cones of Cupressus lusitanica (another species which can show foliage in flat sprays), except being somewhat smaller, typically in diameter; each scale has a pointed triangular bract about 1.5–2 mm long, again similar to other Cupressus and unlike the crescent-shaped, non-pointed bract on the scales of Chamaecyparis cones. The winged seeds are small, thus dispersing at a close range; additionally, only a small percentage are viable. The Caren Range on the west coast of British Columbia is home to the oldest Nootka cypress specimens in the world, with one specimen found to be 1,834 years old; some specimens may be over 3,000 years old. Callitropsis nootkatensis is one of the parents of the hybrid Leyland cypress; the other parent, Monterey cypress (Hesperocyparis macrocarpa), was also considered to be in the genus Cupressus, but in the North American Hesperocyparis clade, which has generally been found to be phylogenetically closer to C. 
nootkatensis than the Old World clade Cupressus sensu stricto. Taxonomy The species was first described in the genus Cupressus, as Cupressus nootkatensis, in 1824, based on a specimen collected "ad Sinum Nootka dictum" ("at the bay called Nootka"). It was transferred to Chamaecyparis in 1841 on the basis of its foliage being in flattened sprays, as in other Chamaecyparis, but unlike most (though not all) other Cupressus species. However, this placement does not fit with the morphology and phenology of the cones, which are far more like those of Cupressus, maturing in two years rather than one. Genetic evidence, published by Gadek et al., strongly supported its return to Cupressus and exclusion from Chamaecyparis. Farjon et al. (2002) transferred it to a new genus Xanthocyparis, together with the newly discovered Vietnamese golden cypress (Xanthocyparis vietnamensis); this species is remarkably similar to Nootka cypress, and the treatment has many arguments in its favour: while the two are not related to Chamaecyparis, neither do they fit fully in Cupressus despite the many similarities. Little et al. confirmed this relationship with further evidence and pointed out that an earlier nomenclatural combination in the genus Callitropsis existed, as Callitropsis nootkatensis (D.Don) Oerst., published in 1864 but overlooked or ignored by subsequent authors. Little et al. therefore synonymised Xanthocyparis with Callitropsis, the correct name for these species under the ICBN when treated in a distinct genus. The name Xanthocyparis has since been proposed for conservation, and the 2011 International Botanical Congress followed that recommendation. In 2010, Mao et al. performed a more detailed molecular analysis and placed Nootka cypress back in Cupressus. This was disputed, as the tree would compose a monophyletic subgenus, but the Gymnosperm Database suggested that it could comprise a monotypic genus as Callitropsis nootkatensis. In 2021, a molecular study by Stull et al. 
found the species to indeed belong to the distinct genus Callitropsis and recovered this as the sister genus to Hesperocyparis. The clade comprising both was found to be sister to Xanthocyparis (containing only the Vietnamese golden cypress), and the clade containing the three genera was found to be sister to a clade containing Juniperus and Cupressus sensu stricto. Distribution and habitat The species grows in moist areas of coastal mountains of the Pacific Northwest, including those of the Cascades, from the Kenai Peninsula in Alaska to the Klamath Mountains in northernmost California. It can be found at elevations higher than those reached by Thuja plicata (western redcedar), sometimes in a krummholz form, and even occupying very rocky sites (near the California-Oregon border). It can be found at elevations of in Southeast Alaska and between from coastal British Columbia into Oregon. Isolated groves near Nelson, British Columbia, and John Day, Oregon, may be the descendants of local populations dating to the Last Glacial Period. Ecology The tree benefits from annual precipitation exceeding , particularly in deep snow though with temperatures not often dropping below . Snow tends not to break the flexible branches. It is shade tolerant, but less so than associated mountain hemlock (Tsuga mertensiana) and Pacific silver fir (Abies amabilis), and grows slowly. Anti-fungal chemicals within the tree aid in its longevity. It is also rarely afflicted by insects, although is susceptible to heart rot. In Alaska, where the tree is primarily referred to as "yellow cedar", extensive research has been conducted into large-scale die-offs of yellow cedar stands. These studies have concluded that the tree has depended upon heavy coastal snowpacks to insulate its shallow roots from cold Arctic winters. The impacts of climate change have resulted in thinner, less-persistent snowpacks, in turn causing increased susceptibility to freeze damage. 
This mortality has been observed over 7% of the species range, covering approximately 10 degrees of latitude from northern southeast Alaska to southern British Columbia. Substantial future mortality is likely due to warming temperatures and decreasing snowpacks. The U.S. Fish & Wildlife Service is reviewing whether to designate the species as threatened or endangered. Uses The Nootka cypress is used extensively by the indigenous peoples of the Pacific Northwest Coast, along with another cypress, Thuja plicata (western redcedar). While the wood and inner bark of western redcedar was preferred for larger projects like houses and canoes, the stronger inner bark of Nootka cypress was used for smaller vessels and utensils, including canoe paddles and baskets, as well as thread for clothing and blankets. This species has been considered to be one of the finest timber trees in the world and has been exported to China during the last century. The wood has been used for flooring, interior finish and shipbuilding. The tree has extreme heartwood qualities that make it one of the most desired sources of firewood on the West Coast. It burns very hot and lasts a long time as embers. A tree can still be used for firewood up to 100 years after its death. Construction The various physical properties of the wood make it an attractive material for both general construction and boatbuilding. Due to its slow growth it is hard and, like other cypress woods, it is durable; it therefore offers good dimensional stability and is resistant to weather, insects, and contact with soil. It works easily with hand or machine tools, turning and carving quite well. It can be fastened with glues, screws, and nails. Nootka cypress's texture, uniform color, and straight grain will take a fine finish. It resists splintering and wears smoothly over time. When fresh cut it has a somewhat unpleasant bitter scent, but when seasoned it has barely any discernible odor, hence its traditional use in face masks. 
Due to its expense, it is used mainly for finished carpentry. Typical uses include exterior siding, shingles, decking, exposed beams, glue-laminated beams, paneling, cabinetry, and millwork. In historic preservation it can be used as a substitute for Thuja plicata (western redcedar) and Taxodium distichum (bald cypress), because of current difficulties in obtaining quality timber of those species, a result of environmental concerns and past over-exploitation, although this applies equally to Nootka cypress. Other uses for Nootka cypress include saunas, and battery containers due to its resistance to acids. Traditionally, paddles, masks, dishes, and bows were made from the wood. Landscaping The drooping branchlets give the tree a graceful weeping appearance. It makes an attractive specimen tree in parks and open spaces. It can also be used as a tall hedge. It will grow in USDA plant hardiness zones 5–9, but can be difficult to grow. Best growth is in light or heavy soil, preferably well drained, and in climates with cool summers. It prefers semi-shade to full sun. It can also be used in bonsai. Under the synonym Xanthocyparis nootkatensis the cultivar C. nootkatensis 'Pendula' has gained the Royal Horticultural Society's Award of Garden Merit. In Indigenous culture A legend amongst the Nootka peoples of the Hesquiaht First Nation tells of the origins of the Nootka cypress. In the legend, a raven encounters three young women drying salmon on the beach. He asks the women if they are afraid of being alone, or of bears, wolves, and other animals. Each woman responded "no". But when asked about owls, the women were indeed afraid of owls. Hearing this, the trickster raven hid in the forests, and made the calls of an owl. The terrified women ran up the mountains, but turned into Nootka cypress trees when they were out of breath. 
According to the Nootka, this is why Nootka cypress grows on the sides of mountains, and also why the bark is silky like a woman's hair, and the young trunk is smooth like a woman's body. In Tlingit culture the story of Natsilane describes how a Nootka cypress was used to carve the world's first killer whale.
Biology and health sciences
Cupressaceae
Plants
445980
https://en.wikipedia.org/wiki/Von%20Neumann%20universe
Von Neumann universe
In set theory and related branches of mathematics, the von Neumann universe, or von Neumann hierarchy of sets, denoted by V, is the class of hereditary well-founded sets. This collection, which is formalized by Zermelo–Fraenkel set theory (ZFC), is often used to provide an interpretation or motivation of the axioms of ZFC. The concept is named after John von Neumann, although it was first published by Ernst Zermelo in 1930. The rank of a well-founded set is defined inductively as the smallest ordinal number greater than the ranks of all members of the set. In particular, the rank of the empty set is zero, and every ordinal has a rank equal to itself. The sets in V are divided into the transfinite hierarchy Vα, called the cumulative hierarchy, based on their rank. Definition The cumulative hierarchy is a collection of sets Vα indexed by the class of ordinal numbers; in particular, Vα is the set of all sets having ranks less than α. Thus there is one set Vα for each ordinal number α. Vα may be defined by transfinite recursion as follows: Let V0 be the empty set: V0 = ∅. For any ordinal number β, let Vβ+1 be the power set of Vβ: Vβ+1 = 𝒫(Vβ). For any limit ordinal λ, let Vλ be the union of all the V-stages so far: Vλ = ⋃β<λ Vβ. A crucial fact about this definition is that there is a single formula φ(α,x) in the language of ZFC that states "the set x is in Vα". The sets Vα are called stages or ranks. The class V is defined to be the union of all the V-stages: V = ⋃α Vα. Rank of a set The rank of a set S is the smallest α such that S ⊆ Vα. In other words, 𝒫(Vα) is the set of sets with rank ≤ α. The stage Vα can also be characterized as the set of sets with rank strictly less than α, regardless of whether α is 0, a successor ordinal, or a limit ordinal: Vα = {x : rank(x) < α}. This gives an equivalent definition of Vα by transfinite recursion. 
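For finite indices the recursion can be carried out literally with nested frozensets. The following sketch (function names are ours, not standard notation) builds the finite stages and computes ranks:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of frozenset s, each as a frozenset."""
    items = list(s)
    return frozenset(frozenset(c) for c in
                     chain.from_iterable(combinations(items, r)
                                         for r in range(len(items) + 1)))

def stage(n):
    """The finite stage V_n: V_0 is empty, V_{n+1} = powerset(V_n)."""
    v = frozenset()
    for _ in range(n):
        v = powerset(v)
    return v

def rank(x):
    """Smallest ordinal greater than the ranks of all members of x."""
    return max((rank(y) + 1 for y in x), default=0)

print([len(stage(n)) for n in range(6)])  # [0, 1, 2, 4, 16, 65536]
print(rank(stage(3)))                     # 3 -- each finite stage V_n has rank n
```

The blow-up is immediate: stage(6) would need 2^65536 elements, so only the first few finite stages can ever be constructed explicitly.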
Substituting the above definition of Vα back into the definition of the rank of a set gives a self-contained recursive definition: the rank of a set is the smallest ordinal strictly greater than the ranks of all of its members. In other words, rank(S) = sup{rank(z) + 1 : z ∈ S}. Finite and low cardinality stages of the hierarchy The first five von Neumann stages V0 to V4 may be visualized as follows. (An empty box represents the empty set. A box containing only an empty box represents the set containing only the empty set, and so forth.) This sequence exhibits tetrational growth. The set V5 contains 2^16 = 65536 elements; the set V6 contains 2^65536 elements, which very substantially exceeds the number of atoms in the known universe; and for any natural number n, the set Vn+1 contains 2 ↑↑ n elements using Knuth's up-arrow notation. So the finite stages of the cumulative hierarchy cannot be written down explicitly after stage 5. The set Vω has the same cardinality as ω. The set Vω+1 has the same cardinality as the set of real numbers. Applications and interpretations Applications of V as models for set theories If ω is the set of natural numbers, then Vω is the set of hereditarily finite sets, which is a model of set theory without the axiom of infinity. Vω+ω is the universe of "ordinary mathematics", and is a model of Zermelo set theory (but not a model of ZF). A simple argument in favour of the adequacy of Vω+ω is the observation that Vω+1 is adequate for the integers, while Vω+2 is adequate for the real numbers, and most other normal mathematics can be built as relations of various kinds from these sets without needing the axiom of replacement to go outside Vω+ω. If κ is an inaccessible cardinal, then Vκ is a model of Zermelo–Fraenkel set theory (ZFC) itself, and Vκ+1 is a model of Morse–Kelley set theory. (Note that every ZFC model is also a ZF model, and every ZF model is also a Z model.) Interpretation of V as the "set of all sets" V is not "the set of all (naive) sets" for two reasons. First, it is not a set; although each individual stage Vα is a set, their union V is a proper class. 
Second, the sets in V are only the well-founded sets. The axiom of foundation (or regularity) demands that every set be well founded and hence in V, and thus in ZFC every set is in V. But other axiom systems may omit the axiom of foundation or replace it by a strong negation (an example is Aczel's anti-foundation axiom). These non-well-founded set theories are not commonly employed, but are still possible to study. A third objection to the "set of all sets" interpretation is that not all sets are necessarily "pure sets", which are constructed from the empty set using power sets and unions. Zermelo proposed in 1908 the inclusion of urelements, from which he constructed a transfinite recursive hierarchy in 1930. Such urelements are used extensively in model theory, particularly in Fraenkel–Mostowski models. Hilbert's paradox The von Neumann universe satisfies the following two properties: 𝒫(x) ∈ V for every set x ∈ V, and ⋃X ∈ V for every subset X ⊆ V. Indeed, if x ∈ V, then x ∈ Vα for some ordinal α. Any stage is a transitive set, hence every y ∈ x is already in Vα, and so every subset of x is a subset of Vα. Therefore, 𝒫(x) ⊆ Vα+1 and 𝒫(x) ∈ Vα+2. For unions of subsets, if X ⊆ V, then for every x ∈ X, let βx be the smallest ordinal for which x ∈ Vβx. Because by assumption X is a set, we can form the limit β = sup{βx : x ∈ X}. The stages are cumulative, and therefore again every x ∈ X is in Vβ. Then every y ∈ ⋃X is also in Vβ, and so ⋃X ⊆ Vβ and ⋃X ∈ Vβ+1. Hilbert's paradox implies that no set with the above properties exists. For suppose V were a set. Then V would be a subset of itself, so ⋃V would belong to V, and so would 𝒫(⋃V). But more generally, if x ∈ V, then x ⊆ ⋃V. Hence, 𝒫(⋃V) ⊆ ⋃V, which is impossible in models of ZFC such as V itself. Interestingly, x is a subset of V if, and only if, x is a member of V. Therefore, we can consider what happens if the union condition is replaced with ⋃x ∈ V for every x ∈ V. In this case, there are no known contradictions, and any Grothendieck universe satisfies the new pair of properties. However, whether Grothendieck universes exist is a question beyond ZFC. 
V and the axiom of regularity The formula V = ⋃αVα is often considered to be a theorem, not a definition. Roitman states (without references) that the realization that the axiom of regularity is equivalent to the equality of the universe of ZF sets to the cumulative hierarchy is due to von Neumann. The existential status of V Since the class V may be considered to be the arena for most of mathematics, it is important to establish that it "exists" in some sense. Since existence is a difficult concept, one typically replaces the existence question with the consistency question, that is, whether the concept is free of contradictions. A major obstacle is posed by Gödel's incompleteness theorems, which effectively imply the impossibility of proving the consistency of ZF set theory in ZF set theory itself, provided that it is in fact consistent. The integrity of the von Neumann universe depends fundamentally on the integrity of the ordinal numbers, which act as the rank parameter in the construction, and the integrity of transfinite induction, by which both the ordinal numbers and the von Neumann universe are constructed. The integrity of the ordinal number construction may be said to rest upon von Neumann's 1923 and 1928 papers. The integrity of the construction of V by transfinite induction may be said to have then been established in Zermelo's 1930 paper. History The cumulative type hierarchy, also known as the von Neumann universe, is claimed by Gregory H. Moore (1982) to be inaccurately attributed to von Neumann. The first publication of the von Neumann universe was by Ernst Zermelo in 1930. Existence and uniqueness of the general transfinite recursive definition of sets was demonstrated in 1928 by von Neumann for both Zermelo-Fraenkel set theory and von Neumann's own set theory (which later developed into NBG set theory). In neither of these papers did he apply his transfinite recursive method to construct the universe of all sets. 
The presentations of the von Neumann universe by Bernays and Mendelson both give credit to von Neumann for the transfinite induction construction method, although not for its application to the construction of the universe of ordinary sets. The notation V is not a tribute to the name of von Neumann. It was used for the universe of sets in 1889 by Peano, the letter V signifying "Verum", which he used both as a logical symbol and to denote the class of all individuals. Peano's notation V was adopted also by Whitehead and Russell for the class of all sets in 1910. The V notation (for the class of all sets) was not used by von Neumann in his 1920s papers about ordinal numbers and transfinite induction. Paul Cohen explicitly attributes his use of the letter V (for the class of all sets) to a 1940 paper by Gödel, although Gödel most likely obtained the notation from earlier sources such as Whitehead and Russell. Philosophical perspectives There are two approaches to understanding the relationship of the von Neumann universe V to ZFC (along with many variations of each approach, and shadings between them). Roughly, formalists will tend to view V as something that flows from the ZFC axioms (for example, ZFC proves that every set is in V). On the other hand, realists are more likely to see the von Neumann hierarchy as something directly accessible to the intuition, and the axioms of ZFC as propositions for whose truth in V we can give direct intuitive arguments in natural language. A possible middle position is that the mental picture of the von Neumann hierarchy provides the ZFC axioms with a motivation (so that they are not arbitrary), but does not necessarily describe objects with real existence.
Mathematics
Set theory
null
446146
https://en.wikipedia.org/wiki/Juniperus%20communis
Juniperus communis
Juniperus communis, the common juniper, is a species of small tree or shrub in the cypress family Cupressaceae. An evergreen conifer, it has the largest geographical range of any woody plant, with a circumpolar distribution throughout the cool temperate Northern Hemisphere. Description Juniperus communis is highly variable in form, ranging from —rarely —tall to a low, often prostrate spreading shrub in exposed locations. It has needle-like leaves in whorls of three; the leaves are green, with a single white stomatal band on the inner surface. It never attains the scale-like adult foliage of other members of the genus. It is dioecious, with male and female cones on separate plants, so requiring wind pollination to transfer pollen from male to female cones. Male trees or shrubs naturally live longer than female ones; a male tree or shrub can live more than 2,000 years. The male cones are yellow, long, and fall soon after shedding their pollen in March–April. The fruit are berry-like cones known as juniper berries. They are initially green, ripening in 18 months to purple-black with a blue waxy coating; they are spherical, diameter, and usually have three (occasionally six) fleshy fused scales, each scale with a single seed. The seeds are dispersed when birds eat the cones, digesting the fleshy scales and passing the hard, unwinged seeds in their droppings. Chemistry The juniper berry oil is composed largely of monoterpene hydrocarbons such as α-pinene, myrcene, sabinene, limonene and β-pinene. Subspecies As is to be expected from its wide range, J. communis is highly variable, with several infraspecific taxa; delimitation between the taxa is still uncertain, with genetic data not matching morphological data well. subsp. communis – Common juniper. Usually an erect shrub or small tree; leaves long; cones small, 5–8 mm, usually shorter than the leaves; found at low to moderate altitude in temperate climates subsp. communis var. 
communis – Europe, most of northern Asia subsp. communis var. depressa – North America, Sierra Nevada in California subsp. communis var. hemisphaerica – Mediterranean mountains subsp. communis var. nipponica – Japan (status uncertain, often treated as J. rigida var. nipponica) subsp. alpina – alpine juniper (syn. J. c. subsp. nana, J. c. var. saxatilis Pallas, J. sibirica Burgsd.). Usually a prostrate ground-hugging shrub; leaves short, 3–8 mm; cones often larger, 7–12 mm, usually longer than the leaves; found in subarctic areas and high altitude alpine zones in temperate areas subsp. alpina var. alpina – Greenland, Europe and Asia subsp. alpina var. megistocarpa – Eastern Canada (doubtfully distinct from var. alpina) subsp. alpina var. jackii – Western North America (doubtfully distinct from var. alpina) Some botanists treat subsp. alpina at the lower rank of variety, in which case the correct name is J. communis var. saxatilis Pallas, though the name J. communis var. montana is also occasionally cited; others, primarily in eastern Europe and Russia, sometimes treat it as a distinct species J. sibirica Burgsd. (syn. J. nana Willd., J. alpina S.F.Gray). Distribution and habitat The species has the largest geographical range of any woody plant, with a circumpolar distribution throughout the cool temperate Northern Hemisphere from the Arctic south in mountains to around 30°N latitude in North America, Europe and Asia. Relict populations can be found in the Atlas Mountains of Africa. J. communis is one of Ireland's longest established plants. Cultivation Juniperus communis is cultivated in the horticulture trade and used as an evergreen ornamental shrub in gardens. 
The following cultivars gained the Royal Horticultural Society's Award of Garden Merit in 1993: Juniperus communis 'Compressa', 'Green Carpet' (prostrate shrub), 'Hibernica' (Irish juniper), and 'Repanda' (prostrate shrub). Other cultivars in use include 'Fontän', 'Green Carpet', 'Hornibrookii', 'Kantarell', 'Repanda', and 'Vemboö'. Uses Crafts It is too small to have any general lumber usage. In Scandinavia, however, juniper wood is used for making containers for storing small quantities of dairy products such as butter and cheese, and also for making wooden butter knives. It was also frequently used for trenails in wooden shipbuilding by shipwrights for its tough properties. In Estonia, juniper wood is valued for its long-lasting and pleasant aroma, the decorative natural structure of its growth rings, and its good physical properties: the slow growth rate of juniper results in dense, strong wood. Various decorative items (often eating utensils) are common in most Estonian handicraft shops and households. According to an old tradition, on Easter Monday Kashubian (northern Poland) boys chase girls, whipping their legs gently with juniper twigs. This is to bring good fortune in love to the chased girls. Juniper wood, especially burl wood, is frequently used to make knife handles for French pocketknives such as the Laguiole. Culinary Its astringent blue-black seed cones, commonly known as juniper berries, are too bitter to eat raw and are usually sold dried and used to flavour meats, sauces, and stuffings. They are generally crushed before use to release their flavour. Since juniper berries have a strong taste, they should be used sparingly. They are generally used to enhance meat with a strong flavour, such as game, including game birds, or tongue. 
The cones are used to flavour certain beers and gin (the word "gin" derives from an Old French word meaning "juniper"). In Finland, juniper is used as a key ingredient in making sahti, a traditional Finnish ale. The Slovak alcoholic beverage borovička and Dutch jenever are also flavoured with juniper berries or their extract. Archaeological evidence suggests that the use of juniper in brewing may date back to at least the early medieval period: juniper remains have been found at Migration Period and early Merovingian sites in southwestern Germany, indicating that it may have been used to flavour beverages such as beer as early as the 3rd to 6th centuries AD. Juniper is used in the traditional farmhouse ales of Norway, Sweden, Finland, Estonia, and Latvia. In Norway, the beer is brewed with a juniper infusion instead of water, while in the other countries the juniper twigs are mainly used as filters to prevent the crushed malts from clogging the outlet of the lauter tun. The use of juniper in farmhouse brewing has been common in much of northern Europe, seemingly for a very long time.
Traditional medicine
Juniper berries have long been used as medicine by many cultures, including the Navajo people. Western American tribes combined the berries of J. communis with Berberis root bark in a herbal tea. Native Americans also used juniper berries as a female contraceptive.
Medicine
Juniper leaves were found to harbor fungi with potent antifungal compounds, including ibrexafungerp, which is now FDA-approved to treat fungal infections.
Biology and health sciences
Cupressaceae
Plants
446150
https://en.wikipedia.org/wiki/Juniperus%20virginiana
Juniperus virginiana
Juniperus virginiana, also known as eastern redcedar, red cedar, Virginian juniper, eastern juniper, red juniper, and other local names, is a species of juniper native to eastern North America from southeastern Canada to the Gulf of Mexico and east of the Great Plains. Farther west it is replaced by the related Juniperus scopulorum (Rocky Mountain juniper) and to the southwest by Juniperus ashei (Ashe juniper). It is not to be confused with Thuja occidentalis (eastern white cedar).
Description
Juniperus virginiana is a dense, slow-growing, coniferous evergreen tree with a conical or subcylindrical crown that may never become more than a bush on poor soil, but is ordinarily from tall, with a short trunk in diameter, rarely to in height and in diameter. The oldest tree reported, from West Virginia, was 940 years old. The bark is reddish-brown, fibrous, and peels off in narrow strips. The leaves are of two types: sharp, spreading needle-like juvenile leaves long, and tightly adpressed scale-like adult leaves long; they are arranged in opposite decussate pairs or occasionally whorls of three. The juvenile leaves are found on young plants up to 3 years old, and as scattered shoots on adult trees, usually in shade. The seed cones are long, berry-like, dark purple-blue with a white wax cover giving an overall sky-blue color (though the wax often rubs off); they contain one to three (rarely up to four) seeds, and mature in 6–8 months from pollination. The juniper berry is an important winter food for many birds, which disperse the wingless seeds. The pollen cones are long and broad, shedding pollen in late winter or early spring. The trees are usually dioecious, with pollen and seed cones on separate trees, though some are monoecious. There are two varieties, which intergrade where they meet:
Juniperus virginiana var. virginiana is called eastern juniper / redcedar.
It is found in eastern North America, from Maine west to southern Ontario and South Dakota, south to northernmost Florida, and southwest into the post oak savannah of east-central Texas. Cones are larger; scale leaves are acute at the apex and the bark is red-brown.
Juniperus virginiana var. silicicola (Small) E.Murray (syn. Sabina silicicola Small, Juniperus silicicola (Small) L.H.Bailey) is known as southern or sand juniper / redcedar. Its variety name means "flint-dweller". Its habitat is along the Atlantic and Gulf coasts from the extreme southeastern corner of Virginia south to central Florida and west to southeast Texas. Cones are smaller; scale leaves are blunt at the apex and the bark is orange-brown. It is treated by some authors at the lower rank of variety, while others treat it as a distinct species.
Ecology
Eastern red cedar is a pioneer species, meaning that it is one of the first trees to repopulate disturbed sites. It is unusually long-lived among pioneer species, with the potential to live over 900 years. It is commonly found in prairies or oak barrens, old pastures, or limestone hills, often along highways and near recent construction sites. It is an alternate host for cedar–apple rust, an economically significant fungal disease of apples, and some management strategies recommend the removal of J. virginiana near apple orchards. Eastern red cedar grows in a wide range of climatic and soil conditions. The tree is extremely tolerant of drought due to its extensive, fibrous root system and reduced leaf area. It can be found on sites ranging from droughty, rocky soils with few nutrients to rich alluvial soils with abundant moisture. However, eastern red cedar is almost never dominant on such rich mesic sites due to intense competition with faster-growing, more shade-tolerant hardwood trees. Outside of its native range it is considered an invasive species, and it can be aggressive even within its range.
It is fire-intolerant, and was previously controlled by periodic wildfires. Low branches near the ground burn and provide a ladder that allows fire to engulf the whole tree. Grasses recover quickly from the low-severity fires characteristic of prairies, and such fires formerly kept the trees at bay. With the urbanization of prairies, the fires have been stopped by roads, plowed fields, and other firebreaks, allowing J. virginiana and other trees to invade. The trees are destructive to grasslands if left unchecked, and are actively being eliminated by cutting and prescribed burning. The trees also burn very readily, and dense populations were blamed for the rapid spread of wildfires in drought-stricken Oklahoma and Texas in 2005 and 2006. On the Great Plains, expanding red cedar populations are altering the plains ecosystem: a majority of the region's bird species are not present in areas where the tree's land cover exceeds 10 percent, and most small mammal species are not present where land cover exceeds 30 percent. Eastern juniper benefits from increased CO2 levels, unlike the grasses with which it competes. Many grasses are C4 plants that concentrate CO2 in their bundle sheaths to increase the efficiency of RuBisCO, the enzyme responsible for carbon fixation in photosynthesis, while junipers are C3 plants that rely on (and may benefit from) the natural CO2 concentrations of the environment, although they are less efficient at fixing CO2 in general. Alterations of prairie ecosystems by J. virginiana include outcompeting forage species in pastureland. The low branches and wide base occupy a significant portion of land area. The thick foliage blocks out most light, so few plants can live under the canopy. The fallen needles raise the pH of the soil, making it more alkaline; alkaline soil binds nutrients such as phosphorus, making them harder for plants to absorb.
However, studies have found that Juniperus virginiana forests that replace grasslands show anywhere from a statistically insignificant decrease to a significant increase in levels of soil nitrogen. J. virginiana forests have higher overall nitrogen use efficiency (NUE), despite the common grassland species Andropogon gerardi having a far higher photosynthetic nitrogen use efficiency (PNUE). The forests store much greater amounts of carbon in both biomass and soil, with most of the additional carbon stored aboveground. There is no significant difference in soil microbial activity.
Cedar waxwings are fond of juniper berries. It takes about 12 minutes for the seeds to pass through the birds' guts, and seeds that have been consumed by this bird have germination rates roughly three times higher than those of seeds the birds did not eat. Many other birds, such as turkeys and bluebirds, along with many mammals, such as rabbits, foxes, raccoons, and coyotes, also consume them. The tree's compact, evergreen foliage makes it favorable for bird nests and as a winter shelter for birds and mammals. Some species of small mammals live exclusively in red cedar forests.
Pollen
The pollen of Juniperus virginiana var. virginiana is a known allergen. The nominate variety is native to eastern North America, north of Mexico, with pollen released at various points in the spring, varying with latitude and elevation.
Uses
The fragrant, finely grained, soft, brittle, very light, pinkish to brownish-red heartwood is very durable, even in contact with soil. Because of its resistance to decay, the wood is often used for fence posts. Moths avoid the aromatic wood, and it is therefore in demand as lining for clothes chests and closets, often called "cedar closets" and "cedar chests." If correctly prepared, excellent English longbows, flatbows, and Native American sinew-backed bows can be made from it. It is marketed as "eastern redcedar" and "aromatic cedar."
The best portions of the heartwood are among the few woods suitable for making pencils; however, the supply had so diminished by the 1940s that incense-cedar wood largely replaced it. Part of the commercially available cedar oil is produced by steam distillation from wood shavings. It contains a wide variety of terpenes; the three major components, alpha-cedrene, thujopsene, and cedrol, constitute more than 60% of the essential oil. The fruits also yield an essential oil, which consists mostly of D-limonene. The oil derived from foliage and twigs has two main constituents, safrole and limonene; a minor component is podophyllotoxin, a non-alkaloid lignan toxin.
Native American tribes have historically used poles of juniper wood to demarcate agreed tribal hunting territories. French traders named Baton Rouge, Louisiana (meaning "red stick"), after the reddish color of these poles. Some nations continue to use the wood ceremonially. The Cahokia Woodhenge series of timber circles, erected by the pre-Columbian Mississippian culture in western Illinois, was constructed of massive logs of eastern juniper. One iteration, Woodhenge III, thought to have been constructed circa 1000 AD, had 48 posts in the circle and a 49th pole in the center. Among many Native American cultures, the smoke of burning eastern juniper is believed to expel evil spirits prior to conducting a ceremony, such as a healing ceremony.
During the Dust Bowl drought of the 1930s, the Prairie States Forest Project encouraged farmers to plant shelterbelts (windbreaks) of eastern juniper throughout the Great Plains of the US. The trees thrive in adverse conditions. Tolerant of both drought and cold, they grow well in rocky, sandy, and clayey soils. Competition between individual trees is minimal, so they can be closely planted in rows, where they still grow to full height, creating a solid windbreak in a short time.
A number of cultivars have been selected for horticulture, including 'Canaertii' (narrow conical; female), 'Corcorcor' (with a dense, erect crown; female), 'Goldspire' (narrow conical with yellow foliage), and 'Kobold' (dwarf). Some cultivars previously listed under this species, notably 'Skyrocket', are actually cultivars of J. scopulorum. In the Arkansas, Missouri, and Oklahoma Ozarks, eastern juniper is commonly used as a Christmas tree.
It is the most widely used wood for making blocks for recorders. It possesses numerous properties that make it uniquely suitable for this, such as good moisture absorption, low expansion when wet (so it does not crack the recorder head), and mild antiseptic properties.
Eastern red cedar is considered effective as a shelterbelt tree and for erosion control. Being coniferous, red cedar has dense evergreen foliage, which makes it an ideal windbreak. The tree's extensive root system allows it to survive drought and helps to retain surrounding topsoil during dry, windy conditions.
Biology and health sciences
Cupressaceae
Plants
12686237
https://en.wikipedia.org/wiki/Incision%20and%20drainage
Incision and drainage
Incision and drainage (I&D), also known as clinical lancing, are minor surgical procedures to release pus or pressure built up under the skin, such as from an abscess, boil, or infected paranasal sinus. The procedure is performed by treating the area with an antiseptic, such as an iodine-based solution, and then making a small incision to puncture the skin using a sterile instrument such as a sharp needle or a pointed scalpel. This allows the pus to escape by draining out through the incision. Good medical practice for large abdominal abscesses requires insertion of a drainage tube, preceded by insertion of a peripherally inserted central catheter (PICC) line to ensure readiness of treatment for possible septic shock.
Adjunct antibiotics
Uncomplicated cutaneous abscesses do not need antibiotics after successful drainage.
In incisional abscesses
For incisional abscesses, it is recommended that incision and drainage be followed by covering the area with a thin layer of gauze and then a sterile dressing. The dressing should be changed and the wound irrigated with normal saline at least twice each day. In addition, it is recommended to administer an antibiotic active against staphylococci and streptococci, preferably vancomycin when there is a risk of methicillin-resistant Staphylococcus aureus. The wound can be allowed to close by secondary intention. Alternatively, if the infection is cleared and healthy granulation tissue is evident at the base of the wound, the edges of the incision may be reapproximated, such as by using butterfly stitches, staples, or sutures.
Biology and health sciences
Surgery
Health
20610136
https://en.wikipedia.org/wiki/Reproductive%20system
Reproductive system
The reproductive system of an organism, also known as the genital system, is the biological system made up of all the anatomical organs involved in sexual reproduction. Many non-living substances such as fluids, hormones, and pheromones are also important accessories to the reproductive system. Unlike most organ systems, the reproductive systems of the two sexes of sexually differentiated species often differ significantly. These differences allow for a combination of genetic material between two individuals, which allows for the possibility of greater genetic fitness of the offspring.
Animals
In mammals, the major organs of the reproductive system include the external genitalia (penis and vulva) as well as a number of internal organs, including the gamete-producing gonads (testicles and ovaries). Diseases of the human reproductive system are very common and widespread, particularly communicable sexually transmitted infections. Most other vertebrates have similar reproductive systems consisting of gonads, ducts, and openings. However, there is a great diversity of physical adaptations as well as reproductive strategies in every group of vertebrates.
Vertebrates
Vertebrates share key elements of their reproductive systems. They all have gamete-producing organs known as gonads. In females, these gonads are connected by oviducts to an opening to the outside of the body, typically the cloaca, but sometimes to a unique pore such as a vagina.
Humans
The human reproductive system usually involves internal fertilization by sexual intercourse. During this process, the male inserts his erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm then travels through the vagina and cervix into the uterus or fallopian tubes for fertilization of the ovum. Upon successful fertilization and implantation, gestation of the fetus then occurs within the female's uterus for approximately nine months; this process is known as pregnancy in humans.
Gestation ends with childbirth, the delivery following labor. Labor consists of the muscles of the uterus contracting, the cervix dilating, and the baby passing out through the vagina (the female genital organ). Human babies and children are nearly helpless and require high levels of parental care for many years. One important type of parental care is the use of the mammary glands in the female breasts to nurse the baby.
The female reproductive system has two functions: the first is to produce egg cells, and the second is to protect and nourish the offspring until birth. The male reproductive system has one function: to produce and deposit sperm. Humans have a high level of sexual differentiation. In addition to differences in nearly every reproductive organ, numerous differences typically occur in secondary sexual characteristics.
Male
The male reproductive system is a series of organs located outside of the body and around the pelvic region of a male that contribute towards the reproduction process. The primary direct function of the male reproductive system is to provide the male sperm for fertilization of the ovum. The major reproductive organs of the male can be grouped into three categories. The first category is sperm production and storage: production takes place in the testicles, which are housed in the temperature-regulating scrotum; immature sperm then travel to the epididymides for development and storage. The second category is the ejaculatory fluid-producing glands, which include the seminal vesicles, prostate, and vasa deferentia. The final category comprises the organs used for copulation and deposition of the spermatozoa (sperm); in the male these include the penis, urethra, vas deferens, and Cowper's gland. Major secondary sex characteristics include larger, more muscular stature, deepened voice, facial and body hair, broad shoulders, and development of an Adam's apple. Important sexual hormones of males are the androgens, particularly testosterone.
The testes release a hormone that controls the development of sperm. This hormone is also responsible for the development of physical characteristics in men such as facial hair and a deep voice.
Female
The human female reproductive system is a series of organs primarily located inside the body and around the pelvic region of a female that contribute towards the reproductive process. It contains three main parts: the vagina, which leads from the vulva (the vaginal opening) to the uterus; the uterus, which holds the developing fetus; and the ovaries, which produce the female's ova. The breasts are involved during the parenting stage of reproduction, but in most classifications they are not considered to be part of the female reproductive system. The vagina meets the outside at the vulva, which also includes the labia, clitoris, and urethra; during intercourse, this area is lubricated by mucus secreted by the Bartholin's glands. The vagina is attached to the uterus through the cervix, while the uterus is attached to the ovaries via the fallopian tubes. Each ovary contains hundreds of ova (singular ovum). Approximately every 28 days, the pituitary gland releases a hormone that stimulates some of the ova to develop and grow. One ovum is released and passes through the fallopian tube into the uterus. Hormones produced by the ovaries prepare the uterus to receive the ovum. The ovum moves through the fallopian tube, where it awaits sperm for fertilization. When fertilization does not occur, the lining of the uterus, called the endometrium, and unfertilized ova are shed each cycle through the process of menstruation. If the ovum is fertilized by sperm, it attaches to the endometrium and embryonic development begins.
Other mammals
Most mammal reproductive systems are similar; however, there are some notable differences between non-human mammals and humans.
For instance, most male mammals have a penis which is stored internally until erect, and most have a penis bone or baculum. Additionally, males and females of most species do not remain continually sexually fertile as humans do, and the females of most mammalian species do not develop permanent mammary glands as human females do. Like humans, most groups of mammals have descended testicles found within a scrotum; however, others have descended testicles that rest on the ventral body wall, and a few groups of mammals, such as elephants, have undescended testicles found deep within their body cavities near their kidneys. The reproductive system of marsupials is unique in that the female has two vaginae, both of which open externally through one orifice but lead to different compartments within the uterus; males usually have a two-pronged penis, which corresponds to the females' two vaginae. Marsupials typically develop their offspring in an external pouch containing teats, to which their newborn young (joeys) attach themselves for post-uterine development. The newborn joey instinctively crawls and wriggles its way to its mother's pouch while clinging to her fur. Marsupials also have a unique prepenial scrotum. In males, the mammalian penis has a similar structure to the corresponding organ of reptiles and of a small percentage of birds, while the scrotum is present only in mammals. In females, the vulva is unique to mammals, with no homologue in birds, reptiles, amphibians, or fish. The clitoris, however, can be found in some reptiles and birds. In place of the uterus and vagina, non-mammal vertebrate groups have an unmodified oviduct leading directly to a cloaca, which is a shared exit-hole for gametes, urine, and feces. Monotremes (i.e. platypus and echidnas), a group of egg-laying mammals, also lack a uterus, vagina, and vulva, and in that respect have a reproductive system resembling that of a reptile.
Dogs
In domestic canines, sexual maturity (puberty) occurs between the ages of 6 and 12 months for both males and females, although this can be delayed until up to two years of age in some large breeds.
Horses
The mare's reproductive system is responsible for controlling gestation, birth, and lactation, as well as her estrous cycle and mating behavior. The stallion's reproductive system is responsible for his sexual behavior and secondary sex characteristics (such as a large crest).
Birds
Male and female birds have a cloaca, an opening through which eggs, sperm, and wastes pass. Intercourse is performed by pressing the lips of the cloacae together; in some species the male has an intromittent organ known as a phallus that is analogous to the mammalian penis. The female lays amniotic eggs in which the young continue to develop after the eggs leave the female's body. Unlike most vertebrates, female birds typically have only one functional ovary and oviduct. As a group, birds, like mammals, are noted for their high level of parental care.
Reptiles
Reptiles are almost all sexually dimorphic, and exhibit internal fertilization through the cloaca. Some reptiles lay eggs while others are ovoviviparous (animals that deliver live young). Reproductive organs are found within the cloaca of reptiles. Most male reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis-like organ, while male snakes and lizards each possess a pair of penis-like organs.
Amphibians
Most amphibians exhibit external fertilization of eggs, typically within the water, though some amphibians such as caecilians have internal fertilization. All have paired, internal gonads, connected by ducts to the cloaca.
Fish
Fish exhibit a wide range of different reproductive strategies. Most fish, however, are oviparous and exhibit external fertilization.
In this process, females use their cloaca to release large quantities of their gametes, called spawn, into the water, and one or more males release "milt", a white fluid containing many sperm, over the unfertilized eggs. Other species of fish are oviparous but have internal fertilization, aided by pelvic or anal fins that are modified into an intromittent organ analogous to the human penis. A small portion of fish species are either viviparous or ovoviviparous, and are collectively known as livebearers. Fish gonads are typically pairs of either ovaries or testicles. Most fish are sexually dimorphic, but some species are hermaphroditic or unisexual.
Invertebrates
Invertebrates have an extremely diverse array of reproductive systems; the only commonality may be that they all lay eggs. Aside from cephalopods and arthropods, nearly all other invertebrates are hermaphroditic and exhibit external fertilization.
Cephalopods
All cephalopods are sexually dimorphic and reproduce by laying eggs. Most cephalopods have semi-internal fertilization, in which the male places his gametes inside the female's mantle cavity or pallial cavity to fertilize the ova found in the female's single ovary. Likewise, male cephalopods have only a single testicle. In the females of most cephalopods, the nidamental glands aid in the development of the egg. The "penis" in most unshelled male cephalopods (Coleoidea) is a long and muscular end of the gonoduct used to transfer spermatophores to a modified arm called a hectocotylus, which in turn is used to transfer the spermatophores to the female. In species where the hectocotylus is missing, the "penis" is long, able to extend beyond the mantle cavity, and transfers the spermatophores directly to the female.
Insects
Most insects reproduce oviparously, i.e. by laying eggs. The eggs are produced by the female in a pair of ovaries.
Sperm, produced by the male in one testis or, more commonly, two, is transmitted to the female during mating by means of external genitalia. The sperm is stored within the female in one or more spermathecae. At the time of fertilization, the eggs travel along the oviducts to be fertilized by the sperm and are then expelled from the body ("laid"), in most cases via an ovipositor.
Arachnids
Arachnids may have one or two gonads, which are located in the abdomen. The genital opening is usually located on the underside of the second abdominal segment. In most species, the male transfers sperm to the female in a package, or spermatophore. Complex courtship rituals have evolved in many arachnids to ensure the safe delivery of the sperm to the female. Arachnids usually lay yolky eggs, which hatch into immatures that resemble adults. Scorpions, however, are either ovoviviparous or viviparous, depending on species, and bear live young.
Plants
Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most physically varied and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants, and through his work it was first understood that the pollination process involves both biotic and abiotic interactions.
Fungi
Fungal reproduction is complex, reflecting the differences in lifestyles and genetic makeup within this diverse kingdom of organisms.
It is estimated that a third of all fungi reproduce using more than one method of propagation; for example, reproduction may occur in two well-differentiated stages within the life cycle of a species, the teleomorph and the anamorph. Environmental conditions trigger genetically determined developmental states that lead to the creation of specialized structures for sexual or asexual reproduction. These structures aid reproduction by efficiently dispersing spores or spore-containing propagules.
Biology and health sciences
Reproductive system
null
20610164
https://en.wikipedia.org/wiki/Wasp
Wasp
A wasp is any insect of the narrow-waisted suborder Apocrita of the order Hymenoptera which is neither a bee nor an ant; this excludes the broad-waisted sawflies (Symphyta), which look somewhat like wasps, but are in a separate suborder. The wasps do not constitute a clade, a complete natural group with a single ancestor, as bees and ants are deeply nested within the wasps, having evolved from wasp ancestors. Wasps that are members of the clade Aculeata can sting their prey. The most commonly known wasps, such as yellowjackets and hornets, are in the family Vespidae and are eusocial, living together in a nest with an egg-laying queen and non-reproducing workers. Eusociality is favoured by the unusual haplodiploid system of sex determination in Hymenoptera, as it makes sisters exceptionally closely related to each other. However, the majority of wasp species are solitary, with each adult female living and breeding independently. Females typically have an ovipositor for laying eggs in or near a food source for the larvae, though in the Aculeata the ovipositor is often modified instead into a sting used for defense or prey capture. Wasps play many ecological roles. Some are predators or pollinators, whether to feed themselves or to provision their nests. Many, notably the cuckoo wasps, are kleptoparasites, laying eggs in the nests of other wasps. Many of the solitary wasps are parasitoidal, meaning they lay eggs on or in other insects (any life stage from egg to adult) and often provision their own nests with such hosts. Unlike true parasites, the wasp larvae eventually kill their hosts. Solitary wasps parasitize almost every pest insect, making wasps valuable in horticulture for biological pest control of species such as whitefly in tomatoes and other crops. Wasps first appeared in the fossil record in the Jurassic, and diversified into many surviving superfamilies by the Cretaceous. 
They are a successful and diverse group of insects with tens of thousands of described species; wasps have spread to all parts of the world except for the polar regions. The largest social wasp is the Asian giant hornet, at up to in length; among the largest solitary wasps is a group of species known as tarantula hawks, along with the giant scoliid of Indonesia (Megascolia procer). The smallest wasps are solitary parasitoid wasps in the family Mymaridae, including the world's smallest known insect, with a body length of only , and the smallest known flying insect, only long. Wasps have appeared in literature from Classical times, as the eponymous chorus of old men in Aristophanes' 422 BC comedy The Wasps, and in science fiction from H. G. Wells's 1904 novel The Food of the Gods and How It Came to Earth, featuring giant wasps with three-inch-long stings. The name 'Wasp' has been used for many warships and other military equipment. Taxonomy and phylogeny Paraphyletic grouping The wasps are a cosmopolitan paraphyletic grouping of hundreds of thousands of species, consisting of the narrow-waisted clade Apocrita without the ants and bees. The Hymenoptera also contain the somewhat wasplike but unwaisted Symphyta, the sawflies. The term wasp is sometimes used more narrowly for members of the Vespidae, which includes several eusocial wasp lineages, such as yellowjackets (the genera Vespula and Dolichovespula), hornets (genus Vespa), and members of the subfamily Polistinae. Fossils Hymenoptera in the form of Symphyta (Xyelidae) first appeared in the fossil record in the Lower Triassic. Apocrita, wasps in the broad sense, appeared in the Jurassic, and had diversified into many of the extant superfamilies by the Cretaceous; they appear to have evolved from the Symphyta. Fig wasps with modern anatomical features first appeared in the Lower Cretaceous of the Crato Formation in Brazil, some 65 million years before the first fig trees. 
The Vespidae include the extinct genus Palaeovespa, seven species of which are known from the Eocene rocks of the Florissant fossil beds of Colorado and from fossilised Baltic amber in Europe. Also found in Baltic amber are crown wasps of the genus Electrostephanus. Diversity Wasps are a diverse group, estimated at well over a hundred thousand described species around the world, and a great many more as yet undescribed. For example, almost every one of some 1000 species of tropical fig trees has its own specific fig wasp (Chalcidoidea) that has co-evolved with it and pollinates it. Many wasp species are parasitoids; the females deposit eggs on or in a host arthropod on which the larvae then feed. Some larvae start off as parasitoids, but convert at a later stage to consuming the plant tissues that their host is feeding on. In other species, the eggs are laid directly into plant tissues and form galls, which protect the developing larvae from predators, but not necessarily from other parasitic wasps. In some species, the larvae are predatory themselves; the wasp eggs are deposited in clusters of eggs laid by other insects, and these are then consumed by the developing wasp larvae. The largest social wasp is the Asian giant hornet, at up to in length. The various tarantula hawk wasps are of a similar size and can overpower a spider many times their own weight and move it to their burrow; their sting is excruciatingly painful to humans. The solitary giant scoliid, Megascolia procer, with a wingspan of 11.5 cm, has subspecies in Sumatra and Java; it is a parasitoid of the Atlas beetle Chalcosoma atlas. The female giant ichneumon wasp Megarhyssa macrurus is long including its very long but slender ovipositor, which is used for boring into wood and inserting eggs. 
The smallest wasps are solitary parasitoid wasps in the family Mymaridae, including the world's smallest known insect, Dicopomorpha echmepterygis (139 micrometres long) and Kikiki huna with a body length of only 158 micrometres, the smallest known flying insect. There are estimated to be 100,000 species of ichneumonoid wasps in the families Braconidae and Ichneumonidae. These are almost exclusively parasitoids, mostly using other insects as hosts. Another family, the Pompilidae, is a specialist parasitoid of spiders. Some wasps are even parasitoids of parasitoids; the eggs of Euceros are laid beside lepidopteran larvae and the wasp larvae feed temporarily on their haemolymph, but if a parasitoid emerges from the host, the hyperparasites continue their life cycle inside the parasitoid. Parasitoids maintain their extreme diversity through narrow specialism. In Peru, 18 wasp species were found living on 14 fly species in only two species of Gurania climbing squash. Sociality Social wasps Of the dozens of extant wasp families, only the family Vespidae contains social species, primarily in the subfamilies Vespinae and Polistinae. With their powerful stings and conspicuous warning coloration, often in black and yellow, social wasps are frequent models for Batesian mimicry by non-stinging insects, and are themselves involved in mutually beneficial Müllerian mimicry of other distasteful insects including bees and other wasps. All species of social wasps construct their nests using some form of plant fiber (mostly wood pulp) as the primary material, though this can be supplemented with mud, plant secretions (e.g., resin), and secretions from the wasps themselves; multiple fibrous brood cells are constructed, arranged in a honeycombed pattern, and often surrounded by a larger protective envelope. Wood fibres are gathered from weathered wood, softened by chewing and mixing with saliva. 
The placement of nests varies from group to group; yellow jackets such as Dolichovespula media and D. sylvestris prefer to nest in trees and shrubs; Protopolybia exigua attaches its nests on the underside of leaves and branches; Polistes erythrocephalus chooses sites close to a water source. Other wasps, like Agelaia multipicta and Vespula germanica, like to nest in cavities that include holes in the ground, spaces under homes, wall cavities or in lofts. While most species of wasps have nests with multiple combs, some species, such as Apoica flavissima, only have one comb. The length of the reproductive cycle depends on latitude; Polistes erythrocephalus, for example, has a much longer (up to 3 months longer) cycle in temperate regions. Solitary wasps The vast majority of wasp species are solitary insects. Having mated, the adult female forages alone and if it builds a nest, does so for the benefit of its own offspring. Some solitary wasps nest in small groups alongside others of their species, but each is involved in caring for its own offspring (except for such actions as stealing other wasps' prey or laying in other wasps' nests). There are some species of solitary wasp that build communal nests, each insect having its own cell and providing food for its own offspring, but these wasps do not adopt the division of labour and the complex behavioural patterns adopted by eusocial species. Adult solitary wasps spend most of their time in preparing their nests and foraging for food for their young, mostly insects or spiders. Their nesting habits are more diverse than those of social wasps. Many species dig burrows in the ground. Mud daubers and pollen wasps construct mud cells in sheltered places. Potter wasps similarly build vase-like nests from mud, often with multiple cells, attached to the twigs of trees or against walls. 
Predatory wasp species normally subdue their prey by stinging it, and then either lay their eggs on it, leaving it in place, or carry it back to their nest where an egg may be laid on the prey item and the nest sealed, or several smaller prey items may be deposited to feed a single developing larva. Apart from providing food for their offspring, no further maternal care is given. Members of the family Chrysididae, the cuckoo wasps, are kleptoparasites and lay their eggs in the nests of unrelated host species. Biology Anatomy Like all insects, wasps have a hard exoskeleton which protects their three main body parts, the head, the mesosoma (including the thorax and the first segment of the abdomen) and the metasoma. There is a narrow waist, the petiole, joining the first and second segments of the abdomen. The two pairs of membranous wings are held together by small hooks and the forewings are larger than the hind ones; in some species, the females have no wings. In females there is usually a rigid ovipositor which may be modified for injecting venom, piercing or sawing. It either extends freely or can be retracted, and may be developed into a stinger for both defence and for paralysing prey. In addition to their large compound eyes, wasps have several simple eyes known as ocelli, which are typically arranged in a triangle just forward of the vertex of the head. Wasps possess mandibles adapted for biting and cutting, like those of many other insects, such as grasshoppers, but their other mouthparts are formed into a suctorial proboscis, which enables them to drink nectar. The larvae of wasps resemble maggots, and are adapted for life in a protected environment; this may be the body of a host organism or a cell in a nest, where the larva either eats the provisions left for it or, in social species, is fed by the adults. Such larvae have soft bodies with no limbs, and have a blind gut (presumably so that they do not foul their cell). 
Diet Adult solitary wasps mainly feed on nectar, but the majority of their time is taken up by foraging for food for their carnivorous young, mostly insects or spiders. Apart from providing food for their larval offspring, no maternal care is given. Some wasp species provide food for the young repeatedly during their development (progressive provisioning). Others, such as potter wasps (Eumeninae) and sand wasps (Ammophila, Sphecidae), repeatedly build nests which they stock with a supply of immobilised prey such as one large caterpillar, laying a single egg in or on its body, and then sealing up the entrance (mass provisioning). Predatory and parasitoidal wasps subdue their prey by stinging it. They hunt a wide variety of prey, mainly other insects (including other Hymenoptera), both larvae and adults. The Pompilidae specialize in catching spiders to provision their nests. Some social wasps are omnivorous, feeding on fallen fruit, nectar, and carrion such as dead insects. Adult male wasps sometimes visit flowers to obtain nectar. Some wasps, such as Polistes fuscatus, commonly return to locations where they previously found prey to forage. In many social species, the larvae exude copious amounts of salivary secretions that are avidly consumed by the adults. These include both sugars and amino acids, and may provide essential protein-building nutrients that are otherwise unavailable to the adults (who cannot digest proteins). Sex determination In wasps, as in other Hymenoptera, sex is determined by a haplodiploid system, which means that females are unusually closely related to their sisters, enabling kin selection to favour the evolution of eusocial behaviour. Females are diploid, meaning that they have 2n chromosomes and develop from fertilized eggs. Males, called drones, have a haploid (n) number of chromosomes and develop from an unfertilized egg. 
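The relatedness asymmetry behind this kin-selection argument can be checked with a short Monte Carlo sketch (illustrative only; the function names and allele labels are invented, and loci are treated as independent): under haplodiploidy, two full sisters share on average three quarters of their genome, against one half for ordinary diploid full siblings.

```python
import random

def haplodiploid_sisters(n_loci=100_000, seed=1):
    """Estimate the expected fraction of the genome shared by two full
    sisters under haplodiploidy: the father is haploid, so his entire
    contribution is identical in both daughters."""
    random.seed(seed)
    shared = 0.0
    for _ in range(n_loci):
        # Mother is diploid: each daughter inherits one of her two alleles.
        a = random.choice(("M1", "M2"))
        b = random.choice(("M1", "M2"))
        # The paternal allele is always shared (father has only one copy),
        # so half the genome matches for certain; the maternal half
        # matches with probability 1/2.
        shared += (1.0 + (a == b)) / 2.0
    return shared / n_loci

def diploid_siblings(n_loci=100_000, seed=1):
    """Same estimate for ordinary diploid full siblings (both parents diploid)."""
    random.seed(seed)
    shared = 0.0
    for _ in range(n_loci):
        a = (random.choice(("M1", "M2")), random.choice(("P1", "P2")))
        b = (random.choice(("M1", "M2")), random.choice(("P1", "P2")))
        shared += ((a[0] == b[0]) + (a[1] == b[1])) / 2.0
    return shared / n_loci

print(round(haplodiploid_sisters(), 2))  # ≈ 0.75
print(round(diploid_siblings(), 2))      # ≈ 0.50
```

The 0.75 figure exceeds the 0.5 a female would share with her own offspring, which is the quantitative core of the argument that haplodiploidy favours sisters helping to raise sisters.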
Wasps store sperm inside their body and control its release for each individual egg as it is laid; if a female wishes to produce a male egg, she simply lays the egg without fertilizing it. Therefore, under most conditions in most species, wasps have complete voluntary control over the sex of their offspring. Experimental infection of Muscidifurax uniraptor with the bacterium Wolbachia induced thelytokous reproduction and an inability to produce fertile, viable male offspring. Inbreeding avoidance Females of the solitary wasp parasitoid Venturia canescens can avoid mating with their brothers through kin recognition. In experimental comparisons, the probability that a female will mate with an unrelated male was about twice as high as the chance of her mating with brothers. Female wasps appear to recognize siblings on the basis of a chemical signature carried or emitted by males. Sibling-mating avoidance reduces inbreeding depression that is largely due to the expression of homozygous deleterious recessive mutations. Ecology As pollinators While the vast majority of wasps play no role in pollination, a few species can effectively transport pollen and pollinate several plant species. Since wasps generally do not have a fur-like covering of soft hairs and a special body part for pollen storage (pollen basket) as some bees do, pollen does not stick to them well. However, it has been shown that even without hairs, several wasp species are able to effectively transport pollen, thereby contributing to the potential pollination of several plant species. Pollen wasps in the subfamily Masarinae gather nectar and pollen in a crop inside their bodies, rather than on body hairs like bees, and pollinate flowers of Penstemon and the water leaf family, Hydrophyllaceae. The Agaonidae (fig wasps) are the only pollinators of nearly 1000 species of figs, and thus are crucial to the survival of their host plants. 
Since the wasps are equally dependent on their fig trees for survival, the coevolved relationship is fully mutualistic. As parasitoids Most solitary wasps are parasitoids. As adults, those that do feed typically only take nectar from flowers. Parasitoid wasps are extremely diverse in habits, many laying their eggs in inert stages of their host (egg or pupa), sometimes paralysing their prey by injecting it with venom through their ovipositor. They then insert one or more eggs into the host or deposit them upon the outside of the host. The host remains alive until the parasitoid larvae pupate or emerge as adults. The Ichneumonidae are specialized parasitoids, often of Lepidoptera larvae deeply buried in plant tissues, which may be woody. For this purpose, they have exceptionally long ovipositors; they detect their hosts by smell and vibration. Some of the largest species, including Rhyssa persuasoria and Megarhyssa macrurus, parasitise horntails, large sawflies whose adult females also have impressively long ovipositors. Some parasitic species have a mutualistic relationship with a polydnavirus that weakens the host's immune system and replicates in the oviduct of the female wasp. One family of chalcidoid wasps, the Eucharitidae, has specialized as parasitoids of ants, most species hosted by one genus of ant. Eucharitids are among the few parasitoids that have been able to overcome ants' effective defences against parasitoids. As parasites Many species of wasp, including especially the cuckoo or jewel wasps (Chrysididae), are kleptoparasites, laying their eggs in the nests of other wasp species to exploit their parental care. Most such species attack hosts that provide provisions for their immature stages (such as paralyzed prey items), and they either consume the provisions intended for the host larva, or wait for the host to develop and then consume it before it reaches adulthood. 
An example of a true brood parasite is the paper wasp Polistes sulcifer, which lays its eggs in the nests of other paper wasps (specifically Polistes dominula), and whose larvae are then fed directly by the host. Sand wasps Ammophila often save time and energy by parasitising the nests of other females of their own species, either kleptoparasitically stealing prey, or as brood parasites, removing the other female's egg from the prey and laying their own in its place. According to Emery's rule, social parasites, especially among insects, tend to parasitise species or genera to which they are closely related. For example, the social wasp Dolichovespula adulterina parasitises other members of its genus such as D. norwegica and D. arenaria. As predators Many wasp lineages, including those in the families Vespidae, Crabronidae, Sphecidae, and Pompilidae, attack and sting prey items that they use as food for their larvae; while Vespidae usually macerate their prey and feed the resulting bits directly to their brood, most predatory wasps paralyze their prey and lay eggs directly upon the bodies, and the wasp larvae consume them. Apart from collecting prey items to provision their young, many wasps are also opportunistic feeders, and will suck the body fluids of their prey. Although vespid mandibles are adapted for chewing and they appear to be feeding on the organism, they are often merely macerating it into submission. The impact of the predation of wasps on economic pests is difficult to establish. The roughly 140 species of beewolf (Philanthinae) hunt bees, including honeybees, to provision their nests; the adults feed on nectar and pollen. As models for mimics With their powerful stings and conspicuous warning coloration, social wasps are the models for many species of mimic. 
Two common cases are Batesian mimicry, where the mimic is harmless and is essentially bluffing, and Müllerian mimicry, where the mimic is also distasteful, and the mimicry can be considered mutual. Batesian mimics of wasps include many species of hoverfly and the wasp beetle. Many species of wasp are involved in Müllerian mimicry, as are many species of bee. As prey While wasp stings deter many potential predators, bee-eaters (in the bird family Meropidae) specialise in eating stinging insects, making aerial sallies from a perch to catch them, and removing the venom from the stinger by repeatedly brushing the prey firmly against a hard object, such as a twig. The honey buzzard attacks the nests of social hymenopterans, eating wasp larvae; it is the only known predator of the dangerous Asian giant hornet or "yak-killer" (Vespa mandarinia). Likewise, roadrunners are the only real predators of tarantula hawk wasps. Relationship with humans As pests Social wasps are considered pests when they become excessively common, or nest close to buildings. People are most often stung in late summer and early autumn, when wasp colonies stop breeding new workers; the existing workers search for sugary foods and are more likely to come into contact with humans. Wasp nests made in or near houses, such as in roof spaces, can present a danger as the wasps may sting if people come close to them. Stings are usually painful rather than dangerous, but in rare cases, people may suffer life-threatening anaphylactic shock. In horticulture Some species of parasitic wasp, especially in groups such as Aphelinidae, Braconidae, Mymaridae, and Trichogrammatidae, are exploited commercially to provide biological control of insect pests. One of the first species to be used was Encarsia formosa, a parasitoid of a range of species of whitefly. It entered commercial use in the 1920s in Europe, was overtaken by chemical pesticides in the 1940s, and again received interest from the 1970s. 
Encarsia is being tested in greenhouses to control whitefly pests of tomato and cucumber, and to a lesser extent of aubergine (eggplant), flowers such as marigold, and strawberry. Several species of parasitic wasp are natural predators of aphids and can help to control them. For instance, Aphidius matricariae is used to control the peach-potato aphid. In sport Wasps RFC was an English professional rugby union team originally based in London but later playing in Coventry; the name dates from 1867, a time when names of insects were fashionable for clubs. The club's first kit was black with yellow stripes. The club has an amateur side called Wasps FC. Among the other clubs bearing the name are a basketball club in Wantirna, Australia, and Alloa Athletic F.C., a football club in Scotland. In fashion Wasps have been modelled in jewellery since at least the nineteenth century, when diamond and emerald wasp brooches were made in gold and silver settings. A fashion for wasp-waisted female silhouettes, with sharply cinched waistlines emphasizing the wearer's hips and bust, arose repeatedly in the nineteenth and twentieth centuries. In literature The Ancient Greek playwright Aristophanes wrote the comedy play Σφῆκες (Sphēkes), The Wasps, first put on in 422 BC. The "wasps" are the chorus of old jurors. H. G. Wells made use of giant wasps in his novel The Food of the Gods and How It Came to Earth (1904). Wasp (1957) is a science fiction book by the English writer Eric Frank Russell; it is generally considered Russell's best novel. In Stieg Larsson's book The Girl Who Played with Fire (2006) and its film adaptation, Lisbeth Salander has adopted her kickboxing ringname, "The Wasp", as her hacker handle; she has a wasp tattoo on her neck, indicating her high status among hackers, in contrast to her real-world situation, and suggesting that, like a small but painfully stinging wasp, she could be dangerous. Parasitoidal wasps played an indirect role in the nineteenth-century evolution debate. 
The Ichneumonidae contributed to Charles Darwin's doubts about the nature and existence of a well-meaning and all-powerful Creator, doubts he expressed in an 1860 letter to the American naturalist Asa Gray. In military names With its powerful sting and familiar appearance, the wasp has given its name to many ships, aircraft and military vehicles. Nine ships and one shore establishment of the Royal Navy have been named Wasp, the first an 8-gun sloop launched in 1749. Eleven ships of the United States Navy have similarly borne the name Wasp, the first a merchant schooner acquired by the Continental Navy in 1775. The eighth of these, an aircraft carrier, gained two Second World War battle stars, prompting Winston Churchill to remark "Who said a Wasp couldn't sting twice?" In the Second World War, a German self-propelled howitzer was named Wespe, while the British developed the Wasp flamethrower from the Bren Gun Carrier. In aerospace, the Westland Wasp was a military helicopter developed in England in 1958 and used by the Royal Navy and other navies. The AeroVironment Wasp III is a miniature UAV developed for United States Air Force special operations.
Biology and health sciences
Hymenoptera
https://en.wikipedia.org/wiki/Ejaculation
Ejaculation
Ejaculation is the discharge of semen (the ejaculate; normally containing sperm) from the penis through the urethra. It is the final stage and natural objective of male sexual stimulation, and an essential component of natural conception. After forming an erection, many men emit pre-ejaculatory fluid during stimulation prior to ejaculating. Ejaculation involves involuntary contractions of the pelvic floor and is normally linked with orgasm. It is a normal part of male human sexual development. It can occur spontaneously during sleep (a nocturnal emission or "wet dream"). In rare cases, ejaculation occurs because of prostatic disease. Anejaculation is the condition of being unable to ejaculate. Dysejaculation is an ejaculation that is painful or uncomfortable. Retrograde ejaculation is the backward flow of semen into the bladder rather than out of the urethra. Premature ejaculation happens shortly after initiating sexual activity, and hinders prolonged sexual intercourse. A vasectomy alters the composition of the ejaculate as a form of birth control. Phases Stimulation The normal precursor to ejaculation is sexual arousal of the male, leading to the erection of the penis, though not all arousals or erections lead to ejaculation, and ejaculation does not require erection. Penile sexual stimulation during masturbation or vaginal, anal, oral, manual, or non-penetrative sexual activity may provide the necessary stimulus for a man to achieve orgasm and ejaculation. With regard to intravaginal ejaculation latency, men typically reach orgasm five to seven minutes after the start of penile-vaginal intercourse, taking into account their desire and that of their partners, but 10 minutes is also a common intravaginal ejaculation latency. 
Prolonged stimulation either through foreplay (kissing, petting and direct stimulation of erogenous zones before penetration during intercourse) or stroking (during masturbation) leads to adequate arousal and production of pre-ejaculatory fluid. Infectious agents (including HIV) can be present in pre-ejaculate. Emission phase Once the penis has achieved sufficient stimulation for the man to reach orgasm, ejaculation begins. The initial stage of ejaculation, called emission, is controlled by a reflex in the sympathetic spinal cord. Sperm undergo their final developmental changes within the epididymis, where they are held until being ejaculated. Expulsion phase Ejaculation reaches its peak in the expulsion phase, which involves the discharge of semen from the urethral opening. This ejection is driven by coordinated contractions of the pelvic muscles, including the bulbospongiosus and pubococcygeus muscles. For the semen to be expelled out of the penis, the bladder neck stays shut while the external urethral sphincter is relaxed. These rhythmic contractions are part of the male orgasm under the control of a spinal reflex at the level of the spinal nerves S2–4 via the pudendal nerve. Although the external sphincter and pelvic muscles can be voluntarily controlled, no voluntary control is evident during semen expulsion. The expulsion phase is considered an extension of the emission phase, triggered by reaching a certain level of spinal nerve activation. The typical male orgasm lasts several seconds. Premature ejaculation is when ejaculation occurs before it is desired. Otherwise, if a man is unable to ejaculate after prolonged sexual stimulation in spite of his desire, it is called delayed ejaculation or anorgasmia. An orgasm that is not accompanied by ejaculation is known as a dry orgasm. At the start of orgasm, pulses of semen begin to flow from the urethra, reach a peak of discharge and then diminish in flow. 
The typical orgasm consists of 10 to 15 contractions, although the man may not be consciously aware of so many. After the first contraction, ejaculation continues to completion involuntarily. During this stage ejaculation cannot be stopped. The rate of contractions gradually slows throughout the orgasm. Initial contractions occur on average every 0.6 seconds with an increasing increment of 0.1 seconds per contraction. Contractions of most men proceed at regular rhythmic intervals through their duration. Many men also experience irregular contractions at the end of the orgasm. Ejaculation usually begins during the first or second contraction of orgasm. For most men, the first ejection occurs during the second contraction, which is typically the largest, expelling 40% or more of total semen discharge. After this peak, the quantity of semen emitted by the penis diminishes as the contractions lessen in intensity. The muscle contractions of the orgasm can continue after ejaculation with no additional semen discharge. A small sample study of seven men showed an average of seven spurts of semen followed by an average of 10 more contractions with no semen expelled. This study also found a high correlation between number of spurts of semen and total ejaculate volume, i.e., larger semen volumes resulted from additional pulses of semen rather than larger individual spurts. Alfred Kinsey measured the distance of ejaculation, in "some hundreds" of men. In three-quarters of men tested, ejaculate "is propelled with so little force that the liquid is not carried more than a minute distance beyond the tip of the penis." In contrast to those test subjects, Kinsey noted "In other males the semen may be propelled from a matter of some inches to a foot or two, or even as far as five or six and (rarely) eight feet". Masters and Johnson report ejaculation distance to be no greater than . 
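Taking the contraction-timing figures above at face value (a first interval of 0.6 seconds, lengthening by 0.1 seconds per contraction), a small sketch can tabulate when each contraction would begin; this is an illustrative calculation, not a physiological model, and the function name is invented.

```python
def contraction_times(n=10, first_interval=0.6, increment=0.1):
    """Cumulative onset times (in seconds) of the first n contractions,
    assuming the reported pattern: intervals start at 0.6 s and each
    subsequent interval is 0.1 s longer than the last."""
    times, t, interval = [], 0.0, first_interval
    for _ in range(n):
        t += interval
        times.append(round(t, 1))
        interval += increment
    return times

print(contraction_times(10))
# For 10 contractions the intervals are 0.6 + 0.7 + ... + 1.5 s,
# giving a total elapsed time of 10.5 s.
```

This rough total of about ten seconds for ten contractions is consistent with the statement above that the typical orgasm lasts several seconds.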
During the series of contractions that accompany ejaculation, semen is propelled from the urethra at , close to . Refractory period Most men experience a refractory period immediately following an orgasm, during which they are unable to achieve another erection, and a longer period before they are capable of achieving another ejaculation. During this time a male feels a deep and often pleasurable sense of relaxation, usually in the groin and thighs. The length of the refractory period varies considerably, even for a given individual. Age affects the recovery time, with younger men recovering faster than older men, though not always. Whereas some men have refractory periods of 15 minutes or more, others are able to experience sexual arousal immediately after ejaculation. A short recovery period may allow partners to continue sexual play relatively uninterrupted by ejaculation. Some men may experience their penis becoming hypersensitive to stimulation after ejaculation, which can make sexual stimulation unpleasant even while they may be sexually aroused. Some men are able to achieve multiple orgasms, with or without the typical sequence of ejaculation and refractory period. Some of those men report not noticing refractory periods, or are able to maintain erection by "sustaining sexual activity with a full erection until they passed their refractory time for orgasm when they proceeded to have a second or third orgasm". Volume The force and amount of semen that is ejected during ejaculation varies widely among men, containing between 0.1 and 10 milliliters (for comparison, a teaspoon holds 5 ml and a tablespoon, 15 ml). Adult semen volume is affected by the time that has passed since his previous ejaculation; larger semen volumes develop with longer abstinence. The duration of the stimulation leading to ejaculation can affect the volume. Abnormally low semen volume is known as hypospermia and abnormally high semen volume is called hyperspermia. 
One possible underlying cause of low volume or complete lack of semen is ejaculatory duct obstruction. It is normal for the amount of semen to diminish with age. Quality The number of sperm in an ejaculation varies widely, depending on many factors including the time since the previous ejaculation, age, stress levels, and testosterone. Longer time of sexual stimulation immediately preceding ejaculation can result in higher concentration of sperm. An unusually low sperm count, distinguished from low semen volume, is known as oligospermia, and the absence of any sperm from the semen is termed azoospermia. Development During puberty The first ejaculation in males often occurs about 12 months after the onset of puberty, generally through masturbation or nocturnal emission (wet dreams). This first semen volume is small. The typical ejaculation over the following three months produces less than 1 ml of semen. The semen produced during early puberty is also typically clear. After ejaculation this early semen remains jellylike and, unlike semen from mature males, fails to liquefy. A summary of semen development is shown in Table 1. Most first ejaculations (90%) lack sperm. Of the few early ejaculations that do contain sperm, the majority of sperm (97%) lack motion. The remaining sperm (3%) have abnormal motion. As the male proceeds through puberty, the semen develops mature characteristics with increasing quantities of normal sperm. Semen produced 12 to 14 months after the first ejaculation liquefies after a short period of time. Within 24 months of the first ejaculation, the semen volume and the quantity and characteristics of the sperm match that of adult male semen. Table 1 can be summarised as follows: at the first ejaculation, the ejaculate is jellylike and fails to liquefy; at 12 to 14 months, most samples liquefy, though some remain jellylike; and by 24 months, the ejaculate liquefies within an hour. 
Control from the central nervous system There is a central pattern generator in the spinal cord, made up of groups of spinal interneurons, that is involved in the rhythmic response of ejaculation. This is known as the spinal generator for ejaculation. To map the neuronal activation of the brain during the ejaculatory response, researchers have studied the expression of c-Fos, a proto-oncogene expressed in neurons in response to stimulation by hormones and neurotransmitters. Expression of c-Fos has been observed in the following areas: the medial preoptic area (MPOA), the lateral septum, the bed nucleus of the stria terminalis, the paraventricular nucleus of the hypothalamus (PVN), the ventromedial nucleus of the hypothalamus, the medial amygdala, the ventral premammillary nuclei, the ventral tegmental area, the central tegmental field, the mesencephalic central gray, the peripeduncular nuclei, and the parvocellular subparafascicular nucleus (SPF) within the posterior thalamus. Hands-free ejaculation Although uncommon, some men can achieve ejaculations during masturbation without any manual stimulation. Such men usually do it by tensing and flexing their abdominal and buttocks muscles along with vigorous fantasizing. Others may do it by relaxing the area around the penis, which may result in harder erections especially when extremely aroused. Hands-free ejaculation can also be achieved by prostate stimulation alone, either internally (with the use of sex toys, fingers or performing anal sex or pegging) or externally (such as perineum massages), although prostate orgasms without ejaculation (dry orgasms) are also possible. Perineum pressing and retrograde ejaculation Perineum pressing results in an ejaculation which is purposefully held back by pressing on either the perineum or the urethra to force the seminal fluid to remain inside. In such a scenario, the seminal fluid stays inside the body and goes to the bladder. Some people do this to avoid making a mess by keeping all the semen inside. 
As a medical condition, it is called retrograde ejaculation. Health issues For most men, no detrimental health effects have been determined from ejaculation itself or from frequent ejaculations, though sexual activity in general can have health or psychological consequences. A small fraction of men have a disease called postorgasmic illness syndrome (POIS), which causes severe muscle pain throughout the body and other symptoms immediately following ejaculation. The symptoms last for up to a week. Some doctors speculate that the frequency of POIS "in the population may be greater than has been reported in the academic literature", and that many POIS sufferers are undiagnosed. It is not clear whether frequent ejaculation has any effect on the risk of prostate cancer. Two large studies examining the issue were "Ejaculation Frequency and Subsequent Risk of Prostate Cancer" and "Sexual Factors and Prostate Cancer". These suggest that frequent ejaculation after puberty offers some reduction of the risk of prostate cancer. The US study involving 29,342 US men aged 46 to 81 years suggested that "high ejaculation frequency was related to decreased risk of total prostate cancer". An Australian study involving 1,079 men with prostate cancer and 1,259 healthy men found that "there is evidence that the more frequently men ejaculate between the ages of 20 and 50, the less likely they are to develop prostate cancer". Other animals In mammals and birds, multiple ejaculation is commonplace. During copulation, the two sides of a short-beaked echidna's penis are used sequentially. Alternating between the two sides allows for persistent stimulation to induce ejaculation without impeding the refractory period. In stallions, ejaculation is accompanied by a motion of the tail known as "tail flagging". When a male wolf ejaculates, his final pelvic thrust may be slightly prolonged. A male rhesus monkey usually ejaculates less than 15 seconds after sexual penetration. 
The first report and footage of spontaneous ejaculation in an aquatic mammal was recorded in a wild Indo-Pacific bottlenose dolphin near Mikura Island, Japan, in 2012. In horses, sheep, and cattle, ejaculation occurs within a few seconds, but in boars, it can last for five to thirty minutes. Ejaculation in boars is stimulated when the spiral-shaped penis interlocks with the female's cervix. A mature boar can produce of semen during one ejaculation. In llamas and alpacas, ejaculation occurs continuously during copulation. The semen of male dogs is ejaculated in three separate phases. The last phase of a male canine's ejaculation occurs during the copulatory tie, and contains mostly prostatic fluid.
Biology and health sciences
Animal reproduction
Biology
749012
https://en.wikipedia.org/wiki/Current%20source
Current source
A current source is an electronic circuit that delivers or absorbs an electric current which is independent of the voltage across it. A current source is the dual of a voltage source. The term current sink is sometimes used for sources fed from a negative voltage supply. Figure 1 shows the schematic symbol for an ideal current source driving a resistive load. There are two types. An independent current source (or sink) delivers a constant current. A dependent current source delivers a current which is proportional to some other voltage or current in the circuit. Background [Figure: circuit symbols for a voltage source and a current source, a controlled voltage source and a controlled current source, and a battery of cells and a single cell.] An ideal current source generates a current that is independent of the voltage changes across it. An ideal current source is a mathematical model, which real devices can approach very closely. If the current through an ideal current source can be specified independently of any other variable in a circuit, it is called an independent current source. Conversely, if the current through an ideal current source is determined by some other voltage or current in a circuit, it is called a dependent or controlled current source. Symbols for these sources are shown in Figure 2. The internal resistance of an ideal current source is infinite. An independent current source with zero current is identical to an ideal open circuit. The voltage across an ideal current source is completely determined by the circuit it is connected to. When connected to a short circuit, there is zero voltage and thus zero power delivered.
When connected to a load resistance, the current source manages the voltage in such a way as to keep the current constant; so in an ideal current source the voltage across the source approaches infinity as the load resistance approaches infinity (an open circuit). No physical current source is ideal. For example, no physical current source can operate when applied to an open circuit. There are two characteristics that define a current source in real life. One is its internal resistance and the other is its compliance voltage. The compliance voltage is the maximum voltage that the current source can supply to a load. Over a given load range, it is possible for some types of real current sources to exhibit nearly infinite internal resistance. However, when the current source reaches its compliance voltage, it abruptly stops being a current source. In circuit analysis, a current source having finite internal resistance is modeled by placing the value of that resistance across an ideal current source (the Norton equivalent circuit). However, this model is only useful when a current source is operating within its compliance voltage. Implementations Passive current source The simplest non-ideal current source consists of a voltage source in series with a resistor. The amount of current available from such a source is given by the ratio of the voltage across the voltage source to the resistance of the resistor (Ohm's law; ). This value of current will only be delivered to a load with zero voltage drop across its terminals (a short circuit, an uncharged capacitor, a charged inductor, a virtual ground circuit, etc.) The current delivered to a load with nonzero voltage (drop) across its terminals (a linear or nonlinear resistor with a finite resistance, a charged capacitor, an uncharged inductor, a voltage source, etc.) will always be different. 
It is given by the ratio of the voltage drop across the resistor (the difference between the exciting voltage and the voltage across the load) to its resistance. For a nearly ideal current source, the value of the resistor should be very large but this implies that, for a specified current, the voltage source must be very large (in the limit as the resistance and the voltage go to infinity, the current source will become ideal and the current will not depend at all on the voltage across the load). Thus, efficiency is low (due to power loss in the resistor) and it is usually impractical to construct a 'good' current source this way. Nonetheless, it is often the case that such a circuit will provide adequate performance when the specified current and load resistance are small. For example, a 5 V voltage source in series with a 4.7 kΩ resistor will provide an approximately constant current of to a load resistance in the range of 50 to 450 Ω. A Van de Graaff generator is an example of such a high voltage current source. It behaves as an almost constant current source because of its very high output voltage coupled with its very high output resistance and so it supplies the same few microamperes at any output voltage up to hundreds of thousands of volts (or even tens of megavolts) for large laboratory versions. Active current sources without negative feedback In these circuits the output current is not monitored and controlled by means of negative feedback. Current-stable nonlinear implementation They are implemented by active electronic components (transistors) having current-stable nonlinear output characteristic when driven by steady input quantity (current or voltage). These circuits behave as dynamic resistors changing their present resistance to compensate current variations. For example, if the load increases its resistance, the transistor decreases its present output resistance (and vice versa) to keep up a constant total resistance in the circuit. 
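The behaviour of the passive voltage-source-plus-resistor current source just described can be sketched numerically. The 5 V source and 4.7 kΩ resistor are the values from the example above; the load values are illustrative:

```python
# Passive current source: an ideal voltage source V in series with a resistor R.
# The load current is I = V / (R + R_load); for R >> R_load it is nearly constant.

def load_current(v_source, r_series, r_load):
    """Current delivered to the load by a voltage source with a series resistor."""
    return v_source / (r_series + r_load)

V, R = 5.0, 4.7e3  # the 5 V / 4.7 kOhm example from the text
for r_load in (50, 250, 450):
    i = load_current(V, R, r_load)
    print(f"R_load = {r_load:3d} ohm -> I = {i * 1e3:.3f} mA")
```

Over the quoted 50 Ω to 450 Ω load range the current varies by only about 8 percent around 1 mA, which is why the circuit counts as an approximately constant current source despite its simplicity.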
Active current sources have many important applications in electronic circuits. They are often used in place of ohmic resistors in analog integrated circuits (e.g., a differential amplifier) to generate a current that depends slightly on the voltage across the load. The common emitter configuration driven by a constant input current or voltage and the common source (common cathode) configuration driven by a constant voltage naturally behave as current sources (or sinks) because the output impedance of these devices is naturally high. The output part of the simple current mirror is an example of such a current source widely used in integrated circuits. The common base, common gate and common grid configurations can serve as constant current sources as well. A JFET can be made to act as a current source by tying its gate to its source. The current then flowing is the IDSS (zero-gate-voltage drain current) of the FET. These can be purchased with this connection already made, in which case the devices are called current regulator diodes, constant current diodes, or current-limiting diodes (CLDs). Alternatively, an enhancement-mode N-channel MOSFET (metal–oxide–semiconductor field-effect transistor) could be used instead of a JFET in the circuits listed below for similar functionality. Following voltage implementation An example: the bootstrapped current source. Voltage compensation implementation The simple resistor passive current source is ideal only when the voltage across it is zero; so voltage compensation by applying parallel negative feedback might be considered to improve the source. Operational amplifiers with feedback effectively work to minimise the voltage across their inputs. This results in making the inverting input a virtual ground, with the current running through the feedback, or load, and the passive current source. The input voltage source, the resistor, and the op-amp constitute an "ideal" current source whose value is the input voltage divided by the resistance.
The transimpedance amplifier and an op-amp inverting amplifier are typical implementations of this idea. The floating load is a serious disadvantage of this circuit solution. Current compensation implementation Typical examples are the Howland current source and its derivative, the Deboo integrator. In the last example (Fig. 1), the Howland current source consists of an input voltage source, , a positive resistor, R, a load (the capacitor, C, acting as impedance ) and a negative impedance converter INIC ( and the op-amp). The input voltage source and the resistor R constitute an imperfect current source passing current, through the load (Fig. 3 in the source). The INIC acts as a second current source passing "helping" current, , through the load. As a result, the total current flowing through the load is constant and the circuit impedance seen by the input source is increased. However, the Howland current source is not widely used because it requires the four resistors to be perfectly matched, and its impedance drops at high frequencies. The grounded load is an advantage of this circuit solution. Current sources with negative feedback They are implemented as a voltage follower with series negative feedback driven by a constant input voltage source (i.e., a negative feedback voltage stabilizer). The voltage follower is loaded by a constant (current-sensing) resistor acting as a simple current-to-voltage converter connected in the feedback loop. The external load of this current source is connected somewhere in the path of the current supplying the current-sensing resistor but outside the feedback loop. The voltage follower adjusts its output current flowing through the load so as to make the voltage drop across the current-sensing resistor R equal to the constant input voltage . Thus the voltage stabilizer keeps up a constant voltage drop across a constant resistor; so, a constant current flows through the resistor and respectively through the load.
If the input voltage varies, this arrangement will act as a voltage-to-current converter (voltage-controlled current source, VCCS); it can be thought of as a reversed (by means of negative feedback) current-to-voltage converter. The resistance R determines the transfer ratio (transconductance). Current sources implemented as circuits with series negative feedback have the disadvantage that the voltage drop across the current-sensing resistor decreases the maximum voltage available across the load (the compliance voltage). Simple transistor current sources Constant current diode The simplest constant-current source or sink is formed from one component: a JFET with its gate attached to its source. Once the drain-source voltage reaches a certain minimum value, the JFET enters saturation, where current is approximately constant. This configuration is known as a constant-current diode, as it behaves much like a dual to the constant-voltage diode (Zener diode) used in simple voltage sources. Due to the large variability in the saturation current of JFETs, it is common to also include a source resistor (shown in the adjacent image), which allows the current to be tuned down to a desired value. Zener diode current source In this bipolar junction transistor (BJT) implementation (Figure 4) of the general idea above, a Zener voltage stabilizer (R1 and DZ1) drives an emitter follower (Q1) loaded by a constant emitter resistor (R2) sensing the load current. The external (floating) load of this current source is connected to the collector so that almost the same current flows through it and the emitter resistor (they can be thought of as connected in series). The transistor, Q1, adjusts the output (collector) current so as to keep the voltage drop across the constant emitter resistor, R2, almost equal to the relatively constant voltage drop across the Zener diode, DZ1. As a result, the output current is almost constant even if the load resistance and/or voltage vary.
The operation of the circuit is considered in detail below. A Zener diode, when reverse biased (as shown in the circuit) has a constant voltage drop across it irrespective of the current flowing through it. Thus, as long as the Zener current () is above a certain level (called holding current), the voltage across the Zener diode () will be constant. Resistor, R1, supplies the Zener current and the base current () of NPN transistor (Q1). The constant Zener voltage is applied across the base of Q1 and emitter resistor, R2. Voltage across () is given by , where is the base-emitter drop of Q1. The emitter current of Q1, which is also the current through R2, is given by Since is constant and is also (approximately) constant for a given temperature, it follows that is constant and hence is also constant. Due to transistor action, the emitter current, , is very nearly equal to the collector current, , of the transistor (which in turn is the current through the load). Thus, the load current is constant (neglecting the output resistance of the transistor due to the Early effect) and the circuit operates as a constant current source. As long as the temperature remains constant (or doesn't vary much), the load current will be independent of the supply voltage, R1 and the transistor's gain. R2 allows the load current to be set at any desirable value and is calculated by where is typically 0.65 V for a silicon device. ( is also the emitter current and is assumed to be the same as the collector or required load current, provided is sufficiently large). Resistance is calculated as where = 1.2 to 2 (so that is low enough to ensure adequate ), and is the lowest acceptable current gain for the particular transistor type being used. LED current source The Zener diode can be replaced by any other diode; e.g., a light-emitting diode LED1 as shown in Figure 5.
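The design relations for the Zener-based source (Figure 4) can be sketched numerically. All component values below are illustrative assumptions consistent with the described topology (a 5.6 V Zener, V_BE = 0.65 V, a 12 V supply, β_min = 100, K = 1.5), not values from the text:

```python
# Design sketch for the Zener-based BJT current source of Figure 4.
# R2 sets the output current; R1 biases the Zener while supplying base current.

def emitter_resistor(v_z, i_load, v_be=0.65):
    """R2 = (V_Z - V_BE) / I_load  -- sets the output (load) current."""
    return (v_z - v_be) / i_load

def zener_bias_resistor(v_supply, v_z, i_load, beta_min, k=1.5):
    """R1 = (V_supply - V_Z) / (K * I_B), with I_B = I_load / beta_min,
    so the Zener current stays well above the base current (K = 1.2 to 2)."""
    i_b = i_load / beta_min
    return (v_supply - v_z) / (k * i_b)

i_load = 10e-3                               # desired 10 mA load current
r2 = emitter_resistor(5.6, i_load)           # ~495 ohm
r1 = zener_bias_resistor(12.0, 5.6, i_load, beta_min=100)
print(f"R2 = {r2:.0f} ohm, R1 = {r1:.0f} ohm")
```

The nearest standard resistor values would be used in practice, and R1 should also be checked against the Zener's minimum holding current.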
The LED voltage drop () is now used to derive the constant voltage and also has the additional advantage of tracking (compensating) changes due to temperature. is calculated as and as , where ID is the LED current. Transistor current source with diode compensation Temperature changes will change the output current delivered by the circuit of Figure 4 because is sensitive to temperature. Temperature dependence can be compensated using the circuit of Figure 6, which includes a standard diode, D, (of the same semiconductor material as the transistor) in series with the Zener diode, as shown in the image on the left. The diode drop () tracks the changes due to temperature and thus significantly counteracts the temperature dependence of the CCS. Resistance is now calculated as Since , (In practice, is never exactly equal to and hence it only suppresses the change in rather than nulling it out.) is calculated as (the compensating diode's forward voltage drop, , appears in the equation and is typically 0.65 V for silicon devices.) Note that this only works well if DZ1 is a reference diode or another stable voltage source. With "normal" Zener diodes, especially those with lower Zener voltages (< 5 V), the added diode might even worsen the overall temperature dependency. Current mirror with emitter degeneration Series negative feedback is also used in the two-transistor current mirror with emitter degeneration. Negative feedback is a basic feature in some current mirrors using multiple transistors, such as the Widlar current source and the Wilson current source. Constant current source with thermal compensation One limitation with the circuits in Figures 5 and 6 is that the thermal compensation is imperfect. In bipolar transistors, as the junction temperature increases the drop (voltage drop from base to emitter) decreases.
In the two previous circuits, a decrease in will cause an increase in voltage across the emitter resistor, which in turn will cause an increase in collector current drawn through the load. The end result is that the amount of 'constant' current supplied is at least somewhat dependent on temperature. This effect is mitigated to a large extent, but not completely, by corresponding voltage drops for the diode, D1, in Figure 6, and the LED, LED1, in Figure 5. If the power dissipation in the active device of the CCS is not small and/or insufficient emitter degeneration is used, this can become a non-trivial issue. Imagine in Figure 5, at power up, that the LED has 1 V across it driving the base of the transistor. At room temperature there is about 0.6 V drop across the junction and hence 0.4 V across the emitter resistor, giving an approximate collector (load) current of amps. Now imagine that the power dissipation in the transistor causes it to heat up. This causes the drop (which was 0.6 V at room temperature) to drop to, say, 0.2 V. Now the voltage across the emitter resistor is 0.8 V, twice what it was before the warmup. This means that the collector (load) current is now twice the design value! This is an extreme example of course, but serves to illustrate the issue. The circuit to the left overcomes the thermal problem (see also, current limiting). To see how the circuit works, assume the voltage has just been applied at V+. Current runs through R1 to the base of Q1, turning it on and causing current to begin to flow through the load into the collector of Q1. This same load current then flows out of Q1's emitter and consequently through to ground. When this current through to ground is sufficient to cause a voltage drop that is equal to the drop of Q2, Q2 begins to turn on. As Q2 turns on it pulls more current through its collector resistor, R1, which diverts some of the injected current in the base of Q1, causing Q1 to conduct less current through the load. 
This creates a negative feedback loop within the circuit, which keeps the voltage at Q1's emitter almost exactly equal to the drop of Q2. Since Q2 is dissipating very little power compared to Q1 (since all the load current goes through Q1, not Q2), Q2 will not heat up any significant amount and the reference (current setting) voltage across will remain steady at ≈0.6 V, or one diode drop above ground, regardless of the thermal changes in the drop of Q1. The circuit is still sensitive to changes in the ambient temperature in which the device operates, as the BE voltage drop in Q2 varies slightly with temperature. Op-amp current sources The simple transistor current source from Figure 4 can be improved by inserting the base-emitter junction of the transistor in the feedback loop of an op-amp (Figure 7). Now the op-amp increases its output voltage to compensate for the drop. The circuit is actually a buffered non-inverting amplifier driven by a constant input voltage. It keeps up this constant voltage across the constant sense resistor. As a result, the current flowing through the load is constant as well; it is exactly the Zener voltage divided by the sense resistor. The load can be connected either in the emitter (Figure 7) or in the collector (Figure 4), but in both cases it is floating, as in all the circuits above. The transistor is not needed if the required current doesn't exceed the sourcing ability of the op-amp. The article on current mirror discusses another example of these so-called gain-boosted current mirrors. Voltage regulator current sources The general negative feedback arrangement can be implemented by an IC voltage regulator (LM317 voltage regulator on Figure 8). As with the bare emitter follower and the precise op-amp follower above, it keeps up a constant voltage drop (1.25 V) across a constant resistor (1.25 Ω); so, a constant current (1 A) flows through the resistor and the load.
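The set point of the two-transistor thermally compensated circuit described above is I ≈ V_BE(Q2)/R_sense, and its residual sensitivity to ambient temperature can be sketched as follows. The 0.6 Ω sense resistor, the 0.6 V base-emitter drop at 25 °C, and the −2 mV/°C temperature coefficient are illustrative assumptions, not values from the text:

```python
# Two-transistor current source/limiter: Q2 turns on when the drop across the
# sense resistor reaches its V_BE, so the regulated current is I = V_BE / R_sense.

def limited_current(r_sense, temp_c, v_be_25=0.6, tempco=-2e-3):
    """Regulated current at ambient temperature temp_c, assuming a typical
    BJT base-emitter tempco of about -2 mV/degC."""
    v_be = v_be_25 + tempco * (temp_c - 25.0)
    return v_be / r_sense

for t in (0, 25, 50):
    print(f"{t:2d} C -> I = {limited_current(0.6, t) * 1e3:.0f} mA")
```

The output drifts a few percent per ten degrees of ambient change, which illustrates the remaining (much smaller) temperature sensitivity the text mentions for Q2.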
The LED is on when the voltage across the load exceeds 1.8 V (the indicator circuit introduces some error). The grounded load is an important advantage of this solution. Curpistor tubes Nitrogen-filled glass tubes with two electrodes and a calibrated Becquerel (decays per second) amount of 226Ra offer a constant number of charge carriers per second for conduction, which determines the maximum current the tube can pass over a voltage range from 25 to 500 V. Current and voltage source comparison Most sources of electrical energy (mains electricity, a battery, etc.) are best modeled as voltage sources; however, some (notably solar cells) are better modeled using current sources. Sometimes it is easier to view a current source as a voltage source and vice versa (see the conversion in Figure 9) using Norton's and Thévenin's theorems. Voltage sources provide an almost-constant output voltage as long as the current drawn from the source is within the source's capabilities. An ideal voltage source loaded by an open circuit (i.e., an infinite impedance) will provide no current (and hence no power). But when the load resistance approaches zero (a short circuit), the current (and thus the power) approaches infinity. Such a theoretical device has a zero-ohm output impedance in series with the source. Real-world voltage sources instead have a non-zero output impedance, which is preferably very low (often much less than 1 ohm). Conversely, a current source provides a constant current, as long as the impedance of the load is sufficiently lower than the current source's parallel impedance (which is preferably very high and ideally infinite). In the case of transistor current sources, impedances of a few megohms (at low frequencies) are typical. Because power is current squared times resistance, as a load resistance connected to a current source approaches zero (a short circuit), the power delivered approaches zero while the current stays constant. Ideal current sources don't exist.
Hypothetically connecting one to an ideal open circuit would create the paradox of running a constant, non-zero current (from the current source) through an element with a defined zero current (the open circuit). As the load resistance of an ideal current source approaches infinity (an open circuit), the voltage across the load would approach infinity (because voltage equals current times resistance), and hence the power drawn would also approach infinity. The current of a real current source connected to an open circuit would instead flow through the current source's internal parallel impedance (and be wasted as heat). Similarly, ideal voltage sources don't exist. Hypothetically connecting one to an ideal short circuit would result in the similar paradox of a finite non-zero voltage across an element with a defined zero voltage (the short circuit). Just as voltage sources with different voltages should not be connected in parallel, current sources with different currents should not be connected in series. Note that some circuits use elements that are similar but not identical to voltage or current sources and may work when connected in these manners that are disallowed for actual current or voltage sources. Also, just as voltage sources may be connected in series to add their voltages, current sources may be connected in parallel to add their currents. Charging a capacitor Because the charge on a capacitor is equal to the integral of current with respect to time, an ideal constant current source charges a capacitor linearly with time, regardless of any series resistance. The Wilkinson analog-to-digital converter, for instance, uses this linear behavior to measure an unknown voltage by measuring the amount of time it takes a current source to charge a capacitor to that voltage.
A voltage source instead charges a capacitor through a resistor non-linearly with time, because the charging current from the voltage source decreases exponentially with time.
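The contrast between the two charging behaviours can be sketched as follows; the component values (1 mA, 10 µF, 10 V, 10 kΩ) are arbitrary illustrations:

```python
import math

# Charging a capacitor: a constant current source gives V(t) = I*t/C (linear),
# while a voltage source through a resistor gives V(t) = Vs*(1 - exp(-t/(R*C))).

def v_current_source(i, c, t):
    """Capacitor voltage after time t under a constant current i."""
    return i * t / c

def v_voltage_source(vs, r, c, t):
    """Capacitor voltage after time t when charged from vs through r."""
    return vs * (1.0 - math.exp(-t / (r * c)))

C = 10e-6  # 10 uF
for t in (0.05, 0.1):
    print(f"t = {t:4.2f} s  current-source: {v_current_source(1e-3, C, t):5.2f} V"
          f"  voltage-source: {v_voltage_source(10.0, 10e3, C, t):5.2f} V")
```

With these values the RC time constant is 0.1 s, so the voltage-source ramp visibly flattens while the current-source ramp stays a straight line, which is exactly the property the Wilkinson converter relies on.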
Technology
Components
null
750772
https://en.wikipedia.org/wiki/Cooling%20tower
Cooling tower
A cooling tower is a device that rejects waste heat to the atmosphere through the cooling of a coolant stream, usually a water stream, to a lower temperature. Cooling towers may either use the evaporation of water to remove heat and cool the working fluid to near the wet-bulb air temperature or, in the case of dry cooling towers, rely solely on air to cool the working fluid to near the dry-bulb air temperature using radiators. Common applications include cooling the circulating water used in oil refineries, petrochemical and other chemical plants, thermal power stations, nuclear power stations and HVAC systems for cooling buildings. The classification is based on the type of air induction into the tower: the main types of cooling towers are natural draft and induced draft cooling towers. Cooling towers vary in size from small roof-top units to very large hyperboloid structures that can be up to tall and in diameter, or rectangular structures that can be over tall and long. Hyperboloid cooling towers are often associated with nuclear power plants, although they are also used in many coal-fired plants and to some extent in some large chemical and other industrial plants. It is the steam turbine that necessitates the cooling tower. Although these large towers are very prominent, the vast majority of cooling towers are much smaller, including many units installed on or near buildings to discharge heat from air conditioning. The general public and environmental activists often assume that cooling towers emit smoke or harmful fumes, when in reality the visible plumes consist of water vapor and in themselves contribute little to the carbon footprint. History Cooling towers originated in the 19th century through the development of condensers for use with the steam engine. Condensers use relatively cool water, via various means, to condense the steam coming out of the cylinders or turbines.
This reduces the back pressure, which in turn reduces the steam consumption, and thus the fuel consumption, while at the same time increasing power and recycling boiler water. However, the condensers require an ample supply of cooling water, without which they are impractical. While water usage is not an issue with marine engines, it forms a significant limitation for many land-based systems. By the turn of the 20th century, several evaporative methods of recycling cooling water were in use in areas lacking an established water supply, as well as in urban locations where municipal water mains may not be of sufficient supply, reliable in times of high demand, or otherwise adequate to meet cooling needs. In areas with available land, the systems took the form of cooling ponds; in areas with limited land, such as in cities, they took the form of cooling towers. These early towers were positioned either on the rooftops of buildings or as free-standing structures, supplied with air by fans or relying on natural airflow. An American engineering textbook from 1911 described one design as "a circular or rectangular shell of light plate—in effect, a chimney stack much shortened vertically (20 to 40 ft. high) and very much enlarged laterally. At the top is a set of distributing troughs, to which the water from the condenser must be pumped; from these it trickles down over "mats" made of wooden slats or woven wire screens, which fill the space within the tower". A hyperboloid cooling tower was patented by the Dutch engineers Frederik van Iterson and Gerard Kuypers in the Netherlands on August 16, 1916. The first hyperboloid reinforced concrete cooling towers were built by the Dutch State Mine (DSM) Emma in 1918 in Heerlen. The first ones in the United Kingdom were built in 1924 at Lister Drive power station in Liverpool, England. At both locations they were built to cool water used at a coal-fired electrical power station.
According to a Gas Technology Institute (GTI) report, the indirect–dew-point evaporative-cooling Maisotsenko Cycle (M-Cycle) is a theoretically sound method of reducing a working fluid to the ambient fluid’s dew point, which is lower than the ambient fluid’s wet-bulb temperature. The M-cycle utilizes the psychrometric energy (or the potential energy) available from the latent heat of water evaporating into the air. While its current manifestation is as the M-Cycle HMX for air conditioning, through engineering design this cycle could be applied as a heat- and moisture-recovery device for combustion devices, cooling towers, condensers, and other processes involving humid gas streams. The consumption of cooling water by inland processing and power plants is estimated to reduce power availability for the majority of thermal power plants by 2040–2069. In 2021, researchers presented a method for steam recapture. The steam is charged using an ion beam, and then captured in a wire mesh of opposite charge. The water's purity exceeded EPA potability standards. Classification by use Heating, ventilation and air conditioning (HVAC) An HVAC (heating, ventilating, and air conditioning) cooling tower is used to dispose of ("reject") unwanted heat from a chiller. Liquid-cooled chillers are normally more energy efficient than air-cooled chillers due to heat rejection to tower water at or near wet-bulb temperatures. Air-cooled chillers must reject heat at the higher dry-bulb temperature, and thus have a lower average reverse–Carnot-cycle effectiveness. In hot climates, large office buildings, hospitals, and schools typically use cooling towers in their air conditioning systems. Generally, industrial cooling towers are much larger than HVAC towers. HVAC use of a cooling tower pairs the cooling tower with a liquid-cooled chiller or liquid-cooled condenser. A ton of air-conditioning is defined as the removal of . 
The equivalent ton on the cooling tower side actually rejects about due to the additional waste-heat–equivalent of the energy needed to drive the chiller's compressor. This equivalent ton is defined as the heat rejection in cooling or of water by , which amounts to , assuming a chiller coefficient of performance (COP) of 4.0. This COP is equivalent to an energy efficiency ratio (EER) of 14. Cooling towers are also used in HVAC systems that have multiple water source heat pumps that share a common piping water loop. In this type of system, the water circulating inside the water loop removes heat from the condenser of the heat pumps whenever the heat pumps are working in the cooling mode, then the externally mounted cooling tower is used to remove heat from the water loop and reject it to the atmosphere. By contrast, when the heat pumps are working in heating mode, the condensers draw heat out of the loop water and reject it into the space to be heated. When the water loop is being used primarily to supply heat to the building, the cooling tower is normally shut down (and may be drained or winterized to prevent freeze damage), and heat is supplied by other means, usually from separate boilers. Industrial cooling towers Industrial cooling towers can be used to remove heat from various sources such as machinery or heated process material. The primary use of large, industrial cooling towers is to remove the heat absorbed in the circulating cooling water systems used in power plants, petroleum refineries, petrochemical plants, natural gas processing plants, food processing plants, semi-conductor plants, and for other industrial facilities such as in condensers of distillation columns, for cooling liquid in crystallization, etc. 
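The "equivalent ton" arithmetic described earlier in this section (heat rejected by the tower equals the building cooling load plus the compressor work, with a chiller COP of 4.0) can be sketched as follows; the 3.517 kW-per-ton conversion is a standard figure assumed here, not taken from the text:

```python
# Heat rejected by an HVAC cooling tower: Q_rej = Q_cool * (1 + 1/COP),
# since the compressor work W = Q_cool / COP is also dumped into the tower water.

TON_KW = 3.517  # kW per ton of refrigeration (standard conversion, assumed)

def heat_rejected_kw(cooling_tons, cop):
    """Tower heat rejection in kW for a given cooling load and chiller COP."""
    q_cool = cooling_tons * TON_KW
    return q_cool * (1.0 + 1.0 / cop)

print(f"1 ton of cooling at COP 4.0 rejects {heat_rejected_kw(1, 4.0):.3f} kW")
```

At COP 4.0 each ton of cooling rejects 1.25 tons of heat, matching the text's statement that the equivalent ton on the tower side is larger than the air-conditioning ton.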
The circulation rate of cooling water in a typical 700 MWth coal-fired power plant with a cooling tower amounts to about 71,600 cubic metres an hour (315,000 US gallons per minute) and the circulating water requires a supply water make-up rate of perhaps 5 percent (i.e., 3,600 cubic metres an hour, equivalent to one cubic metre every second). If that same plant had no cooling tower and used once-through cooling water, it would require about 100,000 cubic metres an hour. A large cooling water intake typically kills millions of fish and larvae annually, as the organisms are impinged on the intake screens. A large amount of water would have to be continuously returned to the ocean, lake or river from which it was obtained and continuously re-supplied to the plant. Furthermore, discharging large amounts of hot water may raise the temperature of the receiving river or lake to an unacceptable level for the local ecosystem. Elevated water temperatures can kill fish and other aquatic organisms (see thermal pollution), or can also cause an increase in undesirable organisms such as invasive species of zebra mussels or algae. A cooling tower serves to dissipate the heat into the atmosphere instead, so that wind and air diffusion spreads the heat over a much larger area than hot water can distribute heat in a body of water. Evaporative cooling water cannot be used for subsequent purposes (other than rain somewhere), whereas surface-only cooling water can be re-used. Some coal-fired and nuclear power plants located in coastal areas do make use of once-through ocean water. But even there, the offshore discharge water outlet requires very careful design to avoid environmental problems. Petroleum refineries may also have very large cooling tower systems. A typical large refinery processing 40,000 metric tonnes of crude oil per day ( per day) circulates about 80,000 cubic metres of water per hour through its cooling tower system.
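A quick check of the water balance quoted above for the 700 MWth plant (the 5 percent make-up fraction and 71,600 m³/h circulation rate are the figures from the text):

```python
# Make-up water for an evaporative cooling tower: the circulating flow loses
# water to evaporation, drift, and blowdown, replaced by the make-up supply.

def makeup_flow(circulation_m3_h, makeup_fraction=0.05):
    """Make-up water flow in m^3/h for a given circulation rate."""
    return circulation_m3_h * makeup_fraction

m = makeup_flow(71_600)
print(f"make-up: {m:.0f} m^3/h (~{m / 3600:.1f} m^3/s)")
```

The result, about 3,580 m³/h or roughly one cubic metre per second, agrees with the rounded figures given in the text.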
The world's tallest cooling tower is the 210-metre-tall (690 ft) cooling tower of the Pingshan II Power Station in Huaibei, Anhui Province, China.

Classification by build

Package type

These types of cooling towers are factory preassembled, and can be simply transported on trucks, as they are compact machines. The capacity of package type towers is limited and, for that reason, they are usually preferred by facilities with low heat rejection requirements such as food processing plants, textile plants, some chemical processing plants, or buildings like hospitals, hotels, malls, automotive factories, etc. Due to their frequent use in or near residential areas, sound level control is a relatively more important issue for package type cooling towers.

Field-erected type

Facilities such as power plants, steel processing plants, petroleum refineries, or petrochemical plants usually install field-erected type cooling towers due to their greater capacity for heat rejection. Field-erected towers are usually much larger in size compared to the package type cooling towers. A typical field-erected cooling tower has a pultruded fiber-reinforced plastic (FRP) structure, FRP cladding, a mechanical unit for air draft, and a drift eliminator.

Heat transfer methods

With respect to the heat transfer mechanism employed, the main types are:

Wet cooling towers (open circuit or evaporative cooling towers) operate on the principle of evaporative cooling. The working coolant (usually water) is the evaporated fluid, and is exposed to the elements.

Closed circuit cooling towers (also called fluid coolers) pass the working coolant through a large heat exchanger, usually a radiator, upon which clean water is sprayed and a fan-induced draft applied. The resulting heat transfer performance is close to that of a wet cooling tower, with the advantage of protecting the working fluid from environmental exposure and contamination.
Adiabatic cooling towers spray water into the incoming air or onto a cardboard pad to cool the air before it passes over an air-cooled heat exchanger. Adiabatic cooling towers use less water than other cooling towers, but do not cool the fluid as close to the wet-bulb temperature. Most adiabatic cooling towers are also hybrid cooling towers.

Dry cooling towers (or dry coolers) are closed circuit cooling towers which operate by heat transfer through a heat exchanger that separates the working coolant from ambient air, such as in a radiator, utilizing convective heat transfer. They do not use evaporation, and are essentially air-cooled heat exchangers.

Hybrid cooling towers (or wet-dry cooling towers) are closed circuit cooling towers that can switch between wet or adiabatic and dry operation. This helps balance water and energy savings across a variety of weather conditions. Some hybrid cooling towers can switch between dry, wet, and adiabatic modes. Thermal efficiencies up to 92% have been observed in hybrid cooling towers.

In a wet cooling tower (or open circuit cooling tower), the warm water can be cooled to a temperature lower than the ambient air dry-bulb temperature, if the air is relatively dry (see dew point and psychrometrics). As ambient air is drawn past a flow of water, a small portion of the water evaporates, and the energy required to evaporate that portion of the water is taken from the remaining mass of water, thus reducing its temperature. Approximately 2,260 kJ of heat energy is absorbed for each kilogram of evaporated water (about 970 BTU per pound). Evaporation results in saturated air conditions, lowering the temperature of the water processed by the tower to a value close to the wet-bulb temperature, which is lower than the ambient dry-bulb temperature, the difference determined by the initial humidity of the ambient air. To achieve better performance (more cooling), a medium called fill is used to increase the surface area and the time of contact between the air and water flows.
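The energy bookkeeping above implies a simple estimate of how much cooling a given evaporated fraction provides. A sketch, assuming round values for the latent heat of vaporization (about 2,260 kJ/kg) and the specific heat of water (about 4.186 kJ/(kg·K)):

```python
H_V = 2260.0  # kJ/kg, latent heat of vaporization of water (approximate)
C_P = 4.186   # kJ/(kg·K), specific heat of liquid water

def temperature_drop(evaporated_fraction: float) -> float:
    """Cooling of the remaining water when this fraction of the
    circulating flow evaporates (the energy comes from the water itself)."""
    return evaporated_fraction * H_V / C_P

# Evaporating only 1% of the circulating water cools the rest by ~5.4 °C,
# which is why evaporative cooling is so water-efficient per unit of heat.
print(f"{temperature_drop(0.01):.1f} degC")
```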
Splash fill consists of material placed to interrupt the water flow, causing splashing. Film fill is composed of thin sheets of material (usually PVC) upon which the water flows. Both methods create increased surface area and time of contact between the fluid (water) and the gas (air), to improve heat transfer.

Air flow generation methods

With respect to drawing air through the tower, there are three types of cooling towers:

Natural draft: Utilizes buoyancy via a tall chimney. Warm, moist air naturally rises due to the density differential compared to the dry, cooler outside air. Warm moist air is less dense than drier air at the same pressure. This moist air buoyancy produces an upwards current of air through the tower.

Mechanical draft: Uses power-driven fan motors to force or draw air through the tower.

Induced draft: A mechanical draft tower with a fan at the discharge (at the top) which pulls air up through the tower. The fan induces hot moist air out the discharge. This produces low entering and high exiting air velocities, reducing the possibility of recirculation, in which discharged air flows back into the air intake. This fan/fin arrangement is also known as draw-through.

Forced draft: A mechanical draft tower with a blower type fan at the intake. The fan forces air into the tower, creating high entering and low exiting air velocities. The low exiting velocity is much more susceptible to recirculation. With the fan on the air intake, the fan is more susceptible to complications due to freezing conditions. Another disadvantage is that a forced draft design typically requires more motor horsepower than an equivalent induced draft design. The benefit of the forced draft design is its ability to work with high static pressure. Such setups can be installed in more-confined spaces and even in some indoor situations. This fan/fin geometry is also known as blow-through.
Fan assisted natural draft: A hybrid type that appears like a natural draft setup, though airflow is assisted by a fan.

Hyperboloid cooling tower

On 16 August 1916, Frederik van Iterson took out the UK patent (108,863) for Improved Construction of Cooling Towers of Reinforced Concrete. The patent was filed on 9 August 1917, and published on 11 April 1918. In 1918, DSM built the first hyperboloid natural-draft cooling tower at the Staatsmijn Emma, to his design. Hyperboloid (sometimes incorrectly known as hyperbolic) cooling towers have become the design standard for all natural-draft cooling towers because of their structural strength and minimum usage of material. The hyperboloid shape also aids in accelerating the upward convective air flow, improving cooling efficiency. These designs are popularly associated with nuclear power plants. However, this association is misleading, as the same kind of cooling towers are often used at large coal-fired power plants and some geothermal plants as well. It is the steam turbine that necessitates the cooling tower. Conversely, not all nuclear power plants have cooling towers, and some instead cool their working fluid with lake, river or ocean water.

Categorization by air-to-water flow

Crossflow

Crossflow designs typically have lower initial and long-term cost, mostly due to pump requirements. Crossflow is a design in which the airflow is directed perpendicular to the water flow (see diagram at left). Airflow enters one or more vertical faces of the cooling tower to meet the fill material. Water flows (perpendicular to the air) through the fill by gravity. The air continues through the fill, and thus past the water flow, into an open plenum volume. Lastly, a fan forces the air out into the atmosphere. A distribution or hot water basin, consisting of a deep pan with holes or nozzles in its bottom, is located near the top of a crossflow tower. Gravity distributes the water through the nozzles uniformly across the fill material.
Crossflow vs. counterflow

Advantages of the crossflow design:

Gravity water distribution allows smaller pumps and maintenance while in use.
Non-pressurized spray simplifies variable flow.

Disadvantages of the crossflow design:

More prone to freezing than counterflow designs.
Variable flow is useless in some conditions.
More prone to dirt buildup in the fill than counterflow designs, especially in dusty or sandy areas.

Counterflow

In a counterflow design, the air flow is directly opposite to the water flow (see diagram at left). Air flow first enters an open area beneath the fill media, and is then drawn up vertically. The water is sprayed through pressurized nozzles near the top of the tower, and then flows downward through the fill, opposite to the air flow.

Advantages of the counterflow design:

Spray water distribution makes the tower more freeze-resistant.
Breakup of water in spray makes heat transfer more efficient.

Disadvantages of the counterflow design:

Typically higher initial and long-term cost, primarily due to pump requirements.
Difficult to use variable water flow, as spray characteristics may be negatively affected.
Typically noisier, due to the greater water fall height from the bottom of the fill into the cold water basin.

Common aspects

Common aspects of both designs: The interactions of the air and water flow allow a partial equalization of temperature, and evaporation of water. The air, now saturated with water vapor, is discharged from the top of the cooling tower. A "collection basin" or "cold water basin" is used to collect and contain the cooled water after its interaction with the air flow. Both crossflow and counterflow designs can be used in natural draft and in mechanical draft cooling towers.
Wet cooling tower material balance

Quantitatively, the material balance around a wet, evaporative cooling tower system is governed by the operational variables of make-up volumetric flow rate, evaporation and windage losses, draw-off rate, and the concentration cycles. In the adjacent diagram, water pumped from the tower basin is the cooling water routed through the process coolers and condensers in an industrial facility. The cool water absorbs heat from the hot process streams which need to be cooled or condensed, and the absorbed heat warms the circulating water (C). The warm water returns to the top of the cooling tower and trickles downward over the fill material inside the tower. As it trickles down, it contacts ambient air rising up through the tower either by natural draft or by forced draft using large fans in the tower. That contact causes a small amount of the water to be lost as windage or drift (W) and some of the water (E) to evaporate. The heat required to evaporate the water is derived from the water itself, which cools the water back to the original basin water temperature and the water is then ready to recirculate. The evaporated water leaves its dissolved salts behind in the bulk of the water which has not been evaporated, thus raising the salt concentration in the circulating cooling water. To prevent the salt concentration of the water from becoming too high, a portion of the water is drawn off or blown down (D) for disposal. Fresh water make-up (M) is supplied to the tower basin to compensate for the loss of evaporated water, the windage loss water and the draw-off water.
Using these flow rates and concentration dimensional units:

M = make-up water flow rate, m³/h
C = circulating water flow rate, m³/h
D = draw-off (blow-down) water flow rate, m³/h
E = evaporated water flow rate, m³/h
W = windage (drift) loss of water, m³/h
X_M = concentration of chlorides in the make-up water, ppmw
X_C = concentration of chlorides in the circulating water, ppmw

A water balance around the entire system is then:

M = E + D + W

Since the evaporated water (E) has no salts, a chloride balance around the system is:

M · X_M = (D + W) · X_C

and, therefore, the cycles of concentration are:

Cycles = X_C / X_M = M / (D + W)

From a simplified heat balance around the cooling tower, the evaporation rate is:

E = C · ΔT · c_p / H_V

where ΔT is the water temperature difference from tower top to tower bottom, c_p is the specific heat of water, and H_V is the latent heat of vaporization of water (about 2,260 kJ/kg).

Windage (or drift) losses (W) are the amount of total tower water flow that is entrained in the flow of air to the atmosphere. For large-scale industrial cooling towers, in the absence of manufacturer's data, it may be assumed to be:

W = 0.3 to 1.0 percent of C for a natural draft cooling tower without windage drift eliminators
W = 0.1 to 0.3 percent of C for an induced draft cooling tower without windage drift eliminators
W = about 0.005 percent of C (or less) if the cooling tower has windage drift eliminators
W = about 0.0005 percent of C (or less) if the cooling tower has windage drift eliminators and uses sea water as make-up water

Cycles of concentration

Cycles of concentration represent the accumulation of dissolved minerals in the recirculating cooling water. Discharge of draw-off (or blowdown) is used principally to control the buildup of these minerals. The chemistry of the make-up water, including the amount of dissolved minerals, can vary widely. Make-up waters low in dissolved minerals, such as those from surface water supplies (lakes, rivers etc.), tend to be aggressive to metals (corrosive). Make-up waters from ground water supplies (such as wells) are usually higher in minerals, and tend to be scaling (deposit minerals). Increasing the amount of minerals present in the water by cycling can make water less aggressive to piping; however, excessive levels of minerals can cause scaling problems. As the cycles of concentration increase, the water may not be able to hold the minerals in solution. When the solubility of these minerals has been exceeded, they can precipitate out as mineral solids and cause fouling and heat exchange problems in the cooling tower or the heat exchangers.
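The balances above can be combined into a small calculator. This is a sketch, not a design tool: the circulating flow, cooling range, target cycles and drift fraction used below are illustrative values only.

```python
H_V = 2260.0  # kJ/kg, latent heat of vaporization of water
C_P = 4.186   # kJ/(kg·K), specific heat of water

def tower_water_balance(C, dT, cycles, drift_frac=0.00005):
    """Given circulating flow C (m³/h), cooling range dT (°C), target
    cycles of concentration, and a drift fraction, return evaporation E,
    windage W, draw-off D and make-up M (all in m³/h)."""
    E = C * dT * C_P / H_V   # simplified heat balance
    W = drift_frac * C       # windage/drift loss (0.005% of C: with eliminators)
    D = E / (cycles - 1) - W # from Cycles = M / (D + W) and M = E + D + W
    M = E + D + W            # water balance closes the system
    return E, W, D, M

E, W, D, M = tower_water_balance(C=10_000, dT=10, cycles=5)

# Raising cycles from 3 to 7 cuts make-up water by roughly 20%, which is
# why water conservation efforts focus on cycles of concentration.
M3 = tower_water_balance(10_000, 10, 3)[3]
M7 = tower_water_balance(10_000, 10, 7)[3]
print(f"E={E:.0f}, D={D:.1f}, M={M:.0f} m³/h; saving 3->7 cycles: {1 - M7/M3:.0%}")
```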
The temperatures of the recirculating water, piping and heat exchange surfaces determine if and where minerals will precipitate from the recirculating water. Often a professional water treatment consultant will evaluate the make-up water and the operating conditions of the cooling tower and recommend an appropriate range for the cycles of concentration. The use of water treatment chemicals, pretreatment such as water softening, pH adjustment, and other techniques can affect the acceptable range of cycles of concentration. Concentration cycles in the majority of cooling towers usually range from 3 to 7. In the United States, many water supplies use well water which has significant levels of dissolved solids. On the other hand, one of the largest water supplies, for New York City, has a surface rainwater source quite low in minerals; thus cooling towers in that city are often allowed to concentrate to 7 or more cycles of concentration. Since higher cycles of concentration represent less make-up water, water conservation efforts may focus on increasing cycles of concentration. Highly treated recycled water may be an effective means of reducing cooling tower consumption of potable water, in regions where potable water is scarce.

Maintenance

Clean visible dirt and debris from the cold water basin and from surfaces with any visible biofilm (i.e., slime). Disinfectant and other chemical levels in cooling towers and hot tubs should be continuously maintained and regularly monitored. Regular checks of water quality (specifically the aerobic bacteria levels) using dipslides should be taken, as the presence of other organisms can support Legionella by producing the organic nutrients that it needs to thrive.
Water treatment

Besides treating the circulating cooling water in large industrial cooling tower systems to minimize scaling and fouling, the water should be filtered to remove particulates, and also be dosed with biocides and algaecides to prevent growths that could interfere with the continuous flow of the water. Under certain conditions, a biofilm of micro-organisms such as bacteria, fungi and algae can grow very rapidly in the cooling water, and can reduce the heat transfer efficiency of the cooling tower. Biofilm can be reduced or prevented by using sodium chlorite or other chlorine-based chemicals. A normal industrial practice is to use two biocides, such as oxidizing and non-oxidizing types, to complement each other's strengths and weaknesses, and to ensure a broader spectrum of attack. In most cases, a continual low level of oxidizing biocide is used, alternating with periodic shock doses of non-oxidizing biocides.

Algaecides and biocides

Algaecides, as their name suggests, are intended to kill algae and other related plant-like microbes in the water. Biocides can reduce other living matter that remains, keeping the system clean and the water usage in a cooling tower efficient. One of the most common biocide choices is bromine.

Scale inhibitors

Among the issues that cause the most damage and strain to a cooling tower's systems is scaling. When an unwanted material or contaminant in the water builds up in a certain area, it can create deposits that grow over time. This can cause issues ranging from the narrowing of pipes to total blockages and equipment failures. The water consumption of a cooling tower comes from drift, bleed-off and evaporation loss. The water that is immediately replenished into the cooling tower to compensate for these losses is called make-up water. The function of make-up water is to keep machinery and equipment running safely and stably.
Legionnaires' disease

Another very important reason for using biocides in cooling towers is to prevent the growth of Legionella, including species that cause legionellosis or Legionnaires' disease, most notably L. pneumophila, or Mycobacterium avium. The various Legionella species are the cause of Legionnaires' disease in humans, and transmission is via exposure to aerosols: the inhalation of mist droplets containing the bacteria. Common sources of Legionella include cooling towers used in open recirculating evaporative cooling water systems, domestic hot water systems, fountains, and similar disseminators that tap into a public water supply. Natural sources include freshwater ponds and creeks. French researchers found that Legionella bacteria travelled up to 6 kilometres (3.7 mi) through the air from a large contaminated cooling tower at a petrochemical plant in Pas-de-Calais, France. That outbreak killed 21 of the 86 people who had a laboratory-confirmed infection. Drift (or windage) is the term for water droplets of the process flow allowed to escape in the cooling tower discharge. Drift eliminators are used in order to hold drift rates typically to 0.001–0.005% of the circulating flow rate. A typical drift eliminator provides multiple directional changes of airflow to prevent the escape of water droplets. A well-designed and well-fitted drift eliminator can greatly reduce water loss and the potential for Legionella or water treatment chemical exposure. Also, about every six months, inspect the condition of the drift eliminators, making sure there are no gaps that would allow the free flow of dirt. The US Centers for Disease Control and Prevention (CDC) does not recommend that health-care facilities regularly test for the Legionella pneumophila bacteria. Scheduled microbiologic monitoring for Legionella remains controversial because its presence is not necessarily evidence of a potential for causing disease.
The CDC recommends aggressive disinfection measures for cleaning and maintaining devices known to transmit Legionella, but does not recommend regularly-scheduled microbiologic assays for the bacteria. However, scheduled monitoring of potable water within a hospital might be considered in certain settings where persons are highly susceptible to illness and mortality from Legionella infection (e.g. hematopoietic stem cell transplantation units, or solid organ transplant units). Also, after an outbreak of legionellosis, health officials agree that monitoring is necessary to identify the source and to evaluate the efficacy of biocides or other prevention measures. Studies have found Legionella in 40% to 60% of cooling towers.

Terminology

Windage or drift: Water droplets that are carried out of the cooling tower with the exhaust air. Drift droplets have the same concentration of impurities as the water entering the tower. The drift rate is typically reduced by employing baffle-like devices, called drift eliminators, through which the air must travel after leaving the fill and spray zones of the tower. Drift can also be reduced by using warmer entering cooling tower temperatures.

Blow-out: Water droplets blown out of the cooling tower by wind, generally at the air inlet openings. Water may also be lost, in the absence of wind, through splashing or misting. Devices such as wind screens, louvers, splash deflectors and water diverters are used to limit these losses.

Plume: The stream of saturated exhaust air leaving the cooling tower. The plume is visible when the water vapor it contains condenses in contact with cooler ambient air, as the saturated air in one's breath fogs on a cold day. Under certain conditions, a cooling tower plume may present fogging or icing hazards to its surroundings. Note that the water evaporated in the cooling process is "pure" water, in contrast to the very small percentage of drift droplets or water blown out of the air inlets.
Draw-off or blow-down: The portion of the circulating water flow that is removed (usually discharged to a drain) in order to maintain the amount of total dissolved solids (TDS) and other impurities at an acceptably low level. Higher TDS concentration in solution may result from greater cooling tower efficiency. However, the higher the TDS concentration, the greater the risk of scale, biological growth, and corrosion. The amount of blow-down is primarily regulated by measuring the electrical conductivity of the circulating water. Biological growth, scaling, and corrosion can be prevented by chemicals (respectively, biocide, sulfuric acid, corrosion inhibitor). On the other hand, the only practical way to decrease the electrical conductivity is by increasing the amount of blow-down discharge and subsequently increasing the amount of clean make-up water. Zero bleed for cooling towers, also called zero blow-down for cooling towers, is a process for significantly reducing the need for bleeding water with residual solids from the system by enabling the water to hold more solids in solution.

Make-up: The water that must be added to the circulating water system in order to compensate for water losses such as evaporation, drift loss, blow-out, blow-down, etc.

Noise: Sound energy emitted by a cooling tower and heard (recorded) at a given distance and direction. The sound is generated by the impact of falling water, by the movement of air by fans, the fan blades moving in the structure, vibration of the structure, and the motors, gearboxes or drive belts.

Approach: The approach is the difference in temperature between the cooled-water temperature and the entering-air wet-bulb temperature (twb). Since cooling towers are based on the principles of evaporative cooling, the maximum cooling tower efficiency depends on the wet-bulb temperature of the air.
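Approach and range (the latter is defined just below) together give a common figure of merit for tower performance. The effectiveness formula in this sketch, range divided by range plus approach, is a standard rule of thumb rather than a definition from this section, and the example temperatures are illustrative:

```python
def cooling_range(hot_in: float, cold_out: float) -> float:
    """Range: warm water inlet temperature minus cooled water exit temperature."""
    return hot_in - cold_out

def approach(cold_out: float, wet_bulb: float) -> float:
    """Approach: cooled water temperature minus entering-air wet-bulb temperature."""
    return cold_out - wet_bulb

def effectiveness(hot_in: float, cold_out: float, wet_bulb: float) -> float:
    """Fraction of the theoretical maximum cooling actually achieved:
    range / (range + approach). The wet bulb is the theoretical limit."""
    rng = cooling_range(hot_in, cold_out)
    return rng / (rng + approach(cold_out, wet_bulb))

# Example: 40 °C in, 32 °C out, 28 °C wet bulb -> range 8 K, approach 4 K
print(f"{effectiveness(40, 32, 28):.0%}")  # 67%
```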
The wet-bulb temperature is a type of temperature measurement that reflects the physical properties of a system with a mixture of a gas and a vapor, usually air and water vapor.

Range: The range is the temperature difference between the warm water inlet and the cooled water exit.

Fill: Inside the tower, fills are added to increase the contact surface as well as the contact time between air and water, to provide better heat transfer. The efficiency of the tower depends on the selection and amount of fill. There are two types of fills that may be used:

Film type fill (causes water to spread into a thin film)
Splash type fill (breaks up the falling stream of water and interrupts its vertical progress)

Full-flow filtration: Full-flow filtration continuously strains particulates out of the entire system flow. For example, in a 100-ton system, the flow rate would be roughly 300 gal/min. A filter would be selected to accommodate the entire 300 gal/min flow rate. In this case, the filter typically is installed after the cooling tower on the discharge side of the pump. While this is the ideal method of filtration, for higher flow systems it may be cost-prohibitive.

Side-stream filtration: Side-stream filtration, although popular and effective, does not provide complete protection. With side-stream filtration, a portion of the water is filtered continuously. This method works on the principle that continuous particle removal will keep the system clean. Manufacturers typically package side-stream filters on a skid, complete with a pump and controls. For high flow systems, this method is cost-effective. Properly sizing a side-stream filtration system is critical to obtain satisfactory filter performance, but there is some debate over how to properly size the side-stream system. Many engineers size the system to continuously filter the cooling tower basin water at a rate equivalent to 10% of the total circulation flow rate.
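The sizing rules above (roughly 3 gal/min of circulation per ton, full-flow filters sized for the whole stream, side-stream filters for about 10% of it) reduce to a couple of lines; the 10% figure is the common engineering practice mentioned in the text:

```python
GPM_PER_TON = 3  # rule of thumb: ~3 gal/min of circulating flow per ton of cooling

def circulation_gpm(tons: float) -> float:
    """Total circulating flow, and therefore the full-flow filter size."""
    return GPM_PER_TON * tons

def side_stream_gpm(tons: float, fraction: float = 0.10) -> float:
    """Side-stream filter sized to continuously filter ~10% of circulation."""
    return fraction * circulation_gpm(tons)

print(circulation_gpm(100))  # 300 gal/min: full-flow filter for a 100-ton system
print(side_stream_gpm(400))  # 120 gal/min side-stream for a 400-ton system
```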
For example, if the total flow of a system is 1,200 gal/min (a 400-ton system), a 120 gal/min side-stream system is specified.

Cycle of concentration: Maximum allowed multiplier for the amount of miscellaneous substances in circulating water compared to the amount of those substances in make-up water.

Treated timber: A structural material for cooling towers which was largely abandoned in the early 2000s. It is still used occasionally due to its low initial costs, in spite of its short life expectancy. The life of treated timber varies widely, depending on the operating conditions of the tower, such as frequency of shutdowns, treatment of the circulating water, etc. Under proper working conditions, the estimated life of treated timber structural members is about 10 years.

Leaching: The loss of wood preservative chemicals by the washing action of the water flowing through a wood-structure cooling tower.

Pultruded FRP: A common structural material for smaller cooling towers, fibre-reinforced plastic (FRP) is known for its high corrosion-resistance capabilities. Pultruded FRP is produced using pultrusion technology, and has become the most common structural material for small cooling towers. It offers lower costs and requires less maintenance compared to reinforced concrete, which is still in use for large structures.

Fog production

Under certain ambient conditions, plumes of water vapor can be seen rising out of the discharge from a cooling tower, and can be mistaken for smoke from a fire. If the outdoor air is at or near saturation, and the tower adds more water to the air, saturated air with liquid water droplets can be discharged, which is seen as fog. This phenomenon typically occurs on cool, humid days, but is rare in many climates. Fog and clouds associated with cooling towers can be described as homogenitus, as with other clouds of man-made origin, such as contrails and ship tracks.
This phenomenon can be prevented by decreasing the relative humidity of the saturated discharge air. For that purpose, in hybrid towers, the saturated discharge air is mixed with heated air of low relative humidity. Some air enters the tower above the drift eliminator level, passing through heat exchangers. The relative humidity of this dry air drops further as it is heated while entering the tower. The discharged mixture has a relatively low relative humidity, and the fog is invisible.

Cloud formation

Issues related to the applied meteorology of cooling towers, including the assessment of the impact of cooling towers on cloud enhancement, were considered in a series of models and experiments. One of the results by Haman's group indicated significant dynamic influences of the condensation trails on the surrounding atmosphere, manifested in temperature and humidity disturbances. The mechanism of these influences seemed to be associated either with the airflow over the trail as an obstacle or with vertical waves generated by the trail, often at a considerable altitude above it.

Salt emission pollution

When wet cooling towers with seawater make-up are installed in various industries located in or near coastal areas, the drift of fine droplets emitted from the cooling towers contains nearly 6% sodium chloride, which deposits on the nearby land areas. This deposition of sodium salts on the nearby agricultural and vegetated lands can convert them into sodic saline or sodic alkaline soils, depending on the nature of the soil, and enhance the sodicity of ground and surface water. The salt deposition problem from such cooling towers is aggravated where pollution control standards are not imposed, or not implemented, to minimize the drift emissions from wet cooling towers using seawater make-up. Respirable suspended particulate matter, of less than 10 micrometers (μm) in size, can be present in the drift from cooling towers.
Larger particles above 10 μm in size are generally filtered out in the nose and throat via cilia and mucus, but particulate matter smaller than 10 μm, referred to as PM10, can settle in the bronchi and lungs and cause health problems. Similarly, particles smaller than 2.5 μm (PM2.5) tend to penetrate into the gas exchange regions of the lung, and very small particles (less than 100 nanometers) may pass through the lungs to affect other organs. Though the total particulate emissions from wet cooling towers with fresh water make-up are much lower, they contain more PM10 and PM2.5 than the total emissions from wet cooling towers with sea water make-up. This is due to the lower salt content in fresh water drift (below 2,000 ppm) compared to the salt content of sea water drift (60,000 ppm).

Use as a flue-gas stack

At some modern power stations equipped with flue gas purification, such as the Großkrotzenburg Power Station and the Rostock Power Station, the cooling tower is also used as a flue-gas stack (industrial chimney), thus saving the cost of a separate chimney structure. At plants without flue gas purification, problems with corrosion may occur, due to reactions of raw flue gas with water to form acids. Sometimes, natural draft cooling towers are constructed with structural steel in place of reinforced concrete (RCC): when the construction time of a natural draft cooling tower would exceed the construction time of the rest of the plant, when the local soil is too weak to bear the heavy weight of RCC cooling towers, or when cement prices at a site are high enough to make a cheaper structural steel tower preferable.

Operation in freezing weather

Some cooling towers (such as smaller building air conditioning systems) are shut down seasonally, drained, and winterized to prevent freeze damage. During the winter, other sites continuously operate cooling towers with water leaving the tower.
Basin heaters, tower draindown, and other freeze protection methods are often employed in cold climates. Operational cooling towers with malfunctions can freeze during very cold weather. Typically, freezing starts at the corners of a cooling tower with a reduced or absent heat load. Severe freezing conditions can create growing volumes of ice, resulting in increased structural loads which can cause structural damage or collapse. To prevent freezing, the following procedures are used:

The use of water modulating by-pass systems is not recommended during freezing weather. In such situations, the control flexibility of variable speed motors, two-speed motors, and/or two-speed motors in multi-cell towers should be considered a requirement.
Do not operate the tower unattended. Remote sensors and alarms may be installed to monitor tower conditions.
Do not operate the tower without a heat load. Basin heaters may be used to keep the water in the tower pan at an above-freezing temperature. Heat trace ("heating tape") is a resistive heating element that is installed along water pipes to prevent freezing in cold climates.
Maintain the design water flow rate over the tower fill.
Manipulate or reduce airflow to maintain the water temperature above freezing point.

Fire hazard

Cooling towers constructed in whole or in part of combustible materials can support internal fire propagation. Such fires can become very intense, due to the high surface-to-volume ratio of the towers, and fires can be further intensified by natural convection or fan-assisted draft. The resulting damage can be sufficiently severe to require the replacement of the entire cell or tower structure. For this reason, some codes and standards recommend that combustible cooling towers be provided with an automatic fire sprinkler system.
Fires can propagate internally within the tower structure when the cell is not in operation (such as for maintenance or construction), and even while the tower is in operation, especially in towers of the induced-draft type, because of the existence of relatively dry areas within them. Structural stability Being very large structures, cooling towers are susceptible to wind damage, and several spectacular failures have occurred in the past. Ferrybridge power station was the site of a major structural failure on 1 November 1965, when three of the cooling towers collapsed owing to vibrations in high winds. Although the structures had been built to withstand higher wind speeds, the shape of the cooling towers caused westerly winds to be funneled into the towers themselves, creating a vortex. Three out of the original eight cooling towers were destroyed, and the remaining five were severely damaged. The towers were later rebuilt and all eight cooling towers were strengthened to tolerate adverse weather conditions. Building codes were changed to include improved structural support, and wind tunnel tests were introduced to check tower structures and configuration.
Technology
Electricity generation and distribution
751489
https://en.wikipedia.org/wiki/Summit
Summit
A summit is a point on a surface that is higher in elevation than all points immediately adjacent to it. The topographic terms acme, apex, peak (mountain peak), and zenith are synonymous. The term mountain top is generally used only for a mountain peak that is located at some distance from the nearest point of higher elevation. For example, a massive rock next to the main summit of a mountain is not considered a summit. Summits near a higher peak, with some prominence or isolation, but not reaching a certain cutoff value for these quantities, are often considered subsummits (or subpeaks) of the higher peak, and are considered part of the same mountain. A pyramidal peak is an exaggerated form produced by ice erosion of a mountain top. For summits that are permanently covered in significant layers of ice, the height may be measured by the highest point of rock (rock height) or the highest point of permanent solid ice (snow height). The highest summit in the world is Mount Everest. The first official ascent was made by Tenzing Norgay and Sir Edmund Hillary, who reached the mountain's peak in 1953. Whether a highest point is classified as a summit, a subpeak or a separate mountain is subjective. The International Climbing and Mountaineering Federation classifies a high point near a 4,000 m peak as a mountain summit if its prominence reaches a certain cutoff value; otherwise, it is a subpeak. Summit may also refer to the highest point along a line, trail, or route. In many parts of the Western United States, the term summit is used for the highest point along a road, highway, or railroad, more commonly referred to as a pass. For example, the highest point along Interstate 80 in California is referred to as Donner Summit and the highest point on Interstate 5 is Siskiyou Mountain Summit. This can lead to confusion as to whether a labeled "summit" is a pass or a peak.
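The summit-versus-subpeak rule above is a simple threshold on topographic prominence, which can be sketched as follows (Python; the cutoff value is a placeholder, since the UIAA's actual figure is elided in the text):

```python
def classify_high_point(prominence_m: float, cutoff_m: float = 30.0) -> str:
    """Classify a high point near a major peak as an independent summit or
    a subpeak by its topographic prominence. The default cutoff here is an
    assumed placeholder; substitute the federation's actual figure."""
    return "mountain summit" if prominence_m >= cutoff_m else "subpeak"
```

A point with 100 m of prominence would classify as a mountain summit under this sketch, while one with 5 m would be a subpeak.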
Physical sciences
Montane landforms
Earth science
751771
https://en.wikipedia.org/wiki/Distribution%20board
Distribution board
A distribution board (also known as panelboard, circuit breaker panel, breaker panel, electric panel, fuse box or DB box) is a component of an electricity supply system that divides an electrical power feed into subsidiary circuits while providing a protective fuse or circuit breaker for each circuit in a common enclosure. Normally, a main switch, and in recent boards, one or more residual-current devices (RCDs) or residual current breakers with overcurrent protection (RCBOs) are also incorporated. In the United Kingdom, a distribution board designed for domestic installations is known as a consumer unit. North America North American distribution boards are generally housed in sheet metal enclosures, with the circuit breakers positioned in two columns operable from the front. Some panelboards are provided with a door covering the breaker switch handles, but all are constructed with a dead front; that is to say the front of the enclosure (whether it has a door or not) prevents the operator of the circuit breakers from contacting live electrical parts within. Busbars carry the current from incoming line (hot) conductors to the breakers, which are secured to the bus with either a bolt-on connection (using a threaded screw) or a plug-in connection using a retaining clip. Panelboards are more common in commercial and industrial applications and employ bolt-on breakers. Residential and light commercial panels are generally referred to as load centers and employ plug-in breakers. The neutral conductors are secured to a neutral bus using screw terminals. The branch circuit bonding conductors are secured to a terminal block attached directly to the panelboard enclosure, which is itself grounded. During servicing of the distribution board, when the cover has been removed and the cables are visible, American panelboards commonly have some live parts exposed. 
In Canadian service entrance panelboards the main switch or circuit breaker is located in a service box, a section of the enclosure separated from the rest of the panelboard, so that when the main switch or breaker is switched off no live parts are exposed when servicing the branch circuits. Breaker arrangement Breakers are usually arranged in two columns. In a U.S.-style board, breaker positions are numbered left-to-right, along each row from top to bottom. This numbering system is universal among the numerous competing manufacturers of breaker panels. Each row is fed from a different line (A, B, and C), to allow 2- or 3-pole common-trip breakers to have one pole on each phase. In North America, it is common to wire large permanently installed equipment line-to-line. This takes two slots in the panel (two-pole) and gives a voltage of 240 V for split-phase electric power, or 208 V for three-phase power. Interior The photograph to the right shows the interior of a residential service panelboard manufactured by General Electric. The three service conductors—two 'hot' lines and one neutral—can be seen coming in at the top. The neutral wire is connected to the neutral busbar to the left with all the white wires, and the two hot wires are attached to the main breaker. Below the main breaker are the two bus bars carrying the current between the main breaker and the two columns of branch circuit breakers, with each respective circuit's red and black hot wires leading off. Three wires (hot black, neutral white, and bare ground) can be seen exiting the left side of the enclosure running directly to a NEMA 5-15 electrical receptacle with a power cord plugged into it. The incoming bare, stranded ground wire can be seen near the bottom of the neutral bus bar. The photograph on the left shows a dual panel configuration: a main panel on the right (with front cover in place) and a subpanel on the left (with cover removed).
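The numbering and phase interleaving described above, along with the two-pole voltages, can be sketched in a few lines (Python; function names are illustrative):

```python
import math

def slot_line(position: int, lines=("A", "B", "C")) -> str:
    """Return the supply line feeding a breaker position. Positions are
    numbered left-to-right along each row, two per row, and successive
    rows cycle through the incoming lines."""
    row = (position - 1) // 2          # two breaker positions per row
    return lines[row % len(lines)]

def two_pole_voltage(line_to_neutral: float = 120.0, three_phase: bool = False) -> int:
    """Voltage across a two-pole (line-to-line) breaker: twice the
    line-to-neutral voltage on split-phase service, sqrt(3) times it on a
    three-phase wye service."""
    factor = math.sqrt(3) if three_phase else 2.0
    return round(line_to_neutral * factor)
```

With 120 V to neutral, this yields 240 V for split-phase and 208 V for three-phase service; a two-pole breaker spanning adjacent rows lands one pole on each of two different lines, as the text describes.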
The subpanel is fed by two large hot wires and a neutral wire running through the angled conduit near the top of the panels. This configuration appears to display two violations of the current U.S. National Electrical Code: the main panel does not have a grounding conductor (here it is fed through the subpanel instead) and the subpanel neutral bar is bonded to the ground bar (these should be separate bars after the first service disconnect, which in this case is the main panel). Fuse boxes A common design of fuse box that was featured in homes built from 1940 through 1965 was the 60-amp fuse box that included four plug fuses (i.e. the Edison base) for branch circuits and one or more fuse blocks containing cartridge fuses for purposes such as major appliance circuits. After 1965, the more substantial 100 A panel with three-wire (230 V) service became common; a fuse box could have fuse blocks for the main shut-off and an electric range circuit plus a number of plug fuses (Edison base or Type S) for individual circuits. United Kingdom This picture shows the interior of a typical distribution panel in the United Kingdom. The three incoming phase wires connect to the busbars via a main switch in the centre of the panel. On each side of the panel are two busbars, for neutral and earth. The incoming neutral connects to the lower busbar on the right side of the panel, which is in turn connected to the neutral busbar at the top left. The incoming earth wire connects to the lower busbar on the left side of the panel, which is in turn connected to the earth busbar at the top right. The cover has been removed from the lower-right neutral bar; the neutral bar on the left side has its cover in place. Down the left side of the phase busbars are two two-pole RCBOs and two single-pole breakers, one unused. The two-pole RCBOs in the picture are not connected across two phases, but have supply-side neutral connections exiting behind the phase busbars.
Down the right side of the busbars are a single-pole breaker, a two-pole RCBO and a three-pole breaker. The illustrated panel includes a great deal of unused space; it is likely that the manufacturer produces 18- and 24-position versions of this panel using the same chassis. Larger commercial, public, and industrial installations generally use three-phase supplies, with distribution boards which have twin vertical rows of breakers. Larger installations will often use subsidiary distribution boards. In both cases, modern boards handling supplies up to around 100 A (CUs) or 200 A (distribution boards) use circuit breakers and RCDs on DIN rail mountings. The main distribution board in an installation will also normally provide a main switch (known as an incomer) which switches the phase and neutral lines for the whole supply. (n.b., an incomer may be referred to, or sold as, an isolator, but this is problematic, as it will not necessarily be used as an isolator in the strict sense.) For each phase, power is fed along a busbar. In split-phase panels, separate busbars are fed directly from the incomer, which allows RCDs to be used to protect groups of circuits. Alternatively RCBOs may be used to provide both overcurrent and residual-current protection to single circuits. Other devices, such as transformers (e.g. for bell circuits) and contactors (relays; e.g. for large motor or heating loads) may also be used. New British distribution boards generally have the live parts enclosed to IP2X, even when the cover has been removed for servicing. Consumer units In the United Kingdom, BS 7671 defines a consumer unit as "A particular type of distribution board comprising a type tested coordinated assembly for the control and distribution of electrical energy, principally in domestic premises..." 
These installations usually have single-phase supplies at 230 V (nominal standard); historically, they were known as fuse boxes, as older consumer units used fuses until the advent of miniature circuit breakers (MCBs). A normal new domestic CU used as a main panel might have from 6 to 24 ways for devices (some of which might occupy two ways), and will be split into two or more sections (e.g. a non-RCD section for alarms etc., an RCD-protected section for socket outlets, and an RCD-protected section for lighting and other built-in appliances). Secondary CUs used for outbuildings usually have 1 to 4 ways plus an RCD. Recent (pre-17th edition wiring regulations) CUs would not normally have RCD-protected sections for anything other than socket outlets, though some older CUs featured RCD incomers. Before 1990, RCDs (and split busbars) were not standard in CUs. Fuse boxes normally use cartridge or rewirable fuses with no other protective device, and basic 4-way boxes are very common. Some older boxes are made of brown-black bakelite, sometimes with a wooden base. Although their design is historic, these were standard equipment for new installs as recently as the 1980s, so they are very common. Fuseholders in these boxes may not provide protection from accidental contact with live terminals. Examples In the UK, consumer units (CUs) have evolved from a basic main switch and rewireable fuses, which afforded only overload and short-circuit protection, into sophisticated control units housing many safety features that can protect against different types of electrical fault. The choice of circuit protective device will depend upon the type of electrical circuit it is protecting and what level of protection needs to be afforded. BS 7671:2018 Requirements for Electrical Installations, also referred to as the IET Wiring Regulations, is regularly updated; its latest edition at the time of writing is Amendment 2:2022, released on 28 March 2022.
Typical configurations of CU: Main switch consumer unit - Consists of a main switch that disconnects power to all circuits simultaneously, with one busbar linking all protective devices to a common live source and one neutral conductor or link bar connecting to a common neutral rail. There will be a separate earth rail to allow the main earth conductor to be connected. This example offers the highest degree of circuit separation, as all circuits are independent, but it may not be suitable as a standalone solution if each circuit has only an overload and short-circuit protection MCB; additional protection against earth leakage faults (RCBOs) and arc faults (AFDDs) may be required by BS 7671, making this an expensive solution. Main switch and dual RCD consumer unit - Consists of a main switch that disconnects power to all circuits simultaneously and two 30 mA RCDs, each with its own live busbar and each protecting a separate bank of circuits (typically half-and-half, though other combinations are available) from earth leakage faults. Offers a cost-effective solution by using a combination of cheaper MCBs and only two, more expensive, RCDs. High integrity consumer unit - Consists of a main switch that disconnects power to all circuits simultaneously and three separate live busbars, one linked directly to the main switch and two others on each main RCD. The live busbar on the main switch allows the use of MCBs only where more sensitive devices such as RCBOs and AFDDs would not be appropriate, or the independent use of RCBOs, and may be limited to only one or two ways. The remainder of the circuits are divided in the same way as in a dual RCD CU. This type of consumer unit offers improved circuit separation over a dual RCD CU whilst allowing for more flexibility. RCD incomer consumer unit - This is the least convenient solution in terms of circuit separation because the main switch is an RCD.
Less common than the other types, it is no longer considered a standalone solution because power to all circuits is lost in the event of an earth fault causing the main switch RCD to activate. Modern consumer units are now required to be metal (non-combustible) and usually use DIN rail mounted devices. The DIN rail is standardized but the busbar arrangements are not. Mixing devices from different brands is against the manufacturers' requirements and should generally be avoided. The choice of consumer unit will reflect several factors such as the size and layout of the dwelling, number of floors, outbuildings, the expected loads (lighting, sockets, ovens, showers, immersion heaters, car-chargers etc.), and how much protection is required for each circuit. The box pictured top-right is a "Wylex standard" fitted with rewirable fuses. These boxes can also be fitted with cartridge fuses or miniature circuit breakers (MCBs). This type of consumer unit was very popular in Britain until 2001, when wiring regulations mandated residual-current device (RCD) protection for sockets that could "reasonably be expected to" supply outdoor equipment (BS 7671:2001). There were a number of similar designs from other manufacturers but the Wylex ones are by far the most commonly encountered and the only ones for which fuseholders/breakers are still commonly available. Some manufacturers have added innovative features, such as CPN Cudis, who have added an LED strip light to their 'Lumo' consumer unit to enhance visibility in dark locations such as under staircases. RCD protection types Since the introduction of the 17th Edition IET Wiring Regulations (BS 7671:2008, incorporating amendment no. 1: 2011), consumer units in the UK must provide RCD protection to all cables embedded in walls, excepting high integrity circuits such as those for burglar alarms or smoke alarms. Consumer units have different methods of protecting circuits.
For example, a dual split-load consumer unit can be arranged in a two-storey dwelling as follows: RCD 1: Upstairs Lights, Downstairs Ring Final, Garage Sockets, Cooker; RCD 2: Downstairs Lights, Upstairs Sockets, Shower, Heating. By arranging the circuits like this, power will still be present on one of the floors if only one RCD trips out. Moreover, having sockets and lights on alternate RCDs means that if, for example, a faulty kettle downstairs trips that RCD, the kitchen lights will still be available, avoiding the hazard of investigating the fault in darkness. Another way to protect circuits under the 17th Edition IET Wiring Regulations is by fitting residual current breakers with overcurrent protection (RCBOs) to every circuit, and although this is more costly than the RCD+MCB option, it means any fault condition on a circuit trips only that circuit's RCBO, so the search for the fault is narrowed down from the start. When an electrician must be called out, this localised fault can be resolved faster (and therefore cheaper) in contrast with the RCD+MCB arrangement, which only indicates a fault somewhere within that RCD's set of circuits. Some older systems, such as those using MK or old MEM consumer units, had one fuse per spur, for instance: Fuse 1: Upstairs Lights; Fuse 2: Upstairs Sockets; Fuse 3: Downstairs Lights; Fuse 4: Downstairs Sockets; and so on. Legacy fuseboxes A small number of pre-1950 fuseboxes are still in service. These should be treated with caution because exposed live parts are common on these boxes. The installations they supply will not meet modern standards for electrical safety. Another characteristic of very old installations is that there may be two fuses for each circuit; one on the live and one on the neutral. In rare instances, old ring circuits may be encountered with no fewer than four 15 A fuses per ring, one on each of L and N, and this duplicated for each of the two feeds for the ring.
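The benefit of the alternating dual split-load arrangement described above can be checked with a small sketch (Python; the circuit names mirror the hypothetical example, not any standard schedule):

```python
# Hypothetical dual split-load allocation, mirroring the example in the text.
ALLOCATION = {
    "RCD 1": ["Upstairs Lights", "Downstairs Ring Final", "Garage Sockets", "Cooker"],
    "RCD 2": ["Downstairs Lights", "Upstairs Sockets", "Shower", "Heating"],
}

def live_circuits(tripped: str) -> list:
    """Circuits still energised after one RCD trips."""
    return [circuit
            for rcd, circuits in ALLOCATION.items() if rcd != tripped
            for circuit in circuits]

# A downstairs socket fault trips RCD 1, but the downstairs lights
# (on RCD 2) stay on for fault-finding.
```

With this split, tripping either RCD always leaves the opposite floor's lighting energised, which is the point of alternating lights and sockets across the two banks.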
Manufacturer differences Despite the adoption of a standard DIN rail for mounting and a standard cut-out shape for seemingly interchangeable breakers, the positions of busbar connections and other features are not standardized. Each manufacturer has one or more "systems", or kinds of breaker panels, each of which is only fully compatible with breakers of the same system. These assemblies have been tested and approved for use by a recognized authority. Replacing or adding equipment which "just happens to fit" can result in unexpected or even dangerous conditions. Such installations should not be done without first consulting knowledgeable sources, including manufacturers' datasheets. Location and designation For reasons of aesthetics and security, domestic circuit breaker panels and consumer units are normally located in out-of-the-way closets, attics, garages, or basements, but sometimes they are also featured as part of the aesthetic elements of a building (as an art installation, for example) or where they can be easily accessible. However, current U.S. building codes prohibit installation of a panel in a bathroom (or similar room), in closets intended for clothing, or where there is insufficient space for an electrician to gain access to the panel. Specific situations, such as an installation outdoors, in a hazardous environment, or in other out-of-the-ordinary locations might require specialized equipment and more stringent installation practices. Distribution boards may be designated for three-phase or single-phase and normal power or emergency power, or designated by use, such as distribution panels for supplying other panels, lighting panels for lights, power panels for equipment and receptacles, and special uses. Panels are located throughout the building in electric closets, each serving a section of the building. Theatre lighting In a theatre, a specialty panel known as a dimmer rack is used to feed stage lighting instruments. A U.S.
style dimmer rack has a 208Y/120 volt 3-phase feed. Instead of just circuit breakers, the rack has a solid state electronic dimmer with its own circuit breaker for each stage circuit. This is known as a dimmer-per-circuit arrangement. The dimmers are equally divided across the three incoming phases. In a 96 dimmer rack, there are 32 dimmers on phase A, 32 dimmers on phase B, and 32 on phase C to spread out the lighting load as equally as possible. In addition to the power feed from the supply transformer in the building, a control cable from the lighting desk carries information to the dimmers in a control protocol such as DMX-512. The information includes lighting level information for each channel, by which it controls which dimmer circuits come up and go out during the lighting changes of the show (light cues), and over what fade time. Distribution boards may be surface-mounted or flush. The former arrangement provides easier alteration or addition to wiring at a later date, but the latter arrangement might be neater, particularly for a residential application. The other problem with recessing a distribution board into a wall is that if the wall is solid, a lot of brick or block might need to be removed—generally for this reason, recessed boards would only be installed on new-build projects when the required space can be built into the wall.
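The even spread of dimmers across the three incoming phases described above can be sketched as follows (Python; assigning dimmer i to phase i mod 3 is one simple scheme, and actual racks may group dimmers differently):

```python
from collections import Counter

def dimmer_phases(n_dimmers: int = 96, phases=("A", "B", "C")) -> Counter:
    """Count how many dimmers land on each phase when dimmer i is
    assigned to phase i mod 3, spreading the lighting load evenly."""
    return Counter(phases[i % len(phases)] for i in range(n_dimmers))
```

For a 96-dimmer rack this yields 32 dimmers on each of phases A, B, and C, matching the balance described in the text.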
Technology
Electricity transmission and distribution
752048
https://en.wikipedia.org/wiki/Zinc%20chloride
Zinc chloride
Zinc chloride is an inorganic chemical compound with the formula ZnCl2·nH2O, with n ranging from 0 to 4.5, forming hydrates. Anhydrous zinc chloride and its hydrates are colorless or white crystalline solids, and are highly soluble in water. Five hydrates of zinc chloride are known, as well as four polymorphs of anhydrous zinc chloride. All forms of zinc chloride are deliquescent. They can usually be produced by the reaction of zinc or its compounds with some form of hydrogen chloride. The anhydrous compound is a Lewis acid, readily forming complexes with a variety of Lewis bases. Zinc chloride finds wide application in textile processing, metallurgical fluxes, chemical synthesis of organic compounds, such as benzaldehyde, and processes to produce other compounds of zinc. History Zinc chloride has long been known, but its currently practiced industrial applications all evolved in the latter half of the 20th century. An amorphous cement formed from aqueous zinc chloride and zinc oxide was first investigated in 1855 by Stanislas Sorel. Sorel later went on to investigate the related magnesium oxychloride cement, which bears his name. Dilute aqueous zinc chloride was used as a disinfectant under the name "Burnett's Disinfecting Fluid". From 1839 Sir William Burnett promoted its use as a disinfectant as well as a wood preservative. The Royal Navy conducted trials into its use as a disinfectant in the late 1840s, including during the cholera epidemic of 1849; at the same time, experiments were conducted into its preservative properties as applicable to the shipbuilding and railway industries. Burnett had some commercial success with his eponymous fluid. Following his death, however, its use was largely superseded by that of carbolic acid and other proprietary products. Structure and properties Unlike other metal dichlorides, zinc dichloride adopts several crystalline forms (polymorphs). Four polymorphs are known: α, β, γ, and δ.
Each features zinc centers surrounded in a tetrahedral manner by four chloride ligands. The orthorhombic form (δ) rapidly changes to another polymorph upon exposure to the atmosphere. A possible explanation is that ions originating from the absorbed water facilitate the rearrangement. Rapid cooling of molten zinc chloride gives a glass. Molten zinc chloride has a high viscosity at its melting point and a comparatively low electrical conductivity, which increases markedly with temperature. As indicated by a Raman scattering study, the viscosity is explained by the presence of polymers. A neutron scattering study indicated the presence of tetrahedral centers, which requires aggregation of monomers as well. Hydrates A variety of hydrates of zinc chloride are known, with n = 1, 1.33, 2.5, 3, and 4.5. The 1.33-hydrate, previously thought to be the hemitrihydrate, consists of trans-Zn(H2O)4Cl2 centers with the chlorine atoms connected to repeating ZnCl4 chains. The hemipentahydrate, structurally formulated [Zn(H2O)5][ZnCl4], consists of Zn(H2O)5Cl octahedra in which the chlorine atom is part of a [ZnCl4]2- tetrahedron. The trihydrate consists of distinct hexaaquazinc(II) cations and tetrachlorozincate anions, formulated [Zn(H2O)6][ZnCl4]. Finally, the heminonahydrate, structurally formulated [Zn(H2O)6][ZnCl4]·3H2O, also consists of distinct hexaaquazinc(II) cations and tetrachlorozincate anions like the trihydrate, but has three extra water molecules. These hydrates can be produced by evaporation of aqueous solutions of zinc chloride at different temperatures. Preparation and purification Historically, zinc chlorides are prepared from the reaction of hydrochloric acid with zinc metal or zinc oxide. Aqueous acids cannot be used to produce anhydrous zinc chloride.
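The hydrate compositions ZnCl2·nH2O listed above imply molar masses that can be tabulated directly (a sketch using standard atomic weights; the values are rounded approximations):

```python
# Approximate standard atomic/molecular weights in g/mol.
M_ZN, M_CL, M_H2O = 65.38, 35.45, 18.015

def zncl2_hydrate_mass(n: float) -> float:
    """Molar mass of the hydrate ZnCl2·nH2O."""
    return M_ZN + 2 * M_CL + n * M_H2O

for n in (0, 1, 1.33, 2.5, 3, 4.5):
    print(f"ZnCl2·{n}H2O: {zncl2_hydrate_mass(n):.2f} g/mol")
```

The anhydrous compound comes out at about 136.3 g/mol, and each additional water of hydration adds roughly 18 g/mol.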
According to an early procedure, a suspension of powdered zinc in diethyl ether is treated with hydrogen chloride, followed by drying. The overall method remains useful in industry, but without the solvent. Aqueous solutions may be readily prepared similarly by treating Zn metal, zinc carbonate, zinc oxide, or zinc sulfide with hydrochloric acid. Hydrates can be produced by evaporation of an aqueous solution of zinc chloride; the temperature of the evaporation determines the hydrate formed. For example, evaporation at room temperature produces the 1.33-hydrate, and lower evaporation temperatures produce higher hydrates. Commercial samples of zinc chloride typically contain water and products from hydrolysis as impurities. Laboratory samples may be purified by recrystallization from hot dioxane. Anhydrous samples can be purified by sublimation in a stream of hydrogen chloride gas, followed by heating the sublimate to 400 °C in a stream of dry nitrogen gas. A simple method relies on treating the zinc chloride with thionyl chloride. Reactions Chloride complexes A number of salts containing the tetrachlorozincate anion are known. "Caulton's reagent", which is used in organic chemistry, is an example of a salt containing a chlorozincate anion. The nominal caesium pentachlorozincate contains tetrahedral tetrachlorozincate anions and separate chloride anions, so the compound is caesium tetrachlorozincate chloride rather than a true pentachlorozincate. No compounds containing the hexachlorozincate ion have been characterized. A further chlorozincate crystallizes from a solution of zinc chloride in hydrochloric acid; it contains a polymeric anion balanced by monohydrated hydronium ions. Adducts The adduct with THF (tetrahydrofuran) illustrates the tendency of zinc chloride to form 1:2 adducts with weak Lewis bases. Being soluble in ethers and lacking acidic protons, this complex is used in the synthesis of organozinc compounds. A related 1:2 complex is zinc dichloride di(hydroxylamine). Known as Crismer's salt, this complex releases hydroxylamine upon heating.
The distinctive ability of aqueous solutions of zinc chloride to dissolve cellulose is attributed to the formation of zinc-cellulose complexes, illustrating the stability of its adducts. Cellulose also dissolves in molten zinc chloride hydrate. Overall, this behavior is consistent with Zn2+ as a hard Lewis acid. When solutions of zinc chloride are treated with ammonia, diverse ammine complexes are produced; in addition to the tetrahedral 1:2 complex, higher ammine complexes have also been isolated. The ammine species present in aqueous solution have been investigated, and the dominant complex depends on the ammonia-to-zinc ratio. Aqueous solutions of zinc chloride Zinc chloride dissolves readily in water to give aquo and chloro complexes and some free chloride. Aqueous solutions of zinc chloride are acidic: a 6 M aqueous solution has a pH of 1. The acidity of aqueous solutions relative to solutions of other Zn2+ salts (say, the sulfate) is due to the formation of tetrahedral chloro aqua complexes such as [ZnCl3(H2O)]−. Most metal dichlorides instead form octahedral complexes, with stronger O-H bonds. The combination of hydrochloric acid and zinc chloride gives a reagent known as "Lucas reagent". Such reagents were once used as a test for primary alcohols. Similar reactions are the basis of industrial routes from methanol and ethanol respectively to methyl chloride and ethyl chloride. In alkali solution, zinc chloride converts to various zinc hydroxychlorides, including the insoluble mineral simonkolleite. When zinc chloride hydrates are heated, hydrogen chloride evolves and hydroxychlorides result. In aqueous solution, zinc chloride, as well as the other zinc halides (bromide, iodide), behaves interchangeably for the preparation of other zinc compounds. These salts give precipitates of zinc carbonate when treated with aqueous carbonate sources. Ninhydrin reacts with amino acids and amines to form a colored compound, "Ruhemann's purple" (RP).
Spraying with a zinc chloride solution, which is colorless, forms a 1:1 zinc complex with RP that is more readily detected, as it fluoresces more intensely than RP itself. Redox Anhydrous zinc chloride melts and even boils without decomposition up to 900 °C. When zinc metal is dissolved in molten zinc chloride at 500–700 °C, a yellow diamagnetic solution is formed containing a dizinc cation with zinc in the oxidation state +1. The nature of this dizinc dication has been confirmed by Raman spectroscopy. Although such a dication is unusual for zinc, mercury, a heavy congener of zinc, forms a wide variety of analogous dimercury salts. In the presence of oxygen, zinc chloride converts to zinc oxide above 400 °C. Again, this observation indicates that the Zn2+ center itself is not oxidized. Zinc hydroxychloride Concentrated aqueous zinc chloride dissolves zinc oxide to form zinc hydroxychloride, which is obtained as colorless crystals. The same material forms when hydrated zinc chloride is heated. The ability of zinc chloride to dissolve metal oxides (MO) is relevant to its utility as a flux for soldering: it dissolves passivating oxides, exposing the clean metal surface. Organic syntheses with zinc chloride Zinc chloride is an occasional laboratory reagent, often used as a Lewis acid. A dramatic example is the conversion of methanol into hexamethylbenzene using zinc chloride as the solvent and catalyst. This kind of reactivity has been investigated for the valorization of C1 precursors. Examples of zinc chloride acting as a Lewis acid include the Fischer indole synthesis. Related Lewis-acid behavior is illustrated by a traditional preparation of the dye fluorescein from phthalic anhydride and resorcinol, which involves a Friedel-Crafts acylation. This transformation has in fact been accomplished using even hydrated samples. Many examples describe the use of zinc chloride in Friedel-Crafts acylation reactions.
Zinc chloride also activates benzylic and allylic halides towards substitution by weak nucleophiles such as alkenes. In similar fashion, zinc chloride promotes selective reduction of tertiary, allylic or benzylic halides to the corresponding hydrocarbons. Zinc enolates, prepared from alkali metal enolates and zinc chloride, provide control of stereochemistry in aldol condensation reactions. This control is attributed to chelation at the zinc. In one reported example, the threo product was favored over the erythro by a factor of 5:1 when zinc chloride was used. Organozinc precursor Being inexpensive and anhydrous, ZnCl2 is widely used for the synthesis of many organozinc reagents, such as those used in the palladium-catalyzed Negishi coupling with aryl halides or vinyl halides. The prominence of this reaction was highlighted by the award of the 2010 Nobel Prize in Chemistry to Ei-ichi Negishi. Rieke zinc, a highly reactive form of zinc metal, is generated by reduction of zinc dichloride with lithium. Rieke Zn is useful for the preparation of polythiophenes and for the Reformatsky reaction. Uses Industrial organic chemistry Zinc chloride is used as a catalyst or reagent in diverse reactions conducted on an industrial scale. Benzaldehyde, 20,000 tons of which is produced annually in Western countries, is produced from inexpensive toluene by exploiting the catalytic properties of zinc dichloride. The process begins with the chlorination of toluene to give benzal chloride; in the presence of a small amount of anhydrous zinc chloride, the benzal chloride is then treated continuously with water. Similarly, zinc chloride is employed in the hydrolysis of benzotrichloride, the main route to benzoyl chloride. It serves as a catalyst for the production of methylene-bis(dithiocarbamate).
As a metallurgical flux The use of zinc chloride as a flux, sometimes in a mixture with ammonium chloride (see also Zinc ammonium chloride), involves the production of HCl and its subsequent reaction with surface oxides. Zinc chloride forms two salts with ammonium chloride, both of which decompose on heating, liberating HCl, just as zinc chloride hydrate does. The action of zinc chloride/ammonium chloride fluxes, for example in the hot-dip galvanizing process, produces HCl gas and ammonia fumes. Other uses Reflecting its affinity for paper and textiles, zinc chloride is used as a fireproofing agent and in the process of making vulcanized fibre, which is made by soaking paper in concentrated zinc chloride. Zinc chloride is also used as a deodorizing agent and to make zinc soaps. Safety and health Zinc and chloride are essential for life. Zn2+ is a component of several enzymes, e.g., carboxypeptidase and carbonic anhydrase. Thus, aqueous solutions of zinc chloride are rarely problematic as an acute poison. Anhydrous zinc chloride is, however, an aggressive Lewis acid that can burn skin and other tissues. Ingestion of zinc chloride, often from soldering flux, requires endoscopic monitoring. Another source of zinc chloride exposure is the zinc chloride smoke mixture ("HC") used in smoke grenades: the mixture of zinc oxide, hexachloroethane, and aluminium powder releases a smoke of zinc chloride, carbon, and aluminium oxide that forms an effective smoke screen. Such smoke screens can lead to fatalities.
Physical sciences
Halide salts
Chemistry
3454087
https://en.wikipedia.org/wiki/Respiratory%20tract%20infection
Respiratory tract infection
Respiratory tract infections (RTIs) are infectious diseases involving the lower or upper respiratory tract. An infection of this type is usually further classified as an upper respiratory tract infection (URI or URTI) or a lower respiratory tract infection (LRI or LRTI). Lower respiratory infections, such as pneumonia, tend to be far more severe than upper respiratory infections, such as the common cold. Types Upper respiratory tract infection The upper respiratory tract is considered the airway above the glottis or vocal cords; sometimes, it is taken as the tract above the cricoid cartilage. This part of the tract includes the nose, sinuses, pharynx, and larynx. Typical infections of the upper respiratory tract include tonsillitis, pharyngitis, laryngitis, sinusitis, otitis media, certain influenza types, and the common cold. Symptoms of URIs can include cough, sore throat, runny nose, nasal congestion, headache, low-grade fever, facial pressure, and sneezing. Lower respiratory tract infection The lower respiratory tract consists of the trachea (windpipe), bronchial tubes, bronchioles, and the lungs. Lower respiratory tract infections (LRIs) are generally more severe than upper respiratory infections. LRIs are the leading cause of death among all infectious diseases. The two most common LRIs are bronchitis and pneumonia. Influenza affects both the upper and lower respiratory tracts, but more dangerous strains such as the highly pernicious H5N1 tend to bind to receptors deep in the lungs. Diagnosis Pulmonary function testing (PFT) allows evaluation and assessment of the airways and lung function, providing specific benchmarks for diagnosing an array of respiratory tract infections. Methods such as gas dilution techniques and plethysmography help determine the functional residual capacity and total lung capacity. The decision to perform a set of advanced pulmonary function tests is based on abnormally high values in previous test results.
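One of the gas dilution techniques mentioned above is closed-circuit helium dilution, in which helium conservation gives C1·V1 = C2·(V1 + FRC). The relation below is the standard helium-dilution formula, not one stated in the text, and the numbers are purely hypothetical:

```python
def frc_helium_dilution(v_spirometer_l: float, c_initial: float, c_final: float) -> float:
    """Functional residual capacity (litres) from closed-circuit helium dilution.

    Helium is conserved: C1 * V1 = C2 * (V1 + FRC)
    =>  FRC = V1 * (C1 - C2) / C2
    """
    return v_spirometer_l * (c_initial - c_final) / c_final

# Hypothetical example: a 3 L spirometer circuit whose helium fraction
# falls from 10% to 7% after equilibration with the lungs.
frc = frc_helium_dilution(3.0, 0.10, 0.07)
print(f"FRC ~ {frc:.2f} L")
```

The same conservation argument underlies other dilution gases; only the measured concentrations change.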
A 2014 systematic review of clinical trials does not support routine rapid viral testing to decrease antibiotic use for children in emergency departments. It is unclear if rapid viral testing in the emergency department for children with acute febrile respiratory infections reduces the rates of antibiotic use, blood testing, or urine testing. The relative risk reduction of chest x-ray utilization in children screened with rapid viral testing is 77% compared with controls. In 2013, researchers developed a breath tester that can promptly diagnose lung infections. Treatment Bacteria are unicellular organisms that can thrive in various environments, including the human body. Antibiotics are medicines designed to treat bacterial infections that require a more intensive treatment course; antibiotic use is not recommended for common bacterial infections, as the immune system will resolve such infections on its own. Antibiotics do not effectively treat viral illnesses such as sore throats, influenza, bronchitis, sinusitis, and common respiratory tract infections, because antibiotics were developed to target features of bacteria that are not present in viruses, making them ineffective as antiviral agents. The CDC has reported that antibiotic prescribing is high; 47 million prescriptions in the United States in 2018 were written for infections that do not require antibiotic treatment. It is recommended to avoid antibiotic use unless bacterial infections are severe, transmissible, or carry a high risk of further complications if left untreated. Unnecessary use of antibiotics can promote antibiotic-resistant infections, disrupt the digestive system, and trigger allergic reactions and other serious side effects. A study published in JAMA found that narrow-spectrum antibiotics, such as amoxicillin, are just as effective as broad-spectrum alternatives for treating acute respiratory tract infections in children, but have a lower risk of side effects.
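The 77% figure quoted above is a relative risk reduction, computed as (control event rate − treated event rate) / control event rate. A minimal sketch with hypothetical counts (the actual trial numbers are not given in the text):

```python
def relative_risk_reduction(events_treated: int, n_treated: int,
                            events_control: int, n_control: int) -> float:
    """RRR = (control rate - treated rate) / control rate."""
    rate_t = events_treated / n_treated
    rate_c = events_control / n_control
    return (rate_c - rate_t) / rate_c

# Hypothetical example: 23 of 100 control children received a chest x-ray
# versus 5 of 94 children screened with rapid viral testing.
rrr = relative_risk_reduction(5, 94, 23, 100)
print(f"{rrr:.0%}")  # prints "77%"
```

Note that a relative reduction says nothing about the absolute rates involved; the same 77% could arise from very common or very rare baseline x-ray use.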
Prevention Despite the superior filtration capability of N95 filtering facepiece respirators measured in vitro, insufficient clinical evidence has been published to determine whether standard surgical masks and N95 filtering facepiece respirators are equivalent in preventing respiratory infections in healthcare workers. Adults in intensive care units (ICUs) have a higher risk of acquiring an RTI. For adult patients receiving mechanical ventilation for at least 48 hours, a combination of topical and systemic antibiotics taken prophylactically can prevent infection and improve overall mortality, and topical antibiotic prophylaxis probably reduces respiratory infections but not mortality. However, studies of the combined treatment cannot rule out that the systemic component accounts for the observed reduction in mortality. There is insufficient evidence to recommend that antibiotics be used to prevent complications from an RTI of unknown cause in children under the age of five. High-quality clinical research in the form of randomized controlled trials has assessed the effectiveness of vitamin D, while another review of poorer-quality RCTs addressed the effectiveness of immunostimulants for preventing respiratory tract infections. Despite some uncertainty due to small study sizes, there is some evidence that exercise may reduce the severity of symptoms, though it had no impact on the number of episodes or the number of symptom days per episode. Viruses that cause RTIs are more transmissible at very high or low relative humidity; the ideal humidity for indoor spaces is between 40 and 60%, so keeping relative humidity in this range can help lessen the risk of aerosol transmission. Epidemiology Respiratory infections often have strong seasonal patterns, with temperate climates more affected during the winter. Several factors explain winter peaks in respiratory infections, including environmental conditions and changes in human behaviors.
Viruses that cause respiratory infections are affected by environmental conditions like relative humidity and temperature. Temperate climate winters have lower relative humidity, which is known to increase the transmission of influenza. Of the viruses that cause respiratory infections in humans, most have seasonal variation in prevalence. Influenza, Human orthopneumovirus (RSV), and human coronaviruses are more prevalent in the winter. Human bocavirus and Human metapneumovirus occur year-round, rhinoviruses (which cause the common cold) occur mostly in the spring and fall, and human parainfluenza viruses have variable peaks depending on the specific strain. Enteroviruses, with the exception of rhinoviruses, tend to peak in the summer.
Biology and health sciences
Infectious diseases by site
Health
3454256
https://en.wikipedia.org/wiki/Euparkeria
Euparkeria
Euparkeria (meaning "Parker's good animal", named in honor of W. K. Parker) is an extinct genus of archosauriform reptile from the Triassic of South Africa. Euparkeria is close to the ancestry of Archosauria, the reptile group that includes crocodilians, pterosaurs, and dinosaurs (including birds). Fossils of Euparkeria, including nearly complete skeletons, have been recovered from the Cynognathus Assemblage Zone (CAZ, also known as the Burgersdorp Formation), which hosts the oldest advanced archosauriforms in the fossil-rich Karoo Basin. Tentative dating schemes place the CAZ around the latest Early Triassic (late Olenekian stage) or earliest Middle Triassic (early Anisian stage), approximately 247 million years old. Euparkeria is among the most heavily described and discussed non-archosaur archosauriforms. It was a small carnivorous reptile with a boxy skull, slender limbs, and two rows of tiny teardrop-shaped osteoderms (bony scutes) along its backbone. Euparkeria is a eucrocopod, meaning that it was among the reptiles most closely related to true crown group archosaurs, according to specializations of the ankle and hindlimbs. Its hind limbs were slightly longer than its forelimbs, which has been taken as evidence that it may have been able to rear up on its hind legs as a facultative biped. This conception supplemented older studies which interpreted Euparkeria as a particularly close relative of fully bipedal early dinosaurs. Its normal movement was probably more quadrupedal, with limbs positioned in a semi-erect posture, analogous (but not identical) to a crocodilian high walk. Biomechanical analyses suggest that Euparkeria was incapable of even short periods of bipedal activity. Palaeobiology Locomotion The hind limbs of Euparkeria are somewhat longer than its forelimbs, which has led some researchers to conclude that it could have occasionally walked on its hind legs as a facultative biped.
Other possible adaptations to bipedalism in Euparkeria include rows of osteoderms that could stabilize the back and a long tail that could act as a counterbalance to the rest of the body. Paleontologist Rosalie Ewer suggested in 1965 that Euparkeria spent most of its time on four legs but moved on its hind legs whilst running. However, adaptations to bipedalism in Euparkeria are not as obvious as they are in some other Triassic archosauriforms such as dinosaurs and poposauroids; the forelimbs are still relatively long and the head is so large that the tail might not have effectively counterbalanced its weight. The position of muscle anchorage points on the humerus and thigh bones suggests that Euparkeria could not have held its legs in a fully erect posture beneath its body, but would have held them slightly out to the side as in modern crocodilians and most other quadrupedal Triassic archosauriforms. Euparkeria has a large backward-pointing projection on the calcaneum (an ankle bone) that would have given strong leverage to the ankle during locomotion. This calcaneal projection might have enabled Euparkeria to move with all four limbs in a semi-erect "high walk" similar to the way in which living crocodilians sometimes move about on land. A 2020 study of range of motion in the hindlimbs of Euparkeria found conflicting evidence for its posture. The structure of the femur (thigh bone) and hip socket suggests that the legs were capable of a very wide range of motion, ranging from a nearly vertical stance to a thigh which projects forwards, backwards, or outwards at a nearly horizontal angle. Rotation of the thigh was more limited, a factor that argues against a sprawling gait reliant on broad outward leg sweeps.
Although the hip socket argues in favor of an upright 'pillar-erect' hindlimb stance, the structure of the tibia (inner shin bone) and ankle show that the lower legs and feet would have splayed outwards during normal usage, supporting a semi-erect rather than fully erect stance. The hindlimbs of Euparkeria have been used to argue that the evolution of a fully erect gait in true archosaurs was a stepwise process which first developed in bones closer to the hip. A 2023 paper analyzed the possibility of facultative bipedalism and came to the conclusion that Euparkeria was quadrupedal at all times. Models of weight distribution found that the center of mass for Euparkeria was far in front of the hips, meaning that a body held horizontally during a bipedal stance would have had to fight against a very large forward pitching moment. This pitching moment far exceeds that of modern long-limbed lizards capable of facultative bipedalism. The pitching moment would only stabilize if the body was held up at an implausibly high angle (>60 degrees), regardless of how the tail was held. In addition, models of muscle activation indicate that the ankle plantarflexor group (the muscles which bend the foot down to maintain stability) would have been overexerted to the point of failure if a bipedal posture was attempted by the animal. A recent comparative study of bone cross-sectional geometry also inferred fully quadrupedal locomotion in Euparkeria. Nocturnality Some specimens of Euparkeria preserve bony rings in the eye sockets called sclerotic rings, which in life would have supported the eye. The sclerotic ring of Euparkeria is most similar to those of modern birds and reptiles that are nocturnal, suggesting that Euparkeria had a lifestyle adapted to low-light conditions. During the Early Triassic the Karoo Basin was at about 65 degrees south latitude, meaning that Euparkeria would have experienced long periods of darkness in winter months.
Classification The family Euparkeriidae is named after Euparkeria. The family name was first proposed by German paleontologist Friedrich von Huene in 1920; Huene classified euparkeriids as members of Pseudosuchia, a traditional name for crocodilian relatives from the Triassic (Pseudosuchia means "false crocodiles"). Early phylogenetic analyses created by Jacques Gauthier in the 1980s provided an alternative hypothesis: that Euparkeria was closer to dinosaurs (including birds) than to crocodilians. Many genera have been assigned to Euparkeriidae in the past, but only two other valid genera are currently believed to be part of the family, apart from Euparkeria itself: Halazhaisuchus and Osmolskina. More recent analyses, starting with Benton & Clark (1988), place Euparkeria as a member of Archosauriformes in a position outside both the crocodilian-line (Pseudosuchia) and bird-line (Avemetatarsalia) archosaurs. Although the ancestor of archosaurs likely shared several similarities with Euparkeria, archosaurs are probably not direct descendants of the genus. The precise placement of Euparkeria and other euparkeriids within Archosauriformes is controversial. Most analyses agree that Euparkeria was a closer relative of archosaurs than the proterosuchids or erythrosuchids were. The one exception is the study of Dilkes & Sues (2009), who found Euparkeria to be less crownward than Erythrosuchus; these results have not been widely accepted. There still remains some ambiguity over whether Euparkeriidae was truly the sister group of the archosaurs. Many phylogenetic analyses place the long-snouted proterochampsians as more closely related to archosaurs than euparkeriids were. Such studies include Sereno (1991), Parrish (1993), Juul (1994), various analyses by Michael J. Benton, and Ezcurra (2016). On the other hand, several other notable studies consider Euparkeria to be closer to archosaurs than proterochampsians.
Sterling Nesbitt's influential 2011 monograph on archosaurian relationships found a similar result, although he also placed phytosaurs as the sister group to Archosauria, rather than Euparkeria. Roland Sookias, a paleontologist responsible for many studies on euparkeriids in the 2010s, also considers them to be closer archosaur relatives than the proterochampsians. Like Nesbitt (2011), he found phytosaurs to be the closest relatives of Archosauria, followed by the Euparkeria-like reptile Dorosuchus, and then by the euparkeriids.
Biology and health sciences
Other prehistoric archosaurs
Animals
3459458
https://en.wikipedia.org/wiki/Magellanic%20Stream
Magellanic Stream
The Magellanic Stream is a stream of high-velocity clouds of gas extending from the Large and Small Magellanic Clouds over 100° through the Galactic south pole of the Milky Way. The stream contains a gaseous feature dubbed the leading arm. The stream was sighted in 1965, and its relation to the Magellanic Clouds was established in 1974. Discovery and early observations In 1965, anomalous-velocity gas clouds were found in the region of the Magellanic Clouds. The gas stretches for at least 180 degrees across the sky, which corresponds to 180 kpc (600,000 ly) at an approximate distance of 55 kpc (180,000 ly). The gas is very collimated and polar with respect to the Milky Way. The velocity range is huge (from −400 to 400 km s−1 relative to the Local Standard of Rest) and the velocity patterns do not follow the rest of the Milky Way. Hence, it was determined to be a classic high-velocity cloud. However, the gas was not mapped, and the connection to the two Magellanic Clouds was not made. The Magellanic Stream as such was discovered as a neutral hydrogen (HI) gas feature near the Magellanic Clouds by Wannier & Wrixon in 1972. Its connection to the Magellanic Clouds was made by Mathewson et al. in 1974. Owing to the closeness of the Magellanic Clouds and the ability to resolve individual stars and measure their parallaxes and proper motions, subsequent observations gave the full 6-dimensional phase-space information of both clouds (with very large relative errors for the transverse velocities). This enabled the calculation of the likely past orbit of the Large and the Small Magellanic Cloud in relation to the Milky Way. The calculation required major assumptions, for example, about the shapes and masses of the three galaxies and the nature of dynamical friction between the moving objects. Observations of individual stars revealed details of the star formation history. Models Models describing the formation of the Magellanic Stream have been produced since 1980.
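The angular-to-physical conversion quoted above follows from simple geometry: an arc subtending an angle θ at distance d has physical length d·θ (θ in radians). A quick check of the figures from the text (the function itself is generic arithmetic, not a calculation from the cited papers):

```python
import math

def arc_length_kpc(distance_kpc: float, angle_deg: float) -> float:
    """Physical length of an arc subtending angle_deg at distance_kpc."""
    return distance_kpc * math.radians(angle_deg)

# Figures from the text: the gas stretches ~180 degrees at ~55 kpc.
length = arc_length_kpc(55.0, 180.0)
print(round(length))  # prints 173, consistent with the quoted ~180 kpc
```

Since the stream is not at a uniform distance from us, such a figure is only an order-of-magnitude estimate in any case.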
Limited by the computing power of the time, the initial models were very simple, non-self-gravitating, and used few particles. Most models predicted a feature leading the Magellanic Clouds. These early models were 'tidal' models: just as tides on Earth are induced by the gravity of the 'leading' Moon, the models predicted two directions opposite each other in which particles are preferentially pulled. However, the predicted features were not observed. This led to a few models that did not require a leading element but which had problems of their own. In 1998, a study analysing the full-sky survey made by the HIPASS team at Parkes Observatory generated important new observational data. Putman et al. discovered that a mass of high-velocity clouds leading the Magellanic Clouds was actually fully connected to the Magellanic Clouds, finally establishing the existence of the leading arm feature. Furthermore, Lu et al. (1998) and Gibson et al. (2000) established the chemical similarity between the streams and the Magellanic Clouds. Newer, increasingly sophisticated models all tested the leading arm feature hypothesis. These models make heavy use of gravity effects through tidal fields. Some models also rely on ram pressure stripping as a shaping mechanism. The most recent models increasingly include drag from the halo of the Milky Way as well as gas dynamics, star formation, and chemical evolution. It is thought that the tidal forces mostly affect the Small Magellanic Cloud, since it has lower mass and is less gravitationally bound. In contrast, ram pressure stripping mostly affects the Large Magellanic Cloud, because it has a larger reservoir of gas. Recent observations At the January 2010 meeting of the American Astronomical Society, David Nidever of the University of Virginia announced new results based on data derived from the National Science Foundation's Robert C. Byrd Green Bank Telescope and earlier radio astronomy observations.
The Magellanic Stream is much longer than earlier thought, and older too. This means that the Magellanic Stream likely formed when the two Magellanic Clouds passed close to each other around 2.5 billion years ago. In 2018, research confirmed that the chemical composition of the gas in the Magellanic Stream's leading arm more closely resembles the composition of the Small Magellanic Cloud than that of the Large Magellanic Cloud, determined by looking at light from background quasars shining through the Stream and analysing which parts of the spectrum are absorbed by the gas and which pass through it. This analysis confirmed that the gas most likely originated from the Small Magellanic Cloud, thereby indicating that the Large Magellanic Cloud is 'winning' the gravitational tug-of-war between the two Clouds over the Magellanic Stream. In 2019, astronomers discovered the young star cluster Price-Whelan 1 using Gaia data. The star cluster has a low metallicity and belongs to the leading arm of the Magellanic Clouds. The discovery of this star cluster suggests that the leading arm of the Magellanic Clouds is 90,000 light-years away from the Milky Way, only half as far from the Milky Way as previously thought. The star cluster is relatively young, which is a sign of recent star formation in the leading arm.
Physical sciences
Other notable objects
Astronomy
22112445
https://en.wikipedia.org/wiki/Pimple
Pimple
A pimple or zit is a kind of comedo that results from excess sebum and dead skin cells getting trapped in the pores of the skin. In its aggravated state, it may evolve into a pustule or papule. Pimples can be treated by acne medications, antibiotics, and anti-inflammatories prescribed by a physician, or by various over-the-counter remedies purchased at a pharmacy. Causes Sebaceous glands inside the pore of the skin produce sebum. When the outer layers of skin shed (a natural and continuous process, normally), dead skin and oily sebum left behind may bond together and form a blockage of the sebaceous gland at the base of the skin. This is most common when the skin becomes thicker at puberty. The sebaceous gland continues to produce sebum, which builds up behind the blockage, allowing bacteria to grow in the area, including the species Staphylococcus aureus and Cutibacterium acnes, which cause inflammation and infection. Other causes of pimples include family history, stress, fluctuations in hormone levels, hair and skincare products, medication side effects, and undiagnosed or underlying medical conditions. Pimples can be part of the presentation of rosacea. The American Academy of Dermatology recommends that adults with acne use products labeled as "non-comedogenic", "non-acnegenic", "oil-free" or "won't clog pores", as they are "least likely" to cause skin irritation or acne. Treatment Over-the-counter medications Common over-the-counter medications for pimples are benzoyl peroxide, salicylic acid, adapalene, and antibacterial agents such as triclosan. These topical medications, which can be found in many creams and gels used to treat acne (acne vulgaris), induce the skin to slough off more easily, helping to remove bacteria faster. Before application, the face should be washed with warm water or a topical cleanser and then dried.
A regimen of keeping the affected skin area clean, plus the regular application of these topical medications, is usually enough to keep acne under control, if not at bay altogether. The most common product is a topical treatment of benzoyl peroxide, which has minimal risk apart from minor skin irritation that may present similarly to a mild allergy. Recently, nicotinamide (vitamin B3), applied topically, has been shown to be more effective in the treatment of pimples than antibiotics such as clindamycin. Nicotinamide is not an antibiotic and has none of the side effects typically associated with antibiotics. It has the added advantage of reducing the skin hyperpigmentation associated with pimple scars. Prescription medication Severe acne usually indicates the necessity of prescription medication to treat the pimples. Prescription medications used to treat acne and pimples include isotretinoin, which is a retinoid, anti-seborrheic medications, anti-androgen medications, hormonal treatments, alpha hydroxy acid, azelaic acid, and keratolytic soaps. Historically, antibiotics such as tetracyclines and erythromycin were prescribed. While they were more effective than topical applications of benzoyl peroxide, the bacteria eventually grew resistant to the antibiotics and the treatments became less and less effective. Also, antibiotics had more side effects than topical applications, such as stomach cramps and severe discoloration of teeth. Common antibiotics prescribed as of 2001 by dermatologists included doxycycline and minocycline. Isotretinoin is used primarily for severe cystic acne and acne that has not responded to other treatments. Many dermatologists also support its use for treatment of lesser degrees of acne that prove resistant to other treatments, or that produce physical or psychological scarring. It is teratogenic and requires strict prevention of pregnancy during its use.
Expression Expression, the manual bursting of pimples which have evolved into whiteheads with one's fingers (colloquially, "popping"), can allow bacteria to be introduced into the open wound this creates. This can result in infection and permanent scarring. Thus expression is generally recommended against by dermatologists and estheticians in favour of allowing pimples to run through their natural lifespans. Some dermatologists offer incision and drainage services to sterilely drain the pimple.
Biology and health sciences
Symptoms and signs
Health
22112814
https://en.wikipedia.org/wiki/Henna
Henna
Henna is a reddish dye prepared from the dried and powdered leaves of the henna tree. It has been used since at least the ancient Egyptian period as a hair and body dye, notably in the temporary body art of mehndi (or "henna tattoo") resulting from the staining of the skin with dyes from the henna plant. After henna stains reach their peak colour, they hold for a few days, then gradually wear off by way of exfoliation, typically within one to three weeks. Henna has been used in ancient Egypt, the ancient Near East and then the Indian subcontinent to dye skin, hair and fingernails, as well as fabrics including silk, wool, and leather. Historically, henna was used in West Asia including the Arabian Peninsula and in Carthage, other parts of North Africa, West Africa, Central Africa, the Horn of Africa and the Indian subcontinent. The name henna is also used for other skin and hair dyes, such as black henna and neutral henna, neither of which is derived from the henna plant. Etymology The word henna comes from the Arabic (ALA-LC: ḥinnāʾ). History The origins of the initial human uses of henna are uncertain; however, there are records that the plant was marketed in Babylonia, and it was used in Ancient Egypt on some mummies to dye their hair, skin, nails, or funeral wrappings. It arrived in North Africa during the Punic civilization through the Phoenician diaspora, where it was used as a beautification tool. Pliny the Elder wrote about its use in the Roman Empire as a medicine, a perfume, and a dye. Preparation and application Body art Whole, unbroken henna leaves will not stain the skin because the active chemical agent, lawsone, is bound within the plant. However, dried henna leaves will stain the skin if they are mashed into a paste. The lawsone will gradually migrate from the henna paste into the outer layer of the skin and bind to the proteins in it, creating a stain.
Since it is difficult to form intricate patterns from coarsely crushed leaves, henna is commonly traded as a powder made by drying, milling and sifting the leaves. The dry powder is mixed with one of a number of liquids, including water, lemon juice, strong tea, and other ingredients, depending on the tradition. Many artists use sugar or molasses in the paste to improve its consistency and keep it stuck to the skin better. The henna mix must rest between one and 48 hours before use in order to release the lawsone from the leaf matter. The timing depends on the crop of henna being used. Essential oils with high levels of monoterpene alcohols, such as tea tree, cajuput, or lavender, will improve skin stain characteristics. Other essential oils, such as eucalyptus and clove, are not used because they are too irritating to the skin. The paste can be applied with many traditional and innovative tools, starting with a basic stick or twig. In Morocco, a syringe is common. A plastic cone similar to those used to pipe icing onto cakes is used in India. A light stain may be achieved within minutes, but the longer the paste is left on the skin, the darker and longer-lasting the stain will be, so it needs to be left on as long as possible. To prevent it from drying or falling off the skin, the paste is often sealed down by dabbing a sugar/lemon mix over the dried paste or by adding some form of sugar to the paste. After some time the dry paste is simply brushed or scraped away. The paste should be kept on the skin for a minimum of four to six hours, but longer times, even wearing the paste overnight, are common practice. Removal should not be done with water, as water interferes with the oxidation process of stain development. Cooking oil may be used to loosen the dry paste. Henna stains are orange when the paste is first removed, but darken over the following three days to a deep reddish brown due to oxidation.
Soles and palms have the thickest layer of skin and so take up the most lawsone, and take it to the greatest depth, so that hands and feet will have the darkest and most long-lasting stains. Some also believe that steaming or warming the henna pattern will darken the stain, either while the paste is still on the skin or after the paste has been removed. It is debatable whether this adds to the color of the result. After the stain reaches its peak color, it holds for a few days, then gradually wears off by way of exfoliation, typically within one to three weeks. Natural henna pastes containing only henna powder, a liquid (water, lemon juice, etc.) and an essential oil (lavender, cajuput, tea tree, etc.) are not "shelf stable," meaning they expire quickly and cannot be left out on a shelf for over one week without losing their ability to stain the skin. The leaf of the henna plant contains a finite amount of lawsone. As a result, once the powder has been mixed into a paste, this leaching of the dye molecule into the mixture will only occur for an average of two to six days. If a paste will not be used within the first few days after mixing, it can be frozen for up to four months to halt the dye release, for thawing and use at a later time. Commercially packaged pastes that remain able to stain the skin longer than seven days without refrigeration or freezing contain other chemicals besides henna that may be dangerous to the skin. After the initial seven-day release of lawsone dye, the henna leaf is spent; therefore, any dye created by these commercial cones on the skin after this time period is actually the result of other compounds in the product. These chemicals are often undisclosed on packaging, and they produce a wide range of colors, including what appears to be a natural-looking color stain produced by dyes such as sodium picramate. These products often do not contain any henna.
There are many adulterated henna pastes such as these for sale today that are erroneously marketed as "natural", "pure", or "organic", all containing potentially dangerous undisclosed additives. The length of time a pre-manufactured paste takes to arrive in the hands of consumers is typically longer than henna's seven-day dye release window, so one can reasonably expect that any pre-made, mass-produced cone that is not shipped frozen is a potentially harmful adulterated chemical variety. Henna only stains the skin one color, a variation of reddish brown, at full maturity three days after application. Powdered fresh henna, unlike pre-mixed paste, can be easily shipped all over the world and stored for many years in a well-sealed package. Body art quality henna is often more finely sifted than henna powders for hair.
Hair/eyebrow dye
History
In Ancient Egypt, Ahmose-Henuttamehu (17th Dynasty, 1574 BCE) was probably a daughter of Seqenenre Tao and Ahmose Inhapy. Smith reports that the hair of Henuttamehu's mummy had been dyed a bright red at the sides, probably with henna. In Europe, henna was popular among women connected to the aesthetic movement and the Pre-Raphaelite artists of England in the 1800s. Dante Gabriel Rossetti's wife and muse, Elizabeth Siddal, had naturally bright red hair. Contrary to the cultural tradition in Britain that considered red hair unattractive, the Pre-Raphaelites fetishized red hair, and Siddal was portrayed by Rossetti in many paintings that emphasized her flowing red hair. The other Pre-Raphaelites, including Evelyn De Morgan and Frederick Sandys, academic classicists such as Frederic Leighton, and French painters such as Gaston Bussière and the Impressionists, further popularized the association of henna-dyed hair with young bohemian women. Opera singer Adelina Patti is sometimes credited with popularizing the use of henna in Europe in the late nineteenth century.
Parisian courtesan Cora Pearl was often referred to as La Lune Rousse (the red-haired moon) for dyeing her hair red. In her memoirs, she relates an incident in which she dyed her pet dog's fur to match her own hair. By the 1950s, Lucille Ball had popularized the "henna rinse", as her character, Lucy Ricardo, called it on the television show I Love Lucy. Henna gained popularity among young people in the 1960s through growing interest in Eastern cultures.
Today
The use of henna as a cosmetic hair dye originated in ancient Egypt and the ancient Near East; commercially packaged henna is now popular in many countries in South Asia, Europe, Australia, and North America. The color that results from dyeing with henna depends on the original color of the hair as well as the quality of the henna, and can range from orange to auburn to burgundy. Henna can be mixed with other natural hair dyes, including Cassia obovata for lighter shades of red or even blond, and indigo to achieve brown and black shades. Some products sold as "henna" include these other natural dyes. Others may include metal salts that can interact with other chemical treatments, oils and waxes that may inhibit the dye, or dyes that may be allergens. Apart from its use as a hair dye, henna has recently been used as a temporary substitute for eyebrow pencil or even for eyebrow embroidery.
Traditions of henna as body art
The different words for henna in ancient languages imply that it had more than one point of discovery and origin, as well as different pathways of daily and ceremonial use. Henna has been used to adorn young women's bodies as part of social and holiday celebrations since the late Bronze Age in the Eastern Mediterranean.
The earliest text mentioning henna in the context of marriage and fertility celebrations comes from the Ugaritic legend of Baal and Anath, which contains references to women marking themselves with henna in preparation to meet their husbands, and to Anath adorning herself with henna to celebrate a victory over the enemies of Baal. Wall paintings excavated at Akrotiri (dating prior to the eruption of Thera in 1680 BCE) show women with markings consistent with henna on their nails, palms and soles, in a tableau consistent with the henna bridal description from Ugarit. Many statuettes of young women dating between 1500 and 500 BCE along the Mediterranean coastline have raised hands with markings consistent with henna. This early connection between young, fertile women and henna seems to be the origin of the Night of the Henna, which is now celebrated throughout the Middle East. The Night of the Henna was celebrated by most groups in the areas where henna grew naturally: Jews, Muslims, Sikhs, and Hindus, among others, all celebrated weddings by adorning the bride, and often the groom, with henna. Across the henna-growing region, Purim, Eid, Diwali, Karva Chauth, Passover, Mawlid, and most saints' days were celebrated with some henna. Favourite horses, donkeys, and salukis had their hooves, paws, and tails hennaed. Battle victories, births, circumcisions, birthdays, and Zār, as well as weddings, usually included some henna as part of the celebration. Bridal henna nights remain an important custom in many of these areas, particularly among traditional families. Henna was regarded as having Barakah ("blessings"), and was applied for luck as well as joy and beauty. Brides typically had the most henna, and the most complex patterns, to support their greatest joy and wishes for luck. Some bridal traditions were very complex, such as those in Yemen, where the Jewish bridal henna process took four or five days to complete, with multiple applications and resist work.
Specific henna designs may also vary by region. For example, geometric shapes such as the triangles and diamonds characteristic of traditional Moroccan beading are represented in Moroccan henna designs. The fashion of "bridal mehndi" in North India, Bangladesh, northern Libya and Pakistan is currently growing in complexity and elaboration, with new innovations in glitter, gilding, and fine-line work. Recent technological innovations in grinding, sifting, temperature control, and packaging henna, as well as government encouragement of henna cultivation, have improved its dye content and artistic potential. Though traditional henna artists came from the Nai caste in India, and from barbering castes in other countries (lower social classes), talented contemporary henna artists can command high fees for their work. Women in countries where women are discouraged from working outside the home can find socially acceptable, lucrative work doing henna. Morocco, Mauritania, Yemen, Libya, Somalia, Kenya, Sudan, the United Arab Emirates, India and many other countries have thriving women's henna businesses. These businesses are often open all night for Eid, Diwali and Karva Chauth. Many women may work together during a large wedding, at which hundreds of guests have henna applied. This particular event at a marriage is known as the Mehndi Celebration, Mehndi Night, or Laylat al Henna, and is mainly held for the bride and groom.
Regions
Algeria
In Algeria, brides receive gifts of jewellery and have henna painted on their hands prior to their weddings. The bride and the groom seal their vows in front of their guests by having a circle of henna applied to the palms of their hands. Usually, the grandmothers or mothers of the groom and bride apply this henna, and a small decorative pillow with a satin ribbon is fastened over their hands for a few hours.
Afghanistan
In Afghanistan, henna is also known as "kheena".
Afghan tradition holds that henna brings good luck and happiness. It is used by both men and women on many occasions, such as wedding nights, Nawroz, Eid ul-Fitr, Eid ul-Adha, Shab-e-Barat, and circumcision celebrations.
Armenia
Henna traditions were widespread in both eastern and western Armenia, although the customs differ by region. The henna night, called hina gisher or khennagedje in Armenian, has always been deemed an essential part of Armenian marriage traditions. In Kesaria, henna parties were organized by the bride's female friends and family on the Friday before her wedding. Traditional Armenian henna was usually applied to the fingertips, though young women also received designs on their hands. In Nirzeh, elderly women applied henna to young girls and boys. Furthermore, in the Armenian communities of Sis, both the groom and the bride had henna nights, at which the groom would get his hair cut and his friends would bid for the honor of drawing the cross with henna on the hands of the groom and godfather. The tradition of hinadreq, painting the palms of a bride-to-be, is still practiced in parts of Armenia today as a sign of fertility and happiness in married life.
Bangladesh
In Bangladesh, women apply mehndi to their hands on occasions like weddings and engagements, as well as during Eid al-Fitr, Eid al-Adha and other events. In wedding ceremonies, the mehndi ceremony has traditionally been separated into two events: one organized by the bride's family and one by the groom's family. These two events are solely dedicated to adorning the bride and groom in mehndi, and each is known as a 'Mehndi Shondha', meaning Evening of Mehndi. Some brides opt for alta instead. Sometimes Hindu women also apply mehndi instead of (or along with) alta on their feet during the Bodhu Boron ceremony.
Bulgaria
In an attempt to ritually clean a bride before her wedding day, Bulgarian Romani decorate the bride with a blot of henna.
This blot symbolizes the drop of blood on the couple's sheets after consummating the marriage and breaking the bride's hymen. The tradition also holds that the longer the henna lasts, the longer the husband will love his new bride.
Saudi Arabia
In Saudi Arabia, henna has long been associated with wedding traditions in certain regions, where women adorn their hands and feet with intricate henna designs. The night before the wedding is referred to as the "Henna Night" or "Al-Ghumra". On this evening, female relatives of the bride and groom gather at the bride's family home to decorate the bride's hands and feet with henna patterns. During Eid Al-Fitr and Eid Al-Adha, women also come together to prepare special henna mixtures for designs that vary by age group. On December 3, 2024, the tradition of "Henna, rituals, aesthetic and social practices" was inscribed on UNESCO's Intangible Cultural Heritage List. This recognition was a collaborative effort between 16 Arab countries: Saudi Arabia, Sudan, Egypt, the UAE, Iraq, Jordan, Kuwait, Palestine, Tunisia, Algeria, Bahrain, Morocco, Mauritania, Oman, Yemen, and Qatar. The nomination was led by the United Arab Emirates, with Saudi Arabia represented by the Heritage Commission, in cooperation with the Saudi National Commission for Education, Culture, and Science, and Saudi Arabia's permanent delegation to UNESCO.
Egypt
In Egypt, the bride gathers with her friends the night before her wedding day to celebrate the henna night.
India
In India, Hindu women have motifs and tattoos applied to their hands and feet on occasions like weddings and engagements. In Kerala, women and girls, especially brides, have their hands decorated with mailanchi. In North Indian wedding ceremonies, one evening is solely dedicated to adorning the bride and groom in mehndi, known as the 'Mehndi ki raat'.
Iran
In Iran, the most common use of henna is among the long wedding rituals practiced there.
The henna ritual, which is called ḥanā-bandān, is held for both the bride and the bridegroom during the wedding week. The ceremony is held prior to the wedding and is a traditional farewell ritual for newlyweds before they officially start their life together in a new house. The ceremonies take place in the presence of family members, friends, relatives, neighbors, and guests. In Iran, Māzār () is a job title for a person whose work is milling or grinding henna leaves and selling them in powder form. This is an old trade still alive in some parts of Iran, especially in the archaeologically ancient province of Yazd; the most famous is a family business run by the Mazar Atabaki family, who have resided in the area for hundreds of years. A Māzāri () is a place for milling henna mixed with other herbs.
Israel & Palestine
In the region of Palestine, Israel and the territories of the Palestinian National Authority, some Middle Eastern and North African Jewish communities and families, as well as Druze, Christian and Muslim ones, host henna parties the night or week before a wedding, according to familial customs. The use of henna in this region can be traced as far back as the Song of Songs, in which the author wrote, "My beloved is to me a cluster of henna blossoms in the vineyards of Engedi." Sephardic Jews and Mizrahi Jews, such as Moroccan Jews and Yemenite Jews who have immigrated to Israel, continue these familial customs.
Malaysia
In Malaysia, henna () is used to adorn the bride and groom's hands before the wedding at a berinai ceremony.
Morocco
In Morocco, henna is applied symbolically when individuals go through life cycle events. Moroccans refer to the paste as henna and the designs as naqsh, which means painting or inscription. In Morocco, there are two types of henna artists: non-specialists, who traditionally partake in wedding rituals, and specialists, who partake in tourism and decorative henna.
Nqaasha, the low-end henna specialists, are known for attracting tourists, whom they refer to in artisan slang as gazelles, or international tourists. For Moroccans, a wedding festival can last up to five days, with two days involving henna art. One of these days, referred to as azmomeg (meaning unknown), is the Thursday before the wedding, when guests are invited to apply henna to the bride. The other henna ceremony occurs after the wedding ceremony and is called the Day of Henna. On this day, typically an older woman applies henna to the bride after she dips in the mikveh, to ward off evil spirits who may be jealous of the newlyweds. The groom is also painted with henna after the wedding. During the groom's henna painting, he commonly wears black clothing, a tradition that emerged from the Pact of Umar, under which Jews in Morocco were not permitted to dress in the colorful style of Muslim dress.
Pakistan
In Pakistan, henna is often used at weddings, Eid ul-Fitr, Eid ul-Adha, milad and other events. The henna ceremony is known as the Rasm-e-Heena, which is often one of the most important pre-wedding ceremonies, celebrated by both the bride's and the groom's families. The night of mehndi, the gathering at which the application of the henna is performed, usually falls on the second day of the festivities, one day before the wedding itself. The process commonly involves only the bride and groom but can also include close friends or other family members. The hands of the wedding couple are elegantly painted on this night as a sign of their union. In Sindh, henna is known as "mehndi" and serves both as a decorative art on the hands, arms, feet, and legs, and as a natural dye for gray hair, used by both women and men on ceremonial occasions, at events and at festivals. Mehndi is applied in traditional designs featuring motifs such as mor (peacock), badak (duck), and tikra (dotted); other floral and geometric designs are also used.
Typically, female relatives apply henna to the groom's hands and feet as part of the wedding ritual. Henna is both offered and received as part of religious rituals during the urs and mela (fairs) honoring Sufi saints. On the 7th day of Muharram, the tradition of carrying mehndi to pay respect to Hazrat Imam Qasim (a.s.) is observed each year.
Somalia
In Somalia, henna has been used for centuries. It is cultivated from the leaves of the Ellan tree, which grows wild in the mountainous regions of Somalia. It is used for practical purposes such as dyeing hair, and also more extravagantly to color the fingers and toes of married women and create intricate designs. It is applied to the hands and feet of young Somali women in preparation for their weddings and for Islamic celebrations, and is sometimes also worn by young schoolgirls for special occasions.
Spain
Henna was cultivated in the Nasrid kingdom of Granada and applied to the face and hair by both sexes. After the Castilian conquest of Granada (1492), it was forbidden for Moriscos, as it was a sign distinguishing them from Old Christians. After the expulsion of the Moriscos (1609–1614), cultivation ceased.
Sudan
In Sudan, henna dyes are regarded with a special sanctity, and for that reason they are always present during happy occasions: weddings and children's circumcisions in particular. Henna has been part of Sudan's social and cultural heritage ever since the days of Sudan's ancient civilizations, when would-be couples had their hands and feet pigmented with this natural dye. Children also have their hands and feet dyed with henna during their circumcision festivity.
Tunisia
In Tunisia, the traditional wedding process begins eight days before the wedding ceremony, when a basket containing henna is delivered to the bride. The mother of the groom supervises the process in order to ensure all is being done correctly.
Today, the groom accompanies the bride in the ritual at the henna party, but the majority of the henna painting is done on the bride's body.
Turkey
During the Victorian era, Turkey was a major exporter of henna for use in dyeing hair. Henna parties were commonly practiced in Turkey, similarly to those in Arab countries.
Yemen
For Yemenite Jews, most of whom now live in Israel, the purpose of a henna party is to ward off evil from the couple before their wedding. In some areas, the party has evolved from a tradition into an opportunity for the family to show off their wealth in the dressing of the bride; in other communities, it is practiced as a ritual that has been passed on for generations. The dressing of the bride is typically done by a post-menopausal woman in the bride's family. Often, the dresser sings to the bride as she is dressed in exquisite designs. These songs discuss marriage and what married life is like, and address the feelings a bride may have before her wedding. The costumes worn by Yemenite brides to their henna parties are considered some of the most exquisite attire in the Yemenite community. These outfits include robes, headwear, and often several pounds of silver jewelry; the jewelry often holds fresh green herbs to ward off the jinn, in keeping with the ritual element of the party. The zavfa is the procession of the bride from her mother's house to the henna party. During the zavfa, the guests of the party sing traditional songs to the bride and bang on tin plates and drums to ward off evil. Today, it is common for the groom to join in this aspect of the ritual, although traditionally it was only for the bride. During the party, guests eat, sing, and dance. Initially, the singing and dancing were meant to ward off the jinn with loud noises, but today these elements are associated with the mitzvah of entertaining the bride and groom on their wedding day. In the middle of the party, the bride returns to her home to be painted in henna mixed by her mother.
The mixture consists of rose water, eggs, cognac, salt, and shadab, believed to be a magical herb that repels evil. The bride changes into a less elaborate outfit, and incense is burned while she is painted with henna. Then another zavfa (procession) occurs as the bride returns to her party. Back at the henna party, the bride sits on stage while family members and friends come up to her to have their palms marked with blots of henna. These marks represent the long-lasting marriage, as henna remains on the skin for many days; they also represent the blood from breaking the hymen upon consummating the marriage on the wedding night. Others add that the red stain on the hands of the guests is to mislead the evil spirits of the jinn who are looking for the bride. After the painting, the party ends, having lasted about four or five hours.
Health effects
Henna is known to be dangerous to people with glucose-6-phosphate dehydrogenase deficiency (G6PD deficiency), which is more common in males than females. Infants and children of particular ethnic groups, mainly from the Middle East and North Africa, are especially vulnerable. Though user accounts cite few other negative effects of natural henna paste, save for occasional mild allergic reactions (often associated with lemon juice or essential oils in a paste rather than the henna itself), pre-mixed commercial henna body art pastes may have undisclosed ingredients added to darken the stain or to alter its color. The health risks involved in pre-mixed paste can be significant. The United States Food and Drug Administration (FDA) considers these undisclosed ingredients to be adulterants and therefore illegal for use on skin. Some commercial pastes have been noted to include: p-phenylenediamine, sodium picramate, amaranth (red dye #2, banned in the US in 1976), silver nitrate, carmine, pyrogallol, disperse orange dye, and chromium.
These have been found to cause allergic reactions, chronic inflammatory reactions, or late-onset allergic reactions to hairdressing products and textile dyes. The U.S. FDA has not approved henna for direct application to the skin. It is, however, grandfathered in as a hair dye and can only be imported for that purpose. Henna imported into the U.S. that appears to be intended for use as body art is subject to seizure, but prosecution is rare. Adulterated commercial henna products often claim to be 100% natural on their packaging in order to pass import regulations in other countries.
Black henna
Natural henna produces a rich red-brown stain which can darken in the days after it is first applied and last for several weeks. It is sometimes referred to as "red henna" to differentiate it from products sold as "black henna" or "neutral henna", which may not actually contain henna, but are instead made from other plants or dyes. Black henna powder may be derived from indigo (from the plant Indigofera tinctoria). It may also contain unlisted dyes and chemicals such as para-phenylenediamine (PPD), which can stain skin black quickly but can cause severe allergic reactions and permanent scarring if left on for more than 2–3 days. The FDA specifically forbids the use of PPD for this purpose, and may prosecute those who produce black henna. Artists who injure clients with black henna in the U.S. may be sued for damages. The name arose from imports of plant-based hair dyes into the West in the late 19th century. Partly fermented, dried indigo was called black henna because it could be used in combination with henna to dye hair black. This gave rise to the belief that there was such a thing as black henna which could dye skin black, though indigo will not dye skin black. Pictures of indigenous people with black body art (either alkalized henna or from some other source) also fed this belief. Neutral henna does not change the colour of hair.
This is not henna powder; it is usually the powder of the plant Senna italica (often referred to by the synonym Cassia obovata) or of closely related Cassia and Senna species.
para-phenylenediamine
In the 1990s, henna artists in Africa, India, Bali, the Arabian Peninsula and the West began to experiment with PPD-based black hair dye, applying it as a thick paste as they would apply henna, in an effort to find something that would quickly make jet-black temporary body art. PPD can cause severe allergic reactions, with blistering, intense itching, permanent scarring, and permanent chemical sensitivities; estimates of the rate of allergic reactions range between 3% and 15%. Henna does not cause these injuries. Black henna made with PPD can cause lifelong sensitization to coal tar derivatives, while black henna made with gasoline, kerosene, lighter fluid, paint thinner, or benzene has been linked to adult acute leukemia. The most frequent serious health consequence of having a black henna temporary tattoo is sensitization to hair dye and related chemicals. If a person has had a black henna tattoo and later dyes their hair with chemical hair dye, the allergic reaction may be life-threatening and require hospitalization. Because of the epidemic of PPD allergic reactions, chemical hair dye products now carry warnings on their labels: "Temporary black henna tattoos may increase your risk of allergy. Do not colour your hair if: ... – you have experienced a reaction to a temporary black henna tattoo in the past." PPD is illegal for use on skin in Western countries, though enforcement is difficult. Physicians have urged governments to legislate against black henna because of the frequency and severity of injuries, especially to children. To assist in the prosecution of vendors, government agencies encourage citizens to report injuries and illegal use of PPD black henna.
When used in hair dye, the PPD amount must be below 6%, and application instructions warn that the dye must not touch the scalp and must be quickly rinsed away. Black henna pastes, by contrast, have PPD percentages from 10% to 80%, and are left on the skin for half an hour. PPD black henna use is widespread, particularly in tourist areas. Because the blistering reaction appears 3 to 12 days after the application, most tourists have left by then and do not return to show how much damage the artist has done. This permits artists to continue injuring others, unaware that they are causing severe injuries. The high profit margins of black henna and the demand for body art that emulates "tribal tattoos" further encourage artists to deny the dangers. It is not difficult to recognize and avoid PPD black henna: if a paste stains skin on the torso black in less than half an hour, it has PPD in it, and if the paste is mixed with peroxide, or if peroxide is wiped over the design to bring out the color, it has PPD in it. Anyone who has an itching and blistering reaction to a black body stain should see a doctor and report that they have had an application of PPD to their skin. PPD sensitivity is lifelong. A person who has become sensitized through black henna tattoos may have future allergic reactions to perfumes, printer ink, chemical hair dyes, textile dyes, photographic developer, sunscreen and some medications. A person who has had a black henna tattoo should consult their physician about the health consequences of PPD sensitization.
NASA
The National Aeronautics and Space Administration (NASA) is an independent agency of the US federal government responsible for the United States' civil space program, aeronautics research, and space research. Established in 1958, it succeeded the National Advisory Committee for Aeronautics (NACA) to give the US space development effort a distinct civilian orientation, emphasizing peaceful applications in space science. It has since led most of America's space exploration programs, including Project Mercury, Project Gemini, the 1968–1972 Apollo Moon landing missions, the Skylab space station, and the Space Shuttle. Currently, NASA supports the International Space Station (ISS) along with the Commercial Crew Program, and oversees the development of the Orion spacecraft and the Space Launch System for the lunar Artemis program. NASA's science division is focused on better understanding Earth through the Earth Observing System; advancing heliophysics through the efforts of the Science Mission Directorate's Heliophysics Research Program; exploring bodies throughout the Solar System with advanced robotic spacecraft such as New Horizons and planetary rovers such as Perseverance; and researching astrophysics topics, such as the Big Bang, through the James Webb Space Telescope, the four Great Observatories, and associated programs. The Launch Services Program oversees launch operations for NASA's uncrewed launches.
History
Creation
NASA traces its roots to the National Advisory Committee for Aeronautics (NACA). Although the United States was the birthplace of aviation, by 1914 it recognized that it was far behind Europe in aviation capability. Determined to regain American leadership in aviation, the United States Congress created the Aviation Section of the US Army Signal Corps in 1914 and established NACA in 1915 to foster aeronautical research and development.
Over the next forty years, NACA would conduct aeronautical research in support of the US Air Force, US Army, US Navy, and the civil aviation sector. After the end of World War II, NACA became interested in the possibilities of guided missiles and supersonic aircraft, developing and testing the Bell X-1 in a joint program with the US Air Force. NACA's interest in space grew out of its rocketry program at the Pilotless Aircraft Research Division. The Soviet Union's launch of Sputnik 1 ushered in the Space Age and kicked off the Space Race. Despite NACA's early rocketry program, the responsibility for launching the first American satellite fell to the Naval Research Laboratory's Project Vanguard, whose operational issues ensured that the Army Ballistic Missile Agency would instead launch Explorer 1, America's first satellite, on February 1, 1958. The Eisenhower administration decided to split the United States' military and civil spaceflight programs, which had been organized together under the Department of Defense's Advanced Research Projects Agency. NASA was established on July 29, 1958, with the signing of the National Aeronautics and Space Act, and it began operations on October 1, 1958. As the US's premier aeronautics agency, NACA formed the core of NASA's new structure, contributing 8,000 employees and three major research laboratories. NASA also proceeded to absorb the Naval Research Laboratory's Project Vanguard, the Army's Jet Propulsion Laboratory (JPL), and the Army Ballistic Missile Agency under Wernher von Braun. This left NASA firmly as the United States' civil space lead and the Air Force as the military space lead.
First orbital and hypersonic flights
Plans for human spaceflight began in the US Armed Forces prior to NASA's creation. The Air Force's Man in Space Soonest project, formed in 1956, coupled with the Army's Project Adam, served as the foundation for Project Mercury.
NASA established the Space Task Group to manage the program, which would conduct crewed sub-orbital flights with the Army's Redstone rockets and orbital flights with the Air Force's Atlas launch vehicles. While NASA intended for its first astronauts to be civilians, President Eisenhower directed that they be selected from the military. The Mercury 7 astronauts included three Air Force pilots, three Navy aviators, and one Marine Corps pilot. On May 5, 1961, Alan Shepard became the first American to enter space, performing a suborbital spaceflight in the Freedom 7. This flight occurred less than a month after the Soviet cosmonaut Yuri Gagarin became the first human in space, executing a full orbital spaceflight. NASA's first orbital spaceflight was conducted by John Glenn on February 20, 1962, in the Friendship 7, making three full orbits before reentering; Glenn had to fly parts of his final two orbits manually due to an autopilot malfunction. The sixth and final Mercury mission was flown by Gordon Cooper in May 1963, performing 22 orbits over 34 hours in the Faith 7. The Mercury program was widely recognized as a resounding success, achieving its objectives to orbit a human in space, develop tracking and control systems, and identify other issues associated with human spaceflight. While much of NASA's attention turned to space, it did not set aside its aeronautics mission. Early aeronautics research attempted to build upon the X-1's supersonic flight to create an aircraft capable of hypersonic flight. The North American X-15 was a joint NASA–US Air Force program, with the hypersonic test aircraft becoming the first non-dedicated spacecraft to cross from the atmosphere into outer space. The X-15 also served as a testbed for Apollo program technologies, as well as for ramjet and scramjet propulsion.
Moon landing
Escalations in the Cold War between the United States and the Soviet Union prompted President John F.
Kennedy to charge NASA with landing an American on the Moon and returning him safely to Earth by the end of the 1960s, and he installed James E. Webb as NASA administrator to achieve this goal. On May 25, 1961, President Kennedy openly declared this goal in his "Urgent National Needs" speech to the United States Congress, declaring: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth." Kennedy gave his "We choose to go to the Moon" speech the next year, on September 12, 1962, at Rice University, where he addressed the nation in the hope of reinforcing public support for the Apollo program. Despite attacks on the goal of landing astronauts on the Moon from former president Dwight Eisenhower and 1964 presidential candidate Barry Goldwater, President Kennedy was able to protect NASA's growing budget, of which 50% went directly to human spaceflight. It was later estimated that, at its height, 5% of Americans worked on some aspect of the Apollo program. Mirroring the program management concept the Department of Defense had used in building the first intercontinental ballistic missiles, NASA requested that the Air Force assign Major General Samuel C. Phillips to the space agency, where he would serve as the director of the Apollo program. Development of the Saturn V rocket was led by Wernher von Braun and his team at the Marshall Space Flight Center, derived from the Army Ballistic Missile Agency's original Saturn I. The Apollo spacecraft was designed and built by North American Aviation, while the Apollo Lunar Module was designed and built by Grumman. To develop the spaceflight skills and equipment required for a lunar mission, NASA initiated Project Gemini. Using a modified Air Force Titan II launch vehicle, the Gemini capsule could hold two astronauts for flights of over two weeks. Gemini pioneered the use of fuel cells instead of batteries, and conducted the first American spacewalks and rendezvous operations.
The Ranger program was started in the late 1950s as a response to Soviet lunar exploration; however, most of its missions ended in failure. The Lunar Orbiter program had greater success, mapping the surface in preparation for Apollo landings, conducting meteoroid detection, and measuring radiation levels. The Surveyor program conducted uncrewed lunar landings and takeoffs, as well as taking surface and regolith observations. Despite the setback caused by the Apollo 1 fire, which killed three astronauts, the program proceeded. Apollo 8 was the first crewed spacecraft to leave low Earth orbit and the first human spaceflight to reach the Moon. The crew orbited the Moon ten times on December 24 and 25, 1968, and then traveled safely back to Earth. The three Apollo 8 astronauts—Frank Borman, James Lovell, and William Anders—were the first humans to see the Earth as a globe in space, the first to witness an Earthrise, and the first to see and manually photograph the far side of the Moon. The first lunar landing was conducted by Apollo 11. Commanded by Neil Armstrong with astronauts Buzz Aldrin and Michael Collins, Apollo 11 was one of the most significant missions in NASA's history, marking the end of the Space Race when the Soviet Union gave up its lunar ambitions. As the first human to step on the surface of the Moon, Neil Armstrong uttered the now famous words: "That's one small step for man, one giant leap for mankind." NASA would conduct six lunar landings in total as part of the Apollo program, with Apollo 17 concluding the program in 1972.

End of Apollo

Wernher von Braun had advocated for NASA to develop a space station since the agency was created. In 1973, following the end of the Apollo lunar missions, NASA launched its first space station, Skylab, on the final launch of the Saturn V. Skylab reused a significant amount of Apollo and Saturn hardware, with a repurposed Saturn V third stage serving as the primary module for the space station.
Damage to Skylab during its launch required spacewalks to be performed by the first crew to make it habitable and operational. Skylab hosted three crewed missions and was decommissioned in 1974 and deorbited in 1979, two years prior to the first launch of the Space Shuttle and any possibility of boosting its orbit. In 1975, the Apollo–Soyuz mission saw a US Apollo spacecraft dock with a Soviet Soyuz capsule; the first ever international spaceflight, it was a major diplomatic accomplishment between the Cold War rivals and also marked the last flight of the Apollo capsule.

Interplanetary exploration and space science

During the 1960s, NASA started its space science and interplanetary probe program. The Mariner program was its flagship program, launching probes to Venus, Mars, and Mercury in the 1960s. The Jet Propulsion Laboratory was the lead NASA center for robotic interplanetary exploration, making significant discoveries about the inner planets. Despite these successes, Congress was unwilling to fund further interplanetary missions, and NASA Administrator James Webb suspended all future interplanetary probes to focus resources on the Apollo program. Following the conclusion of the Apollo program, NASA resumed launching interplanetary probes and expanded its space science program. The first planet targeted for exploration was Venus, which shares many characteristics with Earth. First visited by the American Mariner 2 spacecraft, Venus was observed to be a hot and inhospitable planet. Follow-on missions included the Pioneer Venus project in the 1970s and Magellan, which performed radar mapping of Venus's surface in the 1980s and 1990s. Later missions flew by Venus on their way to other destinations in the Solar System. Mars has long been a planet of intense fascination for NASA, being suspected of potentially having harbored life. Mariner 4 was the first NASA spacecraft to fly by Mars, followed by Mariner 6 and Mariner 7.
Mariner 9 was the first orbital mission to Mars. Launched in 1975, the Viking program consisted of two landings on Mars in 1976. Follow-on missions would not be launched until 1996, with the Mars Global Surveyor orbiter and Mars Pathfinder, which deployed the first Mars rover, Sojourner. During the early 2000s, the 2001 Mars Odyssey orbiter reached the planet, and in 2004 the Spirit and Opportunity rovers landed on the Red Planet. This was followed in 2005 by the Mars Reconnaissance Orbiter and in 2007 by the Phoenix Mars lander. The 2012 landing of Curiosity discovered that radiation levels on Mars were equal to those on the International Space Station, greatly increasing the possibility of human exploration, and observed key chemical ingredients required for life. In 2013, the Mars Atmosphere and Volatile Evolution (MAVEN) mission observed the Martian upper atmosphere and space environment, and in 2018 the Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (InSight) lander studied the Martian interior. The 2021 Perseverance rover carried the first extraplanetary aircraft, a helicopter named Ingenuity. NASA also returned to Mercury in 2004 with the MESSENGER probe, which demonstrated the first use of solar sailing techniques for trajectory adjustments. NASA also launched probes to the outer Solar System starting in the 1960s. Pioneer 10 was the first probe to the outer planets, flying by Jupiter, while Pioneer 11 provided the first close-up views of Saturn. Both probes became the first human-made objects on trajectories leaving the Solar System. The Voyager program launched in 1977, conducting flybys of Jupiter, Saturn, Uranus, and Neptune on trajectories leaving the Solar System. The Galileo spacecraft, deployed from Space Shuttle flight STS-34, was the first spacecraft to orbit Jupiter, discovering evidence of a subsurface ocean on the moon Europa, which may hold ice or liquid water.
A joint NASA–European Space Agency–Italian Space Agency mission, Cassini–Huygens, was sent to Saturn's moon Titan, which, along with Mars and Europa, is one of the few celestial bodies in the Solar System suspected of being capable of harboring life. Cassini discovered three new moons of Saturn, and the Huygens probe entered Titan's atmosphere. The mission discovered evidence of liquid hydrocarbon lakes on Titan and a subsurface water ocean on the moon Enceladus, which could harbor life. Launched in 2006, the New Horizons mission was the first spacecraft to visit Pluto and the Kuiper Belt. Beyond interplanetary probes, NASA has launched many space telescopes. Launched in the 1960s, the Orbiting Astronomical Observatory satellites were NASA's first orbital telescopes, providing ultraviolet, gamma-ray, X-ray, and infrared observations. NASA launched the Orbiting Geophysical Observatory satellites in the 1960s and 1970s to look down at Earth and observe its interactions with the Sun. The Uhuru satellite was the first dedicated X-ray telescope, mapping 85% of the sky and discovering a large number of black holes. Launched in the 1990s and early 2000s, the telescopes of the Great Observatories program are among NASA's most powerful. The Hubble Space Telescope was launched in 1990 on STS-31 from the Discovery and could view galaxies 15 billion light-years away. A major defect in the telescope's mirror could have crippled the program had NASA not used computer enhancement to compensate for the imperfection and launched five Space Shuttle servicing flights to replace the damaged components. The Compton Gamma Ray Observatory was launched from the Atlantis on STS-37 in 1991, discovering a possible source of antimatter at the center of the Milky Way and observing that the majority of gamma-ray bursts occur outside of the Milky Way galaxy. The Chandra X-ray Observatory was launched from the Columbia on STS-93 in 1999, observing black holes, quasars, supernovae, and dark matter.
It provided critical observations of the Sagittarius A* black hole at the center of the Milky Way galaxy and of the separation of dark and regular matter during galactic collisions. Finally, the Spitzer Space Telescope is an infrared telescope launched in 2003 on a Delta II rocket. It follows the Earth in a trailing orbit around the Sun and discovered the existence of brown dwarf stars. Other telescopes, such as the Cosmic Background Explorer and the Wilkinson Microwave Anisotropy Probe, provided evidence to support the Big Bang. The James Webb Space Telescope, named after the NASA administrator who led the Apollo program, is an infrared observatory launched in 2021. The James Webb Space Telescope is a direct successor to the Hubble Space Telescope, intended to observe the formation of the first galaxies. Other space telescopes include the Kepler space telescope, launched in 2009 to identify planets orbiting extrasolar stars that may be terrestrial and possibly harbor life. The first exoplanet that the Kepler space telescope confirmed was Kepler-22b, orbiting within the habitable zone of its star. NASA also launched a number of different satellites to study Earth, such as the Television Infrared Observation Satellite (TIROS) in 1960, the first weather satellite. NASA and the United States Weather Bureau cooperated on later TIROS satellites and the second-generation Nimbus program of weather satellites. NASA also worked with the Environmental Science Services Administration on a series of weather satellites, and the agency launched its experimental Applications Technology Satellites into geosynchronous orbit. NASA's first dedicated Earth observation satellite, Landsat, was launched in 1972. This led to NASA and the National Oceanic and Atmospheric Administration jointly developing the Geostationary Operational Environmental Satellite and discovering ozone depletion.
Space Shuttle

NASA had been pursuing spaceplane development since the 1960s, blending the administration's dual aeronautics and space missions. NASA viewed a spaceplane as part of a larger program, providing routine and economical logistical support to a space station in Earth orbit that would be used as a hub for lunar and Mars missions. A reusable launch vehicle would end the need for expensive, expendable boosters like the Saturn V. In 1969, NASA designated the Johnson Space Center as the lead center for the design, development, and manufacturing of the Space Shuttle orbiter, while the Marshall Space Flight Center would lead the development of the launch system. NASA's series of lifting-body aircraft, culminating in the joint NASA–US Air Force Martin Marietta X-24, directly informed the development of the Space Shuttle and future hypersonic aircraft. Official development of the Space Shuttle began in 1972, with Rockwell International contracted to design the orbiter and engines, Martin Marietta for the external fuel tank, and Morton Thiokol for the solid rocket boosters. NASA acquired six orbiters: the Enterprise, Columbia, Challenger, Discovery, Atlantis, and Endeavour. The Space Shuttle program also allowed NASA to make major changes to its Astronaut Corps. While almost all previous astronauts were Air Force or Navy test pilots, the Space Shuttle allowed NASA to begin recruiting more non-military scientific and technical experts. A prime example is Sally Ride, who became the first American woman to fly in space on STS-7. This new astronaut selection process also allowed NASA to accept exchange astronauts from US allies and partners for the first time. The first Space Shuttle flight occurred in 1981, when the Columbia launched on the STS-1 mission, designed to serve as a flight test for the new spaceplane.
NASA intended for the Space Shuttle to replace expendable launch systems like the Air Force's Atlas, Delta, and Titan and the European Space Agency's Ariane. The Space Shuttle's Spacelab payload, developed by the European Space Agency, increased the scientific capabilities of shuttle missions beyond anything NASA had previously been able to accomplish. NASA launched its first commercial satellites on the STS-5 mission, and in 1984 the STS-41-C mission conducted the world's first on-orbit satellite servicing when the Challenger captured and repaired the malfunctioning Solar Maximum Mission satellite. The Space Shuttle also had the capability to return malfunctioning satellites to Earth, as it did with the Palapa B2 and Westar 6 satellites. Once returned to Earth, the satellites were repaired and relaunched. Despite ushering in a new era of spaceflight, in which NASA was contracting launch services to commercial companies, the Space Shuttle was criticized for not being as reusable and cost-effective as advertised. In 1986, the Challenger disaster on the STS-51-L mission resulted in the loss of the spacecraft and all seven astronauts during launch, grounding the entire Space Shuttle fleet for 36 months and forcing the 44 commercial companies that had contracted with NASA to deploy their satellites to return to expendable launch vehicles. When the Space Shuttle returned to flight with the STS-26 mission, it had undergone significant modifications to improve its reliability and safety. Following the collapse of the Soviet Union, the Russian Federation and the United States initiated the Shuttle–Mir program. The first Russian cosmonaut flew on the STS-60 mission in 1994, and the Discovery rendezvoused with, but did not dock to, the Russian space station Mir on the STS-63 mission. This was followed by Atlantis's STS-71 mission, which accomplished the original intended mission for the Space Shuttle: docking with a space station and transferring supplies and personnel.
The Shuttle–Mir program would continue until 1998, when a series of orbital accidents on the space station spelled an end to the program. In 2003, a second orbiter was lost when the Columbia broke apart during reentry on the STS-107 mission, resulting in the loss of the spacecraft and all seven astronauts. The accident marked the beginning of the end for the Space Shuttle program, with President George W. Bush directing that the Space Shuttle be retired upon the completion of the International Space Station. In 2006, the Space Shuttle returned to flight, conducting several more missions, including a final flight to service the Hubble Space Telescope, and was retired following the STS-135 resupply mission to the International Space Station in 2011.

Space stations

NASA never gave up on the idea of a space station after Skylab's reentry in 1979. The agency began lobbying politicians to support building a larger space station as soon as the Space Shuttle began flying, selling it as an orbital laboratory, a repair station, and a jumping-off point for lunar and Mars missions. NASA found a strong advocate in President Ronald Reagan, who declared in his 1984 State of the Union address: "Tonight, I am directing NASA to develop a permanently manned space station and to do it within a decade." In 1985, NASA proposed the Space Station Freedom, which both the agency and President Reagan intended to be an international program. While this would add legitimacy to the program, there were concerns within NASA that the international component would dilute its authority within the project, as the agency had never been willing to work with domestic or international partners as true equals. There was also concern about sharing sensitive space technologies with the Europeans, which had the potential to dilute America's technical lead. Ultimately, an international agreement to develop the Space Station Freedom program would be signed with thirteen countries in 1985, including the European Space Agency member states, Canada, and Japan.
Despite its status as the first international space program, the Space Station Freedom was controversial, with much of the debate centering on cost. Several redesigns to reduce cost were conducted in the early 1990s, stripping away many of its functions. Despite calls for Congress to terminate the program, it continued, in large part because by 1992 it had created 75,000 jobs across 39 states. In 1993, President Bill Clinton attempted to significantly reduce NASA's budget and directed that costs be significantly reduced, that aerospace industry jobs not be lost, and that the Russians be included. That year, the Clinton Administration announced that the Space Station Freedom would become the International Space Station in an agreement with the Russian Federation. This allowed the Russians to maintain their space program through an infusion of American currency, preserving their status as one of the two premier space programs. While the United States built and launched the majority of the International Space Station, Russia, Canada, Japan, and the European Space Agency all contributed components. Despite NASA's insistence that costs would be kept to a budget of $17.4 billion, they kept rising, and NASA had to transfer funds from other programs to keep the International Space Station solvent. Ultimately, the total cost of the station was $150 billion, with the United States paying for two-thirds. Following the Space Shuttle Columbia disaster in 2003, NASA was forced to rely on Russian Soyuz launches for its astronauts, and the 2011 retirement of the Space Shuttle accelerated the station's completion. In the 1980s, right after the first flight of the Space Shuttle, NASA had started a joint program with the Department of Defense to develop the Rockwell X-30 National Aerospace Plane. NASA realized that the Space Shuttle, while a massive technological accomplishment, would not be able to live up to all its promises.
Designed to be a single-stage-to-orbit spaceplane, the X-30 had both civil and military applications. With the end of the Cold War, the X-30 was canceled in 1992 before reaching flight status.

Unleashing commercial space and return to the Moon

Following the Space Shuttle Columbia disaster in 2003, President Bush started the Constellation program to smoothly replace the Space Shuttle and expand space exploration beyond low Earth orbit. Constellation was intended to reuse a significant amount of former Space Shuttle equipment and return astronauts to the Moon, but the program was canceled by the Obama Administration. Former astronauts Neil Armstrong, Gene Cernan, and Jim Lovell sent a letter to President Barack Obama warning him that without a new human spaceflight capability, the United States risked becoming a second- or third-rate space power. As early as the Reagan Administration, there had been calls for NASA to expand private-sector involvement in space exploration rather than do it all in-house. In the 1990s, NASA and Lockheed Martin entered into an agreement to develop the Lockheed Martin X-33, a demonstrator for the VentureStar spaceplane, which was intended to replace the Space Shuttle. Due to technical challenges, the program was cancelled in 2001. Despite this, it was the first time a commercial space company had directly invested a significant amount of its own resources in spacecraft development. The advent of space tourism also forced NASA to challenge its assumption that only governments would fly people in space. The first space tourist was Dennis Tito, an American investment manager and former aerospace engineer who contracted with the Russians to fly to the International Space Station for four days, despite NASA's opposition to the idea.
Advocates of this new commercial approach for NASA included former astronaut Buzz Aldrin, who remarked that it would return NASA to its roots as a research and development agency, with commercial entities actually operating the space systems. Having corporations take over orbital operations would also allow NASA to focus all its efforts on deep space exploration, returning humans to the Moon, and going to Mars. Embracing this approach, NASA started by contracting commercial cargo delivery to the International Space Station and later flew its first operational contracted crew mission, SpaceX Crew-1, under the Commercial Crew Program. This marked the first time since the retirement of the Space Shuttle that NASA was able to launch its own astronauts on an American spacecraft from the United States, ending a decade of reliance on the Russians. In 2019, NASA announced the Artemis program, intending to return to the Moon and establish a permanent human presence. This was paired with the Artemis Accords, signed with partner nations to establish rules of behavior and norms of space commercialization on the Moon. In 2023, NASA established the Moon to Mars Program office, which oversees the various projects, mission architectures, and associated timelines relevant to lunar and Mars exploration and science.

Active programs

Human spaceflight

International Space Station (1993–present)

The International Space Station (ISS) combines NASA's Space Station Freedom project with the Russian Mir-2 station, the European Columbus station, and the Japanese Kibō laboratory module. NASA originally planned in the 1980s to develop Freedom alone, but US budget constraints led to the merger of these projects into a single multinational program in 1993, managed by NASA, the Russian Federal Space Agency (RKA), the Japan Aerospace Exploration Agency (JAXA), the European Space Agency (ESA), and the Canadian Space Agency (CSA).
The station consists of pressurized modules, external trusses, solar arrays, and other components, which were manufactured in various factories around the world and launched by Russian Proton and Soyuz rockets and the American Space Shuttle. On-orbit assembly began in 1998; the US Orbital Segment was completed in 2009 and the Russian Orbital Segment in 2010. The ownership and use of the space station is established in intergovernmental treaties and agreements, which divide the station into two areas and allow Russia to retain full ownership of the Russian Orbital Segment (with the exception of Zarya), with the US Orbital Segment allocated between the other international partners. Long-duration missions to the ISS are referred to as ISS Expeditions. Expedition crew members typically spend approximately six months on the ISS. The initial expedition crew size was three, temporarily decreased to two following the Columbia disaster. From May 2009 until the retirement of the Space Shuttle, the expedition crew size was six. As of 2024, though the Commercial Crew Program's capsules can carry a crew of up to seven, expeditions using them typically consist of a crew of four. The ISS has been continuously occupied since November 2000, having exceeded the previous record held by Mir, and has been visited by astronauts and cosmonauts from 15 different nations. The station can be seen from the Earth with the naked eye and is the largest artificial satellite in Earth orbit, with a mass and volume greater than that of any previous space station. The Russian Soyuz and American Dragon and Starliner spacecraft are used to send astronauts to and from the ISS.
Several uncrewed cargo spacecraft provide service to the ISS: the Russian Progress spacecraft, which has done so since 2000; the European Automated Transfer Vehicle (ATV) since 2008; the Japanese H-II Transfer Vehicle (HTV) since 2009; the uncrewed Dragon since 2012; and the American Cygnus spacecraft since 2013. The Space Shuttle, before its retirement, was also used for cargo transfer and would often switch out expedition crew members, although it did not have the capability to remain docked for the duration of their stay. Between the retirement of the Shuttle in 2011 and the commencement of crewed Dragon flights in 2020, American astronauts exclusively used the Soyuz for crew transport to and from the ISS. The highest number of people occupying the ISS has been thirteen; this occurred three times during the late Shuttle ISS assembly missions. The ISS program is expected to continue until 2030, after which the space station will be retired and destroyed in a controlled de-orbit.

Commercial Resupply Services (2008–present)

Commercial Resupply Services (CRS) are a contract solution to deliver cargo and supplies to the International Space Station on a commercial basis by private companies. NASA signed its first CRS contracts in 2008, awarding $1.6 billion to SpaceX for twelve cargo Dragon flights and $1.9 billion to Orbital Sciences for eight Cygnus flights, covering deliveries through 2016. Both companies developed or adapted launch vehicles for the spacecraft (SpaceX with the Falcon 9 and Orbital with the Antares). SpaceX flew its first operational resupply mission (SpaceX CRS-1) in 2012. Orbital Sciences followed in 2014 (Cygnus CRS Orb-1). In 2015, NASA extended CRS-1 to twenty flights for SpaceX and twelve flights for Orbital ATK.
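The first-round CRS awards above imply rough per-flight prices. A back-of-the-envelope sketch (the award totals and flight counts are the contract figures quoted in this section; the per-flight values are simple division, not official NASA pricing):

```python
# Initial 2008 CRS awards: (total contract value in USD, contracted flights).
crs_awards = {
    "SpaceX (Dragon)": (1.6e9, 12),
    "Orbital Sciences (Cygnus)": (1.9e9, 8),
}

# Divide each award by its flight count to get an implied per-flight price.
for provider, (total_usd, flights) in crs_awards.items():
    per_flight = total_usd / flights
    print(f"{provider}: ${per_flight / 1e6:.0f}M per flight")
```

This yields roughly $133M per Dragon flight and $238M per Cygnus flight under the initial contracts, before the 2015 extensions changed the flight counts.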
A second phase of contracts (known as CRS-2) was solicited in 2014; contracts were awarded in January 2016 to Orbital ATK (Cygnus), Sierra Nevada Corporation (Dream Chaser), and SpaceX (Dragon 2) for cargo transport flights beginning in 2019 and expected to last through 2024. In March 2022, NASA awarded an additional six CRS-2 missions each to SpaceX and Northrop Grumman (formerly Orbital). Northrop Grumman successfully delivered Cygnus NG-17 to the ISS in February 2022. In July 2022, SpaceX launched its 25th CRS flight (SpaceX CRS-25) and successfully delivered its cargo to the ISS. The Dream Chaser spacecraft is currently scheduled for its Demo-1 launch in the first half of 2024.

Commercial Crew Program (2011–present)

The Commercial Crew Program (CCP) provides commercially operated crew transportation service to and from the International Space Station (ISS) under contract to NASA, conducting crew rotations between the expeditions of the International Space Station program. American space manufacturer SpaceX began providing service in 2020, using the Crew Dragon spacecraft, while Boeing's Starliner spacecraft began providing service in 2024. NASA has contracted for six operational missions from Boeing and fourteen from SpaceX, ensuring sufficient support for the ISS through 2030. The spacecraft are owned and operated by the vendor, and crew transportation is provided to NASA as a commercial service. Each mission sends up to four astronauts to the ISS, with an option for a fifth passenger available. Operational flights occur approximately once every six months, for missions that last approximately six months. A spacecraft remains docked to the ISS during its mission, and missions usually overlap by at least a few days. Between the retirement of the Space Shuttle in 2011 and the first operational CCP mission in 2020, NASA relied on the Soyuz program to transport its astronauts to the ISS.
A Crew Dragon spacecraft is launched to space atop a Falcon 9 Block 5 launch vehicle, and the capsule returns to Earth via splashdown in the ocean near Florida. The program's first operational mission, SpaceX Crew-1, launched on November 16, 2020. Boeing Starliner operational flights will commence with Boeing Starliner-1, which will launch atop an Atlas V N22 launch vehicle. Instead of a splashdown, Starliner capsules return on land with airbags at one of four designated sites in the western United States.

Artemis (2017–present)

Since 2017, NASA's crewed spaceflight program has been the Artemis program, which involves the help of US commercial spaceflight companies and international partners such as ESA, JAXA, and CSA. The goal of this program is to land "the first woman and the next man" in the lunar south pole region by 2025. Artemis would be the first step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for companies to build a lunar economy, and eventually sending humans to Mars. The Orion Crew Exploration Vehicle was held over from the canceled Constellation program for Artemis. Artemis I was the uncrewed initial launch of the Space Launch System (SLS), which also sent an Orion spacecraft into a distant retrograde orbit around the Moon. The first tentative step in returning to crewed lunar missions will be Artemis II, which is to include the Orion crew module, propelled by the SLS, and is to launch in 2025. This 10-day mission is planned to briefly place a crew of four into a lunar flyby. Artemis III aims to conduct the first crewed lunar landing since Apollo 17, and is scheduled for no earlier than September 2026. In support of the Artemis missions, NASA has been funding private companies to land robotic probes on the lunar surface in a program known as Commercial Lunar Payload Services.
As of March 2022, NASA has awarded contracts for robotic lunar probes to companies such as Intuitive Machines, Firefly Aerospace, and Astrobotic. On April 16, 2021, NASA announced it had selected the SpaceX Lunar Starship as its Human Landing System. The agency's Space Launch System rocket will launch four astronauts aboard the Orion spacecraft for their multi-day journey to lunar orbit, where they will transfer to SpaceX's Starship for the final leg of their journey to the surface of the Moon. In November 2021, it was announced that the goal of landing astronauts on the Moon by 2024 had slipped to no earlier than 2025 due to numerous factors. Artemis I launched on November 16, 2022, and returned to Earth safely on December 11, 2022. As of April 2024, NASA plans to launch Artemis II in September 2025 and Artemis III in September 2026. Additional Artemis missions, Artemis IV, Artemis V, and Artemis VI, are planned to launch between 2028 and 2031. NASA's next major space initiative is the construction of the Lunar Gateway, a small space station in lunar orbit. This space station will be designed primarily for non-continuous human habitation. The construction of the Gateway is expected to begin in 2027 with the launch of the first two modules: the Power and Propulsion Element (PPE) and the Habitation and Logistics Outpost (HALO). Operations on the Gateway will begin with the Artemis IV mission, which plans to deliver a crew of four to the Gateway in 2028. In 2017, NASA was directed by the congressional NASA Transition Authorization Act of 2017 to get humans to Mars orbit (or to the Martian surface) by the 2030s.

Commercial LEO Development (2021–present)

The Commercial Low Earth Orbit Destinations program is an initiative by NASA to support work on commercial space stations that the agency hopes to have in place by the end of the current decade to replace the International Space Station. The three selected companies are: Blue Origin (et al.)
with their Orbital Reef station concept, Nanoracks (et al.) with their Starlab Space Station concept, and Northrop Grumman with a station concept based on the HALO module for the Gateway station.

Robotic exploration

NASA has conducted many uncrewed and robotic spaceflight programs throughout its history. More than 1,000 uncrewed missions have been designed to explore the Earth and the Solar System.

Mission selection process

NASA executes a mission development framework to plan, select, develop, and operate robotic missions. This framework defines cost, schedule, and technical risk parameters to enable competitive selection of missions involving mission candidates that have been developed by principal investigators and their teams from across NASA, the broader US Government research and development stakeholders, and industry. The mission development construct is defined by four umbrella programs.

Explorer program

The Explorer program derives its origin from the earliest days of the US space program. In its current form, the program consists of three classes of systems: Small Explorers (SMEX), Medium Explorers (MIDEX), and University-Class Explorers (UNEX) missions. The NASA Explorer program office provides frequent flight opportunities for moderate-cost, innovative solutions in the heliophysics and astrophysics science areas. Small Explorer missions are required to limit cost to NASA to below $150M (2022 dollars). Medium-class Explorer missions have typically involved NASA cost caps of $350M. The Explorer program office is based at NASA Goddard Space Flight Center.

Discovery program

The NASA Discovery program develops and delivers robotic spacecraft solutions in the planetary science domain. Discovery enables scientists and engineers to assemble a team to deliver a solution against a defined set of objectives and competitively bid that solution against other candidate programs.
Cost caps vary, but recent mission selection processes were accomplished using a $500M cost cap for NASA. The Planetary Mission Program Office is based at the NASA Marshall Space Flight Center and manages both the Discovery and New Frontiers missions. The office is part of the Science Mission Directorate. NASA Administrator Bill Nelson announced on June 2, 2021, that the DAVINCI+ and VERITAS missions were selected to launch to Venus in the late 2020s, having beaten out competing proposals for missions to Jupiter's volcanic moon Io and Neptune's large moon Triton that were also selected as Discovery program finalists in early 2020. Each mission has an estimated cost of $500 million, with launches expected between 2028 and 2030. Launch contracts will be awarded later in each mission's development.

New Frontiers program

The New Frontiers program focuses on specific Solar System exploration goals identified as top priorities by the planetary science community. Primary objectives include Solar System exploration employing medium-class spacecraft missions to conduct high-science-return investigations. New Frontiers builds on the development approach employed by the Discovery program but provides for higher cost caps and longer schedule durations than are available with Discovery. Cost caps vary by opportunity; recent missions have been awarded based on a defined cap of $1 billion. The higher cost cap and projected longer mission durations result in a lower frequency of new opportunities for the program – typically one every several years. OSIRIS-REx and New Horizons are examples of New Frontiers missions. NASA has determined that the next opportunity to propose for the fifth round of New Frontiers missions will occur no later than the fall of 2024.
Large strategic missions

Large strategic missions (formerly called Flagship missions) are strategic missions that are typically developed and managed by large teams that may span several NASA centers. The individual missions become the program, as opposed to being part of a larger effort (see Discovery, New Frontiers, etc.). The James Webb Space Telescope is a strategic mission that was developed over a period of more than 20 years. Strategic missions are developed on an ad hoc basis as program objectives and priorities are established. Missions like Voyager, had they been developed today, would have been strategic missions. Three of the Great Observatories were strategic missions (the Chandra X-ray Observatory, the Compton Gamma Ray Observatory, and the Hubble Space Telescope). Europa Clipper is the next large strategic mission in development by NASA.

Planetary science missions

NASA continues to play a material role in exploration of the Solar System, as it has for decades. Ongoing missions have current science objectives with respect to more than five extraterrestrial bodies within the Solar System – the Moon (Lunar Reconnaissance Orbiter), Mars (Perseverance rover), Jupiter (Juno), asteroid Bennu (OSIRIS-REx), and Kuiper Belt objects (New Horizons). The Juno extended mission will make multiple flybys of the Jovian moon Io in 2023 and 2024, after flybys of Ganymede in 2021 and Europa in 2022. Voyager 1 and Voyager 2 continue to provide science data back to Earth while continuing on their outward journeys into interstellar space. On November 26, 2011, NASA's Mars Science Laboratory mission was successfully launched for Mars. The Curiosity rover successfully landed on Mars on August 6, 2012, and subsequently began its search for evidence of past or present life on Mars.
In September 2014, NASA's MAVEN spacecraft, which is part of the Mars Scout Program, successfully entered Mars orbit and, as of October 2022, continues its study of the atmosphere of Mars. NASA's ongoing Mars investigations include in-depth surveys of Mars by the Perseverance rover. NASA's Europa Clipper, launched in October 2024, will study the Galilean moon Europa through a series of flybys while in orbit around Jupiter. Dragonfly will send a mobile robotic rotorcraft to Saturn's biggest moon, Titan. As of May 2021, Dragonfly is scheduled for launch in June 2027.

Astrophysics missions

The NASA Science Mission Directorate Astrophysics division manages the agency's astrophysics science portfolio. NASA has invested significant resources in the development, delivery, and operations of various forms of space telescopes. These telescopes have provided the means to study the cosmos over a large range of the electromagnetic spectrum. The Great Observatories, launched between 1990 and 2003, have provided a wealth of observations for study by physicists across the planet. The first of them, the Hubble Space Telescope, was delivered to orbit in 1990 and continues to function, in part due to prior servicing missions performed by the Space Shuttle. The other remaining active Great Observatory is the Chandra X-ray Observatory (CXO), which was launched by STS-93 in July 1999 and is now in a 64-hour elliptical orbit studying X-ray sources that are not readily viewable from terrestrial observatories. The Imaging X-ray Polarimetry Explorer (IXPE) is a space observatory designed to improve the understanding of X-ray production in objects such as neutron stars and pulsar wind nebulae, as well as stellar and supermassive black holes. IXPE launched in December 2021 and is an international collaboration between NASA and the Italian Space Agency (ASI). It is part of the NASA Small Explorers program (SMEX), which designs low-cost spacecraft to study heliophysics and astrophysics.
The Neil Gehrels Swift Observatory was launched in November 2004 and is a gamma-ray burst observatory that also monitors the afterglow in X-ray and UV/visible light at the location of a burst. The mission was developed in a joint partnership between Goddard Space Flight Center (GSFC) and an international consortium from the United States, United Kingdom, and Italy. Pennsylvania State University operates the mission as part of NASA's Medium Explorer program (MIDEX). The Fermi Gamma-ray Space Telescope (FGST) is another gamma-ray-focused space observatory; it was launched to low Earth orbit in June 2008 and is being used to perform gamma-ray astronomy observations. In addition to NASA, the mission involves the United States Department of Energy and government agencies in France, Germany, Italy, Japan, and Sweden. The James Webb Space Telescope (JWST), launched in December 2021 on an Ariane 5 rocket, operates in a halo orbit circling the Sun–Earth L2 point. JWST's high sensitivity in the infrared spectrum and its imaging resolution will allow it to view more distant, faint, or older objects than its predecessors, including Hubble.

Earth Sciences Program missions (1965–present)

NASA Earth Science is a large, umbrella program comprising a range of terrestrial and space-based collection systems designed to better understand the Earth system and its response to natural and human-caused changes. Numerous systems have been developed and fielded over several decades to provide improved prediction for weather, climate, and other changes in the natural environment. Several of the current operating spacecraft programs include: Aqua, Aura, Orbiting Carbon Observatory 2 (OCO-2), Gravity Recovery and Climate Experiment Follow-on (GRACE FO), and Ice, Cloud, and land Elevation Satellite 2 (ICESat-2).
In addition to systems already in orbit, NASA is designing a new set of Earth Observing Systems to study, assess, and generate responses for climate change, natural hazards, forest fires, and real-time agricultural processes. The GOES-T satellite (designated GOES-18 after launch) joined the fleet of US geostationary weather monitoring satellites in March 2022. NASA also maintains the Earth Science Data Systems (ESDS) program to oversee the life cycle of NASA's Earth science data – from acquisition through processing and distribution. The primary goal of ESDS is to maximize the scientific return from NASA's missions and experiments for research and applied scientists, decision makers, and society at large. The Earth Science program is managed by the Earth Science Division of the NASA Science Mission Directorate.

Space operations architecture

NASA invests in various ground- and space-based infrastructures to support its science and exploration mandate. The agency maintains access to suborbital and orbital space launch capabilities and sustains ground station solutions to support its evolving fleet of spacecraft and remote systems.

Deep Space Network (1963–present)

The NASA Deep Space Network (DSN) serves as the primary ground station solution for NASA's interplanetary spacecraft and select Earth-orbiting missions. The system employs ground station complexes near Barstow, California; in Spain near Madrid; and in Australia near Canberra. The placement of these ground stations approximately 120 degrees apart around the planet allows communication with spacecraft throughout the Solar System even as the Earth rotates about its axis. The system is controlled from a 24/7 operations center at JPL in Pasadena, California, which manages recurring communications linkages with up to 40 spacecraft. The system is managed by the Jet Propulsion Laboratory.
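The coverage consequence of spacing the three complexes roughly 120 degrees apart can be illustrated with a small Python sketch. This is a simplified geometric check, not DSN scheduling logic: the longitudes are approximate, the target is assumed to be a distant spacecraft near the equatorial plane, and latitude and terrain masking are ignored; the 10-degree minimum elevation is an illustrative assumption.

```python
# Approximate longitudes (degrees east) of the three DSN complexes:
# Goldstone (California), Madrid (Spain), Canberra (Australia).
STATIONS = {"Goldstone": -117.0, "Madrid": -4.0, "Canberra": 149.0}

def visible(station_lon_deg, target_angle_deg, min_elevation_deg=10.0):
    """Toy visibility test: a very distant spacecraft in the equatorial
    plane is above a station's horizon when its direction is within
    (90 - min_elevation) degrees of the station's local meridian."""
    # Signed angular separation folded into [-180, 180), then abs().
    sep = abs((station_lon_deg - target_angle_deg + 180) % 360 - 180)
    return sep <= 90 - min_elevation_deg

# Check every degree of Earth's rotation for a coverage gap.
gaps = [a for a in range(360)
        if not any(visible(lon, a) for lon in STATIONS.values())]
print("coverage gaps (degrees of rotation):", gaps)  # empty: no gaps
```

Under these simplifying assumptions the list of gaps comes back empty, which is the point of the 120-degree spacing: at any rotation angle at least one complex can see the spacecraft.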
Near Space Network (1983–present)

The Near Space Network (NSN) provides telemetry, commanding, ground-based tracking, data, and communications services to a wide range of customers with satellites in low Earth orbit (LEO), geosynchronous orbit (GEO), highly elliptical orbits (HEO), and lunar orbits. The NSN combines ground station and antenna assets from the Near-Earth Network with the Tracking and Data Relay Satellite System (TDRS), which operates in geosynchronous orbit providing continuous real-time coverage for launch vehicles and low Earth orbit NASA missions. The NSN consists of 19 ground stations worldwide operated by the US Government and by contractors including Kongsberg Satellite Services (KSAT), Swedish Space Corporation (SSC), and the South African National Space Agency (SANSA). The ground network averages between 120 and 150 spacecraft contacts a day, with TDRS engaging with systems on a near-continuous basis as needed; the system is managed and operated by the Goddard Space Flight Center.

Sounding Rocket Program (1959–present)

The NASA Sounding Rocket Program (NSRP) is located at the Wallops Flight Facility and provides launch capability, payload development and integration, and field operations support to execute suborbital missions. The program has been in operation since 1959 and is managed by the Goddard Space Flight Center using a combined US Government and contractor team. The NSRP team conducts approximately 20 missions per year from both Wallops and other launch locations worldwide to allow scientists to collect data "where it occurs". The program supports the strategic vision of the Science Mission Directorate, collecting important scientific data for earth science, heliophysics, and astrophysics programs. In June 2022, NASA conducted its first rocket launch from a commercial spaceport outside the US, launching a Black Brant IX from the Arnhem Space Centre in Australia.
Launch Services Program (1990–present)

The NASA Launch Services Program (LSP) is responsible for procurement of launch services for NASA uncrewed missions and oversight of launch integration and launch preparation activity, providing added quality and mission assurance to meet program objectives. Since 1990, NASA has purchased expendable launch vehicle launch services directly from commercial providers, whenever possible, for its scientific and applications missions. Expendable launch vehicles can accommodate all types of orbit inclinations and altitudes and are ideal vehicles for launching Earth-orbit and interplanetary missions. LSP operates from Kennedy Space Center and falls under the NASA Space Operations Mission Directorate (SOMD).

Aeronautics Research

The Aeronautics Research Mission Directorate (ARMD) is one of five mission directorates within NASA, the other four being the Exploration Systems Development Mission Directorate, the Space Operations Mission Directorate, the Science Mission Directorate, and the Space Technology Mission Directorate. The ARMD is responsible for NASA's aeronautical research, which benefits the commercial, military, and general aviation sectors. ARMD performs its aeronautics research at four NASA facilities: Ames Research Center and Armstrong Flight Research Center in California, Glenn Research Center in Ohio, and Langley Research Center in Virginia.

NASA X-57 Maxwell aircraft (2016–present)

The NASA X-57 Maxwell is an experimental aircraft being developed by NASA to demonstrate the technologies required to deliver a highly efficient all-electric aircraft. The primary goal of the program is to develop and deliver all-electric technology solutions that can also achieve airworthiness certification with regulators. The program involves development of the system in several phases, or modifications, to incrementally grow the capability and operability of the system.
The initial configuration of the aircraft has now completed ground testing as it approaches its first flights. In mid-2022, the X-57 was scheduled to fly before the end of the year. The development team includes staff from the NASA Armstrong, Glenn, and Langley centers along with a number of industry partners from the United States and Italy.

Next Generation Air Transportation System (2007–present)

NASA is collaborating with the Federal Aviation Administration and industry stakeholders to modernize the United States National Airspace System (NAS). Efforts began in 2007 with a goal to deliver major modernization components by 2025. The modernization effort intends to increase the safety, efficiency, capacity, access, flexibility, predictability, and resilience of the NAS while reducing the environmental impact of aviation. The Aviation Systems Division of NASA Ames operates the joint NASA/FAA North Texas Research Station. The station supports all phases of NextGen research, from concept development to prototype system field evaluation. This facility has already transitioned advanced NextGen concepts and technologies to use through technology transfers to the FAA. NASA contributions also include development of advanced automation concepts and tools that provide air traffic controllers, pilots, and other airspace users with more accurate real-time information about the nation's traffic flow, weather, and routing. Ames' advanced airspace modeling and simulation tools have been used extensively to model the flow of air traffic across the US and to evaluate new concepts in airspace design, traffic flow management, and optimization.

Technology research

Nuclear in-space power and propulsion (ongoing)

NASA has made use of technologies such as the multi-mission radioisotope thermoelectric generator (MMRTG), which is a type of radioisotope thermoelectric generator used to power spacecraft.
Shortages of the required plutonium-238 have curtailed deep space missions since the turn of the millennium. An example of a spacecraft that was not developed because of a shortage of this material was New Horizons 2. In July 2021, NASA announced contract awards for development of nuclear thermal propulsion reactors. Three contractors will develop individual designs over 12 months for later evaluation by NASA and the US Department of Energy. NASA's space nuclear technologies portfolio is led and funded by its Space Technology Mission Directorate. In January 2023, NASA announced a partnership with the Defense Advanced Research Projects Agency (DARPA) on the Demonstration Rocket for Agile Cislunar Operations (DRACO) program to demonstrate an NTR engine in space, an enabling capability for NASA missions to Mars. In July 2023, NASA and DARPA jointly announced the award of $499 million to Lockheed Martin to design and build an experimental NTR rocket to be launched in 2027.

Other initiatives

Free Space Optics: NASA contracted a third party to study the feasibility of using Free Space Optics (FSO) to communicate with optical (laser) stations on the ground (OGS), called laser-com RF networks, for satellite communications.

Water Extraction from Lunar Soil: On July 29, 2020, NASA requested that American universities propose new technologies for extracting water from the lunar soil and developing power systems. The idea will help the space agency conduct sustainable exploration of the Moon.

In 2024, NASA was tasked by the US Government to create a time standard for the Moon. The standard is to be called Coordinated Lunar Time and is expected to be finalized in 2026.

Human Spaceflight Research (2005–present)

NASA's Human Research Program (HRP) is designed to study the effects of space on human health and also to provide countermeasures and technologies for human space exploration.
The medical effects of space exploration are reasonably limited in low Earth orbit or in travel to the Moon. Travel to Mars is significantly longer and deeper into space, and significant medical issues can result. These include bone density loss, radiation exposure, vision changes, circadian rhythm disturbances, heart remodeling, and immune alterations. In order to study and diagnose these ill effects, HRP has been tasked with identifying or developing small portable instrumentation with low mass, volume, and power to monitor the health of astronauts. To achieve this aim, on May 13, 2022, NASA and SpaceX Crew-4 astronauts successfully tested the rHEALTH ONE universal biomedical analyzer for its ability to identify and analyze biomarkers, cells, microorganisms, and proteins in a spaceflight environment.

Planetary Defense (2016–present)

NASA established the Planetary Defense Coordination Office (PDCO) in 2016 to catalog and track potentially hazardous near-Earth objects (NEOs), such as asteroids and comets, and to develop potential responses and defenses against these threats. The PDCO is chartered to provide timely and accurate information to the government and the public on close approaches by potentially hazardous objects (PHOs) and any potential for impact. The office functions within the Science Mission Directorate Planetary Science Division. The PDCO augmented prior cooperative actions between the United States, the European Union, and other nations, which had been scanning the sky for NEOs since 1998 in an effort called Spaceguard.

Near Earth object detection (1998–present)

Since the 1990s, NASA has run many NEO detection programs from Earth-based observatories, greatly increasing the number of objects that have been detected. Many asteroids are very dark, and those near the Sun are much harder to detect from Earth-based telescopes, which observe at night and thus face away from the Sun.
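The illumination geometry behind this detection difficulty can be sketched with a toy calculation for an idealized spherical body: the sunlit fraction of the disk facing an observer falls off with phase angle. This is a simplification for illustration only; real asteroid brightness also depends on albedo, distance, and surface scattering.

```python
import math

def illuminated_fraction(phase_angle_deg):
    """Fraction of an idealized spherical body's Earth-facing disk that is
    sunlit: 0 deg phase = fully lit ("full Moon"), 180 deg = fully dark."""
    return (1 + math.cos(math.radians(phase_angle_deg))) / 2

# An object opposite the Sun (phase ~0 deg) appears fully lit; an object
# between Earth and the Sun (phase near 180 deg) shows almost no lit face.
for phase in (0, 90, 180):
    print(phase, illuminated_fraction(phase))
```

Objects inside Earth's orbit are usually seen at large phase angles, so only a sliver of their surface reflects sunlight toward Earth-based telescopes.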
NEOs inside Earth's orbit also present only a partially lit face to observers on Earth, rather than the potentially "full Moon" appearance they would have when beyond the Earth and fully lit by the Sun. In 1998, the United States Congress gave NASA a mandate to detect 90% of near-Earth asteroids over 1 kilometer in diameter (those that threaten global devastation) by 2008. This initial mandate was met by 2011. In 2005, the original US Spaceguard mandate was extended by the George E. Brown, Jr. Near-Earth Object Survey Act, which calls for NASA to detect 90% of NEOs with diameters of 140 meters or greater by 2020 (compare to the 20-meter Chelyabinsk meteor that hit Russia in 2013). It is estimated that less than half of these have been found, but objects of this size hit the Earth only about once in 2,000 years. In January 2020, NASA officials estimated it would take 30 years to find all objects meeting the size criteria, more than twice the timeframe that was built into the 2005 mandate. In June 2021, NASA authorized the development of the NEO Surveyor spacecraft to reduce the projected duration to achieve the mandate down to 10 years.

Involvement in current robotic missions

NASA has incorporated planetary defense objectives into several ongoing missions. In 1999, NASA visited 433 Eros with the NEAR Shoemaker spacecraft, which entered orbit around the asteroid in 2000, closely imaging it with various instruments at that time. NEAR Shoemaker became the first spacecraft to successfully orbit and land on an asteroid, improving our understanding of these bodies and demonstrating our capacity to study them in greater detail. OSIRIS-REx used its suite of instruments to transmit radio tracking signals and capture optical images of Bennu during its study of the asteroid; these will help NASA scientists determine its precise position in the Solar System and its exact orbital path.
As Bennu has the potential for recurring approaches to the Earth–Moon system in the next 100–200 years, the precision gained from OSIRIS-REx will enable scientists to better predict the future gravitational interactions between Bennu and our planet and the resultant changes in Bennu's onward flight path. The WISE/NEOWISE mission was launched by NASA JPL in 2009 as an infrared-wavelength astronomical space telescope. In 2013, NASA repurposed it as the NEOWISE mission to find potentially hazardous near-Earth asteroids and comets; its mission has been extended into 2023. NASA and the Johns Hopkins Applied Physics Laboratory (JHAPL) jointly developed the first planetary-defense purpose-built satellite, the Double Asteroid Redirection Test (DART), to test possible planetary defense concepts. DART was launched in November 2021 by a SpaceX Falcon 9 from California on a trajectory designed to impact the asteroid Dimorphos. Scientists sought to determine whether an impact could alter the subsequent path of the asteroid, a concept that could be applied to future planetary defense. On September 26, 2022, DART hit its target. In the weeks following impact, NASA declared DART a success, confirming it had shortened Dimorphos' orbital period around Didymos by about 32 minutes, surpassing the pre-defined success threshold of 73 seconds. NEO Surveyor, formerly called the Near-Earth Object Camera (NEOCam) mission, is a space-based infrared telescope under development to survey the Solar System for potentially hazardous asteroids. The spacecraft is scheduled to launch in 2026.

Study of Unidentified Aerial Phenomena (2022–present)

In June 2022, the head of the NASA Science Mission Directorate, Thomas Zurbuchen, confirmed the start of NASA's UAP independent study team.
At a speech before the National Academies of Sciences, Engineering, and Medicine, Zurbuchen said the space agency would bring a scientific perspective to efforts already underway by the Pentagon and intelligence agencies to make sense of dozens of such sightings. He said it was "high-risk, high-impact" research that the space agency should not shy away from, even if it is a controversial field of study.

Collaboration

NASA Advisory Council

In response to the Apollo 1 accident, which killed three astronauts in 1967, Congress directed NASA to form an Aerospace Safety Advisory Panel (ASAP) to advise the NASA Administrator on safety issues and hazards in NASA's air and space programs. In the aftermath of the Shuttle Columbia disaster, Congress required that the ASAP submit an annual report to the NASA Administrator and to Congress. By 1971, NASA had also established the Space Program Advisory Council and the Research and Technology Advisory Council to provide the administrator with advisory committee support. In 1977, the latter two were combined to form the NASA Advisory Council (NAC). The NASA Authorization Act of 2014 reaffirmed the importance of ASAP.

National Oceanic and Atmospheric Administration (NOAA)

NASA and NOAA have cooperated for decades on the development, delivery, and operation of polar and geosynchronous weather satellites. The relationship typically involves NASA developing the space systems, launch solutions, and ground control technology for the satellites, and NOAA operating the systems and delivering weather forecasting products to users. Multiple generations of NOAA polar-orbiting platforms have operated to provide detailed imaging of weather from low altitude. Geostationary Operational Environmental Satellites (GOES) provide near-real-time coverage of the western hemisphere to ensure accurate and timely understanding of developing weather phenomena.
United States Space Force

The United States Space Force (USSF) is the space service branch of the United States Armed Forces, while the National Aeronautics and Space Administration (NASA) is an independent agency of the United States government responsible for civil spaceflight. NASA and the Space Force's predecessors in the Air Force have a long-standing cooperative relationship, with the Space Force supporting NASA launches out of Kennedy Space Center, Cape Canaveral Space Force Station, and Vandenberg Space Force Base, including range support and rescue operations from Task Force 45. NASA and the Space Force also partner on matters such as defending Earth from asteroids. Space Force members can be NASA astronauts, with Colonel Michael S. Hopkins, the commander of SpaceX Crew-1, commissioned into the Space Force from the International Space Station on December 18, 2020. In September 2020, the Space Force and NASA signed a memorandum of understanding formally acknowledging the joint role of both agencies. This new memorandum replaced a similar document signed in 2006 between NASA and Air Force Space Command.

US Geological Survey

The Landsat program is the longest-running enterprise for acquisition of satellite imagery of Earth. It is a joint NASA/USGS program. On July 23, 1972, the Earth Resources Technology Satellite was launched; it was eventually renamed Landsat 1 in 1975. The most recent satellite in the series, Landsat 9, was launched on September 27, 2021. The instruments on the Landsat satellites have acquired millions of images. The images, archived in the United States and at Landsat receiving stations around the world, are a unique resource for global change research and applications in agriculture, cartography, geology, forestry, regional planning, surveillance, and education, and can be viewed through the US Geological Survey (USGS) "EarthExplorer" website.
The collaboration between NASA and the USGS involves NASA designing and delivering the space system (satellite) solution and launching the satellite into orbit, with the USGS operating the system once in orbit. As of October 2022, nine satellites have been built, with eight of them successfully operating in orbit.

European Space Agency (ESA)

NASA collaborates with the European Space Agency on a wide range of scientific and exploration requirements. From participation in the Space Shuttle (the Spacelab missions) to major roles on the Artemis program (the Orion Service Module), ESA and NASA have supported the science and exploration missions of each agency. There are NASA payloads on ESA spacecraft and ESA payloads on NASA spacecraft. The agencies have developed joint missions in areas including heliophysics (e.g. Solar Orbiter) and astronomy (Hubble Space Telescope, James Webb Space Telescope). Under the Artemis Gateway partnership, ESA will contribute habitation and refueling modules, along with enhanced lunar communications, to the Gateway. NASA and ESA continue to advance cooperation on Earth science, including climate change, with agreements to cooperate on various missions including the Sentinel-6 series of spacecraft.

Japan Aerospace Exploration Agency (JAXA)

NASA and the Japan Aerospace Exploration Agency (JAXA) cooperate on a range of space projects. JAXA is a direct participant in the Artemis program, including the Lunar Gateway effort. JAXA's planned contributions to Gateway include I-Hab's environmental control and life support system, batteries, thermal control, and imagery components, which will be integrated into the module by the European Space Agency (ESA) prior to launch. These capabilities are critical for sustained Gateway operations during crewed and uncrewed time periods. JAXA and NASA have collaborated on numerous satellite programs, especially in areas of Earth science. NASA has contributed to JAXA satellites and vice versa.
Japanese instruments are flying on NASA's Terra and Aqua satellites, and NASA sensors have flown on previous Japanese Earth-observation missions. The NASA-JAXA Global Precipitation Measurement mission was launched in 2014 and includes both NASA- and JAXA-supplied sensors on a NASA satellite launched on a JAXA rocket. The mission provides frequent, accurate measurements of rainfall over the entire globe for use by scientists and weather forecasters.

Roscosmos

NASA and Roscosmos have cooperated on the development and operation of the International Space Station since September 1993. The agencies have used launch systems from both countries to deliver station elements to orbit. Astronauts and cosmonauts jointly maintain various elements of the station. Both countries provide access to the station via their launch systems, with Russia serving as the sole provider of crew and cargo delivery between the retirement of the Space Shuttle in 2011 and the commencement of NASA COTS and commercial crew flights. In July 2022, NASA and Roscosmos signed a deal to share space station flights, enabling crew from each country to ride on the systems provided by the other. Current geopolitical conditions in late 2022 make it unlikely that cooperation will be extended to other programs such as Artemis or lunar exploration.

Indian Space Research Organisation (ISRO)

In September 2014, NASA and the Indian Space Research Organisation (ISRO) signed a partnership to collaborate on and launch a joint radar mission, the NASA-ISRO Synthetic Aperture Radar (NISAR) mission. The mission is targeted to launch in 2024. NASA will provide the mission's L-band synthetic aperture radar, a high-rate communication subsystem for science data, GPS receivers, a solid-state recorder, and a payload data subsystem. ISRO will provide the spacecraft bus, the S-band radar, the launch vehicle, and associated launch services.
Artemis Accords The Artemis Accords have been established to define a framework for cooperation in the peaceful exploration and exploitation of the Moon, Mars, asteroids, and comets. The accords were drafted by NASA and the US State Department and are executed as a series of bilateral agreements between the United States and the participating countries. As of September 2022, 21 countries have signed the accords: Australia, Bahrain, Brazil, Canada, Colombia, France, Israel, Italy, Japan, the Republic of Korea, Luxembourg, Mexico, New Zealand, Poland, Romania, the Kingdom of Saudi Arabia, Singapore, Ukraine, the United Arab Emirates, the United Kingdom, and the United States. China National Space Administration The Wolf Amendment was passed into law by the US Congress in 2011 and prevents NASA from engaging in direct, bilateral cooperation with the Chinese government and China-affiliated organizations such as the China National Space Administration without explicit authorization from Congress and the Federal Bureau of Investigation. The law has since been renewed annually through its inclusion in appropriations bills. Management Leadership The agency's administration is located at NASA Headquarters in Washington, DC, and provides overall guidance and direction. Except under exceptional circumstances, NASA civil service employees are required to be US citizens. NASA's administrator is nominated by the President of the United States subject to the approval of the US Senate, and serves at the President's pleasure as a senior space science advisor. The current administrator is Janet Petro, who was appointed by President Donald Trump and has served since January 20, 2025. Strategic plan NASA operates with four FY2022 strategic goals. 
(1) Expand human knowledge through new scientific discoveries; (2) extend human presence to the Moon and on towards Mars for sustainable long-term exploration, development, and utilization; (3) catalyze economic growth and drive innovation to address national challenges; and (4) enhance capabilities and operations to catalyze current and future mission success. Budget NASA budget requests are developed by NASA and approved by the administration prior to submission to the US Congress. Authorized budgets are those that have been included in enacted appropriations bills that are approved by both houses of Congress and enacted into law by the US president. NASA fiscal year budget requests and authorized budgets are listed below. Organization NASA funding and priorities are developed through its six Mission Directorates. Center-wide activities such as the Chief Engineer and Safety and Mission Assurance organizations are aligned to the headquarters function. The MSD budget estimate includes funds for these HQ functions. The administration operates 10 major field centers, several of which manage additional subordinate facilities across the country. Each center is led by a director (data below valid as of December 23, 2024). Sustainability Environmental impact The exhaust gases produced by rocket propulsion systems, both in Earth's atmosphere and in space, can adversely affect the Earth's environment. Some hypergolic rocket propellants, such as hydrazine, are highly toxic prior to combustion, but decompose into less toxic compounds after burning. Rockets using hydrocarbon fuels, such as kerosene, release carbon dioxide and soot in their exhaust. Carbon dioxide emissions are insignificant compared to those from other sources: the quantity of liquid fuels the United States consumed per day in 2014 dwarfs the kerosene a single Falcon 9 rocket first stage burns per launch. 
Even if a Falcon 9 were launched every single day, it would represent only 0.006% of liquid fuel consumption (and carbon dioxide emissions) for that day. Additionally, the exhaust from LOX- and LH2-fueled engines, like the SSME, is almost entirely water vapor. NASA addressed environmental concerns with its canceled Constellation program in accordance with the National Environmental Policy Act in 2011. In contrast, ion engines use harmless noble gases like xenon for propulsion. An example of NASA's environmental efforts is the NASA Sustainability Base. Additionally, the Exploration Sciences Building was awarded the LEED Gold rating in 2010. On May 8, 2003, the Environmental Protection Agency recognized NASA as the first federal agency to directly use landfill gas to produce energy at one of its facilities: the Goddard Space Flight Center, Greenbelt, Maryland. In 2018, NASA, along with Sensor Coating Systems, Pratt & Whitney, Monitor Coating, and UTRC, launched the project CAUTION (CoAtings for Ultra High Temperature detectION). This project aims to extend the temperature range over which the Thermal History Coating can operate. The final goal of this project is improving the safety of jet engines as well as increasing efficiency and reducing CO2 emissions. Climate change NASA also researches and publishes on climate change. Its statements concur with the global scientific consensus that the climate is warming. Bob Walker, who has advised former US President Donald Trump on space issues, has advocated that NASA should focus on space exploration and that its climate study operations should be transferred to other agencies such as NOAA. Former NASA atmospheric scientist J. Marshall Shepherd countered that Earth science study was built into NASA's mission at its creation in the 1958 National Aeronautics and Space Act. NASA won the 2020 Webby People's Voice Award for Green in the category Web. 
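The fuel-consumption comparison in the environmental-impact discussion above can be sanity-checked with rough published figures. The numbers below are illustrative assumptions, not values from this article: US liquid-fuel consumption of about 19 million barrels per day in 2014 (the approximate EIA estimate) and a Falcon 9 first-stage RP-1 load of roughly 123,500 kg at a density of about 0.82 kg/L.

```python
# Rough sanity check of the "one Falcon 9 launch per day" comparison.
# All input figures are assumptions stated in the lead-in, not article data.

US_CONSUMPTION_BBL_PER_DAY = 19_000_000  # barrels/day, assumed 2014 figure
LITERS_PER_BARREL = 158.987              # one oil barrel in liters

FALCON9_RP1_KG = 123_500                 # assumed first-stage kerosene load
RP1_DENSITY_KG_PER_L = 0.82              # typical RP-1 density, assumed

us_liters_per_day = US_CONSUMPTION_BBL_PER_DAY * LITERS_PER_BARREL
falcon9_liters = FALCON9_RP1_KG / RP1_DENSITY_KG_PER_L

# Percentage of one day's national consumption burned by one first stage
share_pct = 100 * falcon9_liters / us_liters_per_day
print(f"Daily share of one Falcon 9 launch: {share_pct:.4f}%")
```

With these assumed inputs the result lands around 0.005%, the same order of magnitude as the 0.006% figure quoted above, which is the point of the comparison: a daily launch cadence would still be a rounding error in national fuel consumption.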
STEM Initiatives Educational Launch of Nanosatellites (ELaNa). Since 2011, the ELaNa program has provided opportunities for NASA to work with university teams to test emerging technologies and commercial off-the-shelf solutions by providing launch opportunities for the resulting CubeSats on NASA-procured launches. For example, two NASA-sponsored CubeSats launched in June 2022 on a Virgin Orbit LauncherOne vehicle as the ELaNa 39 mission. Cubes in Space. NASA started an annual competition in 2014 named "Cubes in Space". It is jointly organized by NASA and the global education company I Doodle Learning, with the objective of teaching school students aged 11–18 to design and build scientific experiments to be launched into space on a NASA rocket or balloon. On June 21, 2017, the world's smallest satellite, KalamSAT, was launched. Use of the metric system US law requires the International System of Units to be used in all US Government programs, "except where impractical". In 1969, Apollo 11 landed on the Moon using a mix of United States customary units and metric units. In the 1980s, NASA started the transition towards the metric system, but was still using both systems in the 1990s. On September 23, 1999, a mix-up between NASA's use of SI units and Lockheed Martin Space's use of US units resulted in the loss of the Mars Climate Orbiter. In August 2007, NASA stated that all future missions and explorations of the Moon would be done entirely using the SI system. This was done to improve cooperation with space agencies of other countries that already use the metric system. As of 2007, NASA is predominantly working with SI units, but some projects still use US units, and some, including the International Space Station, use a mix of both. Media presence NASA TV Approaching 40 years of service, the NASA TV channel airs content ranging from live coverage of crewed missions to video coverage of significant milestones for operating robotic spacecraft (e.g. 
rover landings on Mars) and domestic and international launches. The channel is delivered by NASA and is broadcast by satellite and over the Internet. The system initially started to capture archival footage of important space events for NASA managers and engineers and expanded as public interest grew. The Apollo 8 Christmas Eve broadcast while in orbit around the Moon was received by more than a billion people. NASA's video transmission of the Apollo 11 Moon landing was awarded a primetime Emmy in commemoration of the 40th anniversary of the landing. The channel is a product of the US Government and is widely available across many television and Internet platforms. NASAcast NASAcast is the official audio and video podcast of the NASA website. Created in late 2005, the podcast service contains the latest audio and video features from the NASA website, including NASA TV's This Week at NASA and educational materials produced by NASA. Additional NASA podcasts, such as Science@NASA, are also featured and give subscribers an in-depth look at content by subject matter. NASA EDGE NASA EDGE is a video podcast which explores different missions, technologies, and projects developed by NASA. The program was released by NASA on March 18, 2007, and more than 200 vodcasts have since been produced. It is a public outreach vodcast sponsored by NASA's Exploration Systems Mission Directorate and based out of the Exploration and Space Operations Directorate at Langley Research Center in Hampton, Virginia. The NASA EDGE team takes an insider's look at current projects and technologies from NASA facilities around the United States, depicted through on-scene broadcasts, computer animations, and personal interviews with top scientists and engineers at NASA. The show explores the contributions NASA has made to society as well as the progress of current projects in materials and space exploration. 
NASA EDGE vodcasts can be downloaded from the NASA website and from iTunes. In its first year of production, the show was downloaded over 450,000 times. The average download rate has since grown to more than 420,000 per month, with over one million downloads in December 2009 and January 2010. NASA and the NASA EDGE team have also developed interactive programs designed to complement the vodcast. The Lunar Electric Rover App allows users to drive a simulated Lunar Electric Rover between objectives, and it provides information about and images of the vehicle. The NASA EDGE Widget provides a graphical user interface for accessing NASA EDGE vodcasts, image galleries, and the program's Twitter feed, as well as a live NASA news feed. Astronomy Picture of the Day NASA+ In July 2023, NASA announced a new streaming service known as NASA+. It launched on November 8, 2023, and has live coverage of launches, documentaries, and original programs. According to NASA, it will be free of ads and subscription fees. It will be a part of the NASA app on iOS, Android, Amazon Fire TV, Roku, and Apple TV as well as on the web on desktop and mobile devices. Gallery
Technology
Programs and launch sites
null
6101870
https://en.wikipedia.org/wiki/Vaalbara
Vaalbara
Vaalbara is a hypothetical Archean supercontinent consisting of the Kaapvaal Craton (now in eastern South Africa) and the Pilbara Craton (now in north-western Western Australia). E. S. Cheney derived the name from the last four letters of each craton's name. The two cratons consist of continental crust dating from 2.7 to 3.6 Ga, which would make Vaalbara one of Earth's earliest supercontinents. Existence and lifespan There has been some debate as to when and even if Vaalbara existed. An Archaean–Palaeoproterozoic (2.8–2.1 Ga) link between South Africa and Western Australia was first proposed by A. Button in 1976. He found a wide range of similarities between the Transvaal Basin in South Africa and the Hamersley Basin in Australia. Button, however, placed Madagascar between Africa and Australia and concluded that Gondwana must have had a long stable tectonic history. Similarly, in the reconstructions of Rogers (1993, 1996), the oldest continent is Ur. In Rogers' reconstructions, however, Kaapvaal and Pilbara are placed far apart already in their Gondwana configuration, a reconstruction contradicted by later orogenic events and incompatible with the Vaalbara hypothesis. Cheney, nevertheless, found a three-fold stratigraphic similarity and proposed that the two cratons once formed a continent, which he named Vaalbara. This model is supported by palaeomagnetic data. Reconstructions of the palaeolatitudes of the two cratons at 2.78–2.77 Ga are ambiguous, however: in some reconstructions they fail to overlap, but they do in more recent ones. Other scientists dispute the existence of Vaalbara and explain similarities between the two cratons as the product of global processes. They point, for example, to thick volcanic deposits on other cratons such as Amazonia, São Francisco, and Karnataka. Zimgarn, another proposed supercraton composed of the Zimbabwe and Yilgarn cratons at 2.41 Ga, is distinct from Vaalbara. 
Zimgarn is thought to have disintegrated around 2.1–2.0 Ga and reassembled as the Kalahari and West Australian (Yilgarn and Pilbara) cratons around 1.95–1.8 Ga. The Archaean–Palaeoproterozoic Grunehogna Craton in Queen Maud Land, East Antarctica, formed the eastern part of the Kalahari Craton for at least a billion years. Grunehogna collided with the rest of East Antarctica during the Mesoproterozoic assembly of the supercontinent Rodinia and the Grenville orogeny. The Neoproterozoic Pan-African orogeny and the assembly of Gondwana/Pannotia produced large shear zones between Grunehogna and Kalahari. During the Jurassic break-up of Gondwana, these shear zones finally separated Grunehogna and the rest of Antarctica from Africa. In the Annandags Peaks in Antarctica, the only exposed parts of Grunehogna, detrital zircons from several crustal sources have been dated to 3.9–3.0 Ga, suggesting that intracrustal recycling played an important part in the formation of the first cratons. The Kaapvaal craton is marked by dramatic events such as the intrusion of the Bushveld Complex (2.045 Ga) and the Vredefort impact event (2.025 Ga); no traces of these events have been found in the Pilbara craton, clearly indicating that the two cratons were separated before 2.05 Ga. Furthermore, geochronological and palaeomagnetic evidence shows that the two cratons had a rotational 30° latitudinal separation in the time period of 2.78–2.77 Ga, which indicates they were no longer joined after c. 2.8 billion years ago. Vaalbara thus remained stable for 0.4–1 Ga and hence had a life span similar to that of later supercontinents such as Gondwana and Rodinia. Some palaeomagnetic reconstructions suggest a Palaeoarchaean proto-Vaalbara is possible, although the existence of this 3.6–3.2 Ga continent cannot be proven. Evidence South Africa's Kaapvaal craton and Western Australia's Pilbara craton have similar early Precambrian cover sequences. 
Kaapvaal's Barberton granite-greenstone terrane and Pilbara's eastern block show evidence of four large meteorite impacts between 3.2 and 3.5 billion years ago. Similar greenstone belts are found at the margins of the Superior Craton of Canada. The high temperatures created by the impacts' forces fused sediments into small glassy spherules. Spherules 3.5 billion years old exist in South Africa, and spherules of a similar age have been found in Western Australia; they are the oldest-known terrestrial impact products. The spherules resemble the glassy chondrules (rounded granules) in carbonaceous chondrites, which are found in carbon-rich meteorites and lunar soils. Remarkably similar lithostratigraphic and chronostratigraphic structural sequences between these two cratons have been noted for the period between 3.5 and 2.7 Ga. Palaeomagnetic data from two ultramafic complexes in the cratons showed that at 3.87 Ga the two cratons could have been part of the same supercontinent. Both the Pilbara and Kaapvaal cratons show extensional faults which were active about 3.47 Ga, during felsic volcanism, and coeval with the impact layers. Origin of life The Pilbara and Kaapvaal cratons contain well-preserved Archaean microfossils. Drilling has revealed traces of microbial life and photosynthesis from the Archaean in both Africa and Australia. The oldest widely accepted evidence of photosynthesis by early life forms is molecular fossils found in 2.7 Ga-old shales in the Pilbara Craton. These fossils have been interpreted as traces of eukaryotes and cyanobacteria, though some scientists argue that these biomarkers must have entered these rocks later and date the fossils to 2.15–1.68 Ga. This later time span agrees with estimates based on molecular clocks, which date the eukaryote last common ancestor at 1.8–1.7 Ga. If the Pilbara fossils are traces of early eukaryotes, they could represent groups that went extinct before modern groups emerged.
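The palaeolatitude reconstructions discussed above rest on a standard relationship: for a geocentric axial dipole field, the inclination I of a rock's remanent magnetization is tied to the latitude λ at which it formed by tan(I) = 2 tan(λ). The following sketch applies that textbook formula; it is an illustration of the method, not a calculation from this article's data.

```python
import math

def paleolatitude(inclination_deg: float) -> float:
    """Palaeolatitude (degrees) from remanent magnetic inclination,
    using the geocentric axial dipole formula tan(I) = 2 * tan(lat)."""
    inclination = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inclination) / 2))

# A remanent inclination of ~49 degrees corresponds to a palaeolatitude
# near 30 degrees, the scale of the 2.78-2.77 Ga latitudinal separation
# quoted above for the Kaapvaal and Pilbara cratons.
lat = paleolatitude(49.1)
print(f"{lat:.1f} degrees")
```

Because inclination varies steeply with latitude near the equator and flattens toward the poles, small measurement or overprint errors translate into different latitude uncertainties at different latitudes, which is one reason such reconstructions can remain ambiguous.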
Physical sciences
Paleogeography
Earth science
6103494
https://en.wikipedia.org/wiki/Slave%20Craton
Slave Craton
The Slave Craton is an Archaean craton in the north-western Canadian Shield, in the Northwest Territories and Nunavut. The Slave Craton includes the 4.03 Ga-old Acasta Gneiss, which is one of the oldest dated rocks on Earth. It is a relatively small but well-exposed craton dominated by ~2.73–2.63 Ga (billion-year-old) greenstones and turbidite sequences and ~2.72–2.58 Ga plutonic rocks, with large parts of the craton underlain by older gneiss and granitoid units. The Slave Craton is one of the blocks that compose the Precambrian core of North America, also known as the palaeocontinent Laurentia. The exposed portion of the craton, called the Slave Province, has an elliptical shape that stretches NNE from Gros Cap on the Great Slave Lake to Cape Barrow on the Coronation Gulf and EW along latitude 64°N. It is bounded by Palaeoproterozoic belts to the south, east, and west, while younger rocks cover it to the north. The Slave Craton is divided into a west-central basement complex, the Central Slave Basement Complex, and an eastern province, named the Hackett River Terrane or the Eastern Slave Province. These two domains are separated by a 2.7 Ga-old suture defined by two isotopic boundaries running north to south over the craton. Subdivisions Central Slave basement complex The Central Slave basement complex (CSBC) is the basement under the central and western part of the craton. The CSBC's eastern extent is unknown, as its disappearance is marked by Nd and Pb isotopic boundaries. The CSBC dips to the east and underlies at least the central part of the craton. Along the Acasta River the CSBC includes the Acasta Gneisses, with a protolith age of about 4.03 Ga, one of the oldest dated rock units on Earth. These gneisses are polymetamorphic and have a tonalitic and gabbroic composition. The rest of the CSBC is younger, with a central core <3.5 Ga and the remaining craton with detrital and protolith ages ranging from 3.4 to 2.8 Ga. 
The basement complex is overlain by Neoarchaean supracrustal sequences and intruded by plutonic suites. The Acasta gneisses are geochemically similar to other Archaean complexes but, at four billion years old, they contain even older zircon cores. These cores indicate that the parental magmas of such complexes formed by interaction between the zircon-bearing crust and mantle-derived melts. No such older rocks have been discovered yet, but the zircon cores indicate they could exist. Back River volcanic complex The Back River volcanic complex is an Archaean stratovolcano preserved in an upright position surrounded by four sedimentary sequences reflecting the volcano's magmatic history. An exposed dome in the southern half of the complex is interpreted to be the eroded portion of the volcano. In contrast to the remaining craton, the complex has only undergone a low degree of deformation. Yellowknife Supergroup The Yellowknife Supergroup, also known as the Yellowknife greenstone belt, was deposited over 300 million years from ca. 2.9–2.6 Ga, and directly overlies the CSBC, including much of the Eastern Slave Province. The CSBC and the Yellowknife greenstone belt are separated by a distinct unconformity that is laterally continuous over hundreds of kilometres. The Yellowknife Supergroup was exposed to major metamorphism around 2605 Ma, resulting in a range of greenschist to lower amphibolite facies. The Supergroup contains at least four distinct sequences representing different tectonic environments, deposited in separate intervals. The four main sequences are, from oldest to youngest, the Central Slave Cover Group, Kam Group, Banting Group, and Jackson Lake Formation. The Yellowknife Supergroup has been used to represent the general stratigraphy of the greenstone belts in the Slave Craton, including belts in the Eastern Slave, in order to interpret the processes involved in the evolution of the Slave Craton. 
Central Slave Cover Group The Neoarchean supracrustal sequence known as the Central Slave Cover Group (informally the Dwyer Group) is a 2.9–2.8 Ga package of fuchsitic quartzites overlain by banded iron formations. This fuchsitic quartzite sequence seems to be characteristic of many other cratons between about 3.1 and 2.8 Ga and marks a global peak in quartzite production. The Central Slave Cover Group is typically 100 to 200 meters thick. A quartz pebble conglomerate found at the base of the Central Slave Cover Group marks a distinct unconformity that is laterally continuous over much of the CSBC. This quartz pebble conglomerate layer has been found as far northwest as the 4.03 Ga Acasta Gneiss Complex. The Central Slave Cover Group is autochthonous and represents a single continuous cover sequence that links the basement complex in the northwest with the basement in the south-central Slave Province. Uniform, laterally continuous deposition implies that the CSBC was previously part of a single ancient craton that existed as early as 2.85 Ga. Kam Group The Kam Group is a 0.3–6 kilometre thick sequence that overlies banded iron formations of the Central Slave Cover Group. The contact between these two groups is not well preserved due to the intrusion of gabbro sills and moderate shearing. The Kam Group is separated into lower and upper groups based on the existence of a thin felsic volcaniclastic layer (Ranney Chert) dated at 2722 Ma. The lower Kam Group consists of the Chan Formation, which contains flows of pillowed basalts intruded by a series of gabbroic sills and dikes that were produced in an extensional back-arc basin setting. Sedimentary rocks exposed in the northern part of the formation are between 2.84 and 2.80 Ga. The Upper Kam Group contains three formations deposited between 2772 and 2701 Ma. It is composed mainly of intermediate and basaltic volcanic rocks with thin intercalated rhyolite tuff layers and minor komatiite flows. 
Rocks in this formation were seemingly formed in an arc environment and may be a result of rifting of basement rocks due to increased mantle plume activity. Banting Group The Banting Group is a north-striking sequence that is faulted over the older Kam Group and younger Jackson Lake Formation. The contact between the lower units and the Banting Group is a disconformity that represents a ~40 million year gap in deposition. The Banting Group contains siliceous to intermediate volcanic rocks that are typically calc-alkalic. The Banting Group is largely a result of post-2.7 Ga volcanism and sub-volcanic activity. A series of 2658 Ma quartz-feldspar intrusions related to the post-2.7 Ga volcanism found in the Banting Group occurs throughout the underlying Kam Group. Jackson Lake Formation Deposition of the Jackson Lake Formation began at 2605 Ma. The formation is a high-energy sedimentary deposit that overlies the volcanic rocks of the Kam Group. The deposit consists of polymict conglomerates and fluvial sandstones that have been subjected to a major metamorphic event, as evidenced by similarly oriented vertical dips and lineations found in older groups. Evolution of Slave Craton Earliest formation Information on the earliest formation of the Slave Craton may be found in the Acasta Gneiss Complex, but due to its complex history, poor preservation, and lack of exposure, much is still unknown about crust-forming processes in the Hadean and early Archean. Xenocrysts found within 3.94 Ga tonalitic gneisses of the Acasta Gneiss Complex have U–Pb dates of 4.2 Ga. These zircon xenocrysts originally crystallized in a granitic magma of crustal origin. Further evidence suggests that the 3.94 Ga tonalitic gneisses are at least partly derived from this 4.2 Ga granite magma, which indicates that crustal reworking was an important process in the Eoarchean. 
Zircons from this granite protolith show similarities to zircons from the Yilgarn Craton in Western Australia, and may be evidence for continental crust forming in the Hadean Eon. However, it is suggested that these two cratons have never been directly connected, which may indicate that the early Hadean crust was primarily continental granite. Trace isotope analyses show that these early granite rocks originated from a highly depleted mantle and suggest that large-scale differentiation occurred before ~4.0 Ga. These zircon crystals may be important in furthering the understanding of the earliest crustal formation processes, as little is yet known. Craton stabilization The overall stability of a craton is highly correlated with the presence of a strong and deep continental lithospheric mantle, because it protects the crust from thermal erosion and mitigates the effects of tectonism. The Slave Craton shows a long history of continental lithospheric mantle formation. Diamond formation is relatively extensive throughout the Slave Craton and requires a thick cratonic root. The oldest diamonds derived from the mantle are between 3.5 and 3.3 Ga, which suggests that the Slave protocraton had formed a thick crustal root by this time. The main stabilization of the Slave Craton occurred in the Neoarchean at ~2.75 Ga, as noted by an abundance of peridotite formation. The Kaapvaal Craton shows a similar peak age of growth, which may suggest that much of the Earth's continental lithospheric mantle was formed in the Neoarchean. The formation and stabilization of the continental lithospheric mantle and the evolution of the crust are closely related during the period between 2.8 and 2.0 Ga. Tectonic history Attempts to reconstruct the craton's tectonic history have focused extensively on the east–west asymmetry. The presence of a collisional suture suggests the CSBC collided with an island arc terrane along a north–south boundary before 2.69 Ga. 
Alternatively, the Eastern Slave may be an attenuated and modified Mesoarchaean lithosphere which developed during rifting at 2.85–2.70 Ga. The mantle lithosphere under the western Slave can be 400 Ma older than that underlying the eastern Slave. Furthermore, rifting is supported by the existence of younger arc or back-arc rocks that overlie the CSBC but make up most of the Eastern Slave. However, whether the Eastern Slave was the result of rifting or the accretion of another terrane is still debated. Following the 2.7 Ga rifting or accretion event, the Slave underwent large-scale extension at 2680 Ma, resulting in the formation of the >400 × 800 km Burwash Basin, widespread mafic sills, and other younger turbidites along the northwestern margin. The Burwash Basin consists of metamorphosed turbiditic sandstones and slates interspersed with thin felsic tuff layers. At 2634 Ma the Slave switched to a compressional regime and the Burwash Basin started to close, possibly due to shallow subduction from the NW or SE. By 2.6 Ga the Slave had collided with the much larger Sclavia, resulting in shortening and cross-folding over the craton. The presence of three rifted margins around the Slave, as well as similarly aged 3.3–3.5 Ga basement rocks, fuchsitic quartzite, and 2.9 Ga tonalites, suggests that the Dharwar, Zimbabwe, and Wyoming cratons were also part of Sclavia. The Slave broke away from Sclavia between 2.2 and 2.0 Ga, as noted by a host of dyke swarms at its margins. The Slave Craton drifted for approximately 200 million years before its accretion with the Rae Craton around 2.0–1.8 Ga in the Taltson–Thelon orogeny. The orogenic belt accreted smaller exotic terranes before the Slave was eventually subducted eastward under the Rae, resulting in a continental magmatic arc known as the Taltson magmatic zone. 
Continued eastward movement of the Slave Province, along with collision of the Hottah terrane on the western margin of the Slave, led to intense deformation of the Taltson magmatic zone. The Hottah terrane accreted with the Slave during the Wopmay orogeny at 1.88 Ga, shortly after the Thelon orogeny. This event produced another continental magmatic arc on the Slave's western margin, the Great Bear magmatic zone, as well as the Wopmay fault zone. The Wopmay fault zone consists of thin-skinned thrust belts that mark the suture between the Hottah terrane and the Slave Craton. These two orogenies emplaced the Slave Craton within Laurentia, where it is still found today.
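The U–Pb zircon ages quoted throughout this article (the 4.03 Ga Acasta Gneiss, the 2722 Ma Ranney Chert, the 2658 Ma intrusions) come from the standard radioactive-decay age equation t = ln(1 + D/P)/λ, where D/P is the ratio of radiogenic daughter to remaining parent. The sketch below applies it to the 238U → 206Pb system using the well-established decay constant; it illustrates the dating principle rather than any specific measurement from this article.

```python
import math

# Decay constant of 238U, per year (standard accepted value)
LAMBDA_U238 = 1.55125e-10

def u_pb_age(pb206_u238: float) -> float:
    """Age in years from the radiogenic 206Pb*/238U ratio:
    t = ln(1 + D/P) / lambda."""
    return math.log(1 + pb206_u238) / LAMBDA_U238

def ratio_for_age(age_years: float) -> float:
    """Inverse relation: the 206Pb*/238U ratio expected after a given age."""
    return math.exp(LAMBDA_U238 * age_years) - 1

# A 4.03 Ga zircon (the Acasta Gneiss age) implies a 206Pb*/238U ratio
# of roughly 0.87, i.e. almost as much radiogenic lead as remaining uranium.
r = ratio_for_age(4.03e9)
print(f"ratio = {r:.3f}, recovered age = {u_pb_age(r) / 1e9:.2f} Ga")
```

In practice, ages like these are cross-checked against the independent 235U → 207Pb system (concordia diagrams), which is why zircon U–Pb dates on rocks this old can be quoted with such confidence.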
Physical sciences
Geologic features
Earth science
4661656
https://en.wikipedia.org/wiki/Alkali%20salt
Alkali salt
Alkali salts or base salts are salts that are the product of incomplete neutralization of a strong base and a weak acid. Rather than being neutral (as some other salts), alkali salts are bases, as their name suggests. What makes these compounds basic is that the conjugate base from the weak acid hydrolyzes to form a basic solution. In sodium carbonate, for example, the carbonate from the carbonic acid hydrolyzes to form a basic solution. The chloride from the hydrochloric acid in sodium chloride does not hydrolyze, though, so sodium chloride is not basic. The difference between a basic salt and an alkali is that an alkali is the soluble hydroxide compound of an alkali metal or an alkaline earth metal, whereas a basic salt is any salt that hydrolyzes to form a basic solution. Another definition of a basic salt would be a salt that contains amounts of both hydroxide and other anions. White lead is an example: it is basic lead carbonate, or lead carbonate hydroxide. While alkali salts generally dissolve well in polar solvents, these hydroxide-containing basic salts are insoluble and are obtained through precipitation reactions. Examples Examples include sodium carbonate, sodium acetate, potassium cyanide, sodium sulfide, sodium bicarbonate, and sodium hydroxide. Alkaline salts 'Alkaline salts' are often the major component of alkaline dishwasher detergent powders. These salts may include alkali metasilicates, alkali metal hydroxides, sodium carbonate, and sodium bicarbonate. Examples of other strongly alkaline salts include sodium percarbonate, sodium persilicate (?), and potassium metabisulfite.
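The hydrolysis described above can be made quantitative. For the sodium carbonate example, the carbonate ion accepts a proton from water (CO3^2- + H2O ⇌ HCO3^- + OH-), and the base constant is Kb = Kw/Ka, where Ka is the relevant dissociation constant of carbonic acid (Ka2 ≈ 4.7 × 10^-11, a textbook value used here as an assumption). A simple approximation then gives the pH:

```python
import math

KW = 1.0e-14          # ion product of water at 25 C
KA2_CARBONIC = 4.7e-11  # second dissociation constant of carbonic acid (textbook value)

def ph_of_carbonate(concentration: float) -> float:
    """Approximate pH of a sodium carbonate solution of the given molarity,
    assuming [OH-] = sqrt(Kb * C), valid when only a small fraction hydrolyzes."""
    kb = KW / KA2_CARBONIC          # Kb of the carbonate ion = Kw / Ka2
    oh = math.sqrt(kb * concentration)
    poh = -math.log10(oh)
    return 14 - poh

# A 0.10 M Na2CO3 solution comes out distinctly basic, around pH 11.7,
# illustrating why sodium carbonate counts as an alkali salt.
print(f"pH = {ph_of_carbonate(0.10):.2f}")
```

Running the same calculation with the Ka of a strong acid's conjugate base (e.g. chloride) would give a Kb near zero and a pH of about 7, matching the article's sodium chloride contrast.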
Physical sciences
Salts and ions: General
Chemistry
1818157
https://en.wikipedia.org/wiki/Eider
Eider
The eiders are large seaducks in the genus Somateria. The three extant species all breed in the cooler latitudes of the Northern Hemisphere. The down feathers of eider ducks, and of some other ducks and geese, are used to fill pillows and quilts; they give their name to the type of quilt known as an eiderdown. Taxonomy The genus Somateria was introduced in 1819 by the English zoologist William Leach, in an appendix to John Ross's account of his voyage to look for the Northwest Passage, to accommodate the king eider. The name is derived from Ancient Greek sōma "body" (stem somat-) and erion "wool", referring to eiderdown. Steller's eider (Polysticta stelleri) is in a different genus despite its name. Species The genus contains three species. Two undescribed species are known from fossils, one from Middle Oligocene rocks in Kazakhstan and another from the Late Miocene or Early Pliocene of Lee Creek Mine, United States. The former may not actually belong in this genus.
Biology and health sciences
Anseriformes
Animals
1818801
https://en.wikipedia.org/wiki/Washingtonia%20filifera
Washingtonia filifera
Washingtonia filifera, the desert fan palm, California fan palm, or California palm, is a flowering plant in the palm family Arecaceae, native to the far southwestern United States and Baja California, Mexico. It is an evergreen monocot with a tree-like growth habit, a sturdy, columnar trunk, and waxy, fan-shaped (palmate) leaves. Names The Latin specific epithet filifera means "thread-bearing". Description Washingtonia filifera is a tall palm, growing taller still under ideal conditions. The California fan palm is also known as the desert fan palm, American cotton palm, and Arizona fan palm. The fronds consist of a long, thorned petiole bearing a fan of long leaflets. The leaflets have long, thread-like, white fibers between the segments, and the petioles are pure green with yellow edges. The trunk is gray and tan, and the leaves are gray-green. When the fronds die, they remain attached and drop down to cloak the trunk in a wide skirt. The shelter that the skirt creates provides a microhabitat for many small birds and invertebrates. Washingtonia filifera typically lives from 80 to 250 years or even more. Distribution Washingtonia filifera is the only palm native to the Western United States and one of the country's largest native palms, exceeded in height only by the Cuban or Florida royal palm. Primary populations are found in desert riparian habitats at spring-fed and stream-fed oases in the Colorado Desert and at a few scattered locations in the Mojave Desert. It is also found near watercourses in the Sonoran Desert along the Gila River in Yuma, along the Hassayampa River and near New River in Maricopa County, and in portions of Pima County, Pinal County, Mohave County (along the Colorado River), and several other isolated locations in Clark County, Nevada. 
In Mexico, it is native only to the state of Baja California, where it occurs in isolated canyons and oases as far south as Bahía de los Angeles. It is a naturalized species in the warm springs near Death Valley and in the extreme northwest of Sonora, Mexico. It is also reportedly naturalized in southern and southeastern Texas, Florida, Hawaii, extreme southwestern Utah, the U.S. Virgin Islands, and Australia, as well as in Morocco, Egypt, Iraq, Spain, and Italy.

Ecology

Desert fan palms provide habitat for the giant palm-boring beetle, western yellow bat, hooded oriole, and many other bird species. Hooded orioles rely on the trees for food and places to build nests. Numerous insect species visit the hanging inflorescences that appear in late spring. Natural oases are mainly restricted to areas downstream from the sources of hot springs, though water is not always visible at the surface. Today's oasis environment may have been protected from colder climatic changes over the course of its evolution; the palm is thus restricted by both water and climate to widely separated relict groves. The trees in these groves show little if any genetic differentiation (as determined by electrophoretic examination), suggesting that the genus is genetically very stable.

Fire adaptations

Fan palm oases have historically been subject to both natural and man-made fires. Fires are rarely fatal for the fan palm, but it is not completely immune to them. The fan palm's trunk is highly resistant to burning; in most cases, it risks losing only some of its outer vascular layers during a fire. After those layers are ignited and burnt off, the remaining surface is left heavily charred, which fortifies the trunk against future flames. Subsequent burnings char the trunk further, increasing its fire resistance still more.

The palm's fronds are the most flammable portion of the tree.
The unchecked buildup of dead fronds as a "skirt" around the trunk can be especially dangerous in a crown fire: a severe accumulation could constitute enough kindling to burn completely through the trunk, killing the tree. However, if a palm survives the burning of its fronds, they take time to regrow, leaving the tree less susceptible to fire in the meantime. Barring extreme, fatal conditions, fires are even conducive to the health and propagation of fan palms. The palms' reproduction benefits from burning, as fires help release saplings and clear away overgrowth from surrounding vegetation. Fires can also help palms conserve water by burning away their crowns and parts of their trunks, reducing surface area and therefore the rates of evaporation and transpiration.

Threats

Grazing animals can kill young plants through trampling, or by eating the growing tip at the apical meristem. This may have kept palms restricted to a smaller range than the availability of water would allow. The palm-boring beetle Dinapate wrightii (Bostrichidae) can chew through the trunks of this and other palms, and a continued infestation can eventually kill palms of various genera and species. W. filifera appears to be resistant to the red palm weevil (Rhynchophorus ferrugineus) through a mechanism of antibiosis: production of compounds lethal to the larvae. Currently, the desert fan palm is experiencing a population and range expansion, perhaps due to global warming or mustang control.

Uses

The sweet fruit pulp of the fan palm is edible. The fruit is eaten raw, cooked, or ground into flour for cakes by Native Americans. The Cahuilla and related tribes use the leaves to make sandals, roof thatch, and baskets, and the woody petioles to make cooking utensils.
The Moapa Band of Paiutes and other Southern Paiute people have written memories of using this palm's seeds, fruit, or leaves for various purposes, including as famine food. The bud (known as heart of palm) is also eaten.

Access

Joshua Tree National Park in the Mojave Desert preserves and protects healthy examples of riparian palm habitat in the Little San Bernardino Mountains and westward, where water rises through the San Andreas Fault on the east side of the valley. One such location is the Fortynine Palms Oasis. In the central Coachella Valley, other large oases are protected and accessible at the Indio Hills Palms State Reserve and the nearby Coachella Valley Preserve. The Santa Rosa and San Jacinto Mountains National Monument and Anza-Borrego Desert State Park both have large and diverse W. filifera canyon oasis habitats. In Arizona, Kofa National Wildlife Refuge hosts an accessible grove of this species.

Cultivation

Washingtonia filifera is widely cultivated as an ornamental tree. It is one of the hardiest coryphoid palms, rated as hardy to USDA hardiness zone 8. It can survive brief temperatures of with minor damage, and established plants have survived brief periods as low as , with severe leaf damage. The plants grow best in arid or Mediterranean climates, but can be found in humid subtropical climates such as eastern Australia and the southeastern United States. It has gained the Royal Horticultural Society's Award of Garden Merit.